
Macro trends in the tech industry | April 2025

Another edition of the Technology Radar is out, and with it comes our expanded view into the macro trends that informed our discussions during the Radar meeting, along with observations from the broader technology landscape. Mike Mason, our Chief AI Officer who previously authored this series, took some time from his busy schedule and is back as a co-author for this piece. He's pairing with Will Amaral, the current Tech Radar product owner, to provide extra insights beyond the current Radar edition.

The AI buzz isn't slowing down, and "vibe coding" is the new frontier

Excitement around AI remains strong, with new AI capabilities and use cases announced seemingly every week. We previously noted the exponential growth of AI-related tools, both for general-purpose use and for everyday software engineering. In the past six to twelve months, AI-powered coding assistants have moved beyond basic autocomplete; modern tools can now handle complex refactoring, understand entire codebases and even execute commands. In the last edition of the Radar we noted the emergence of "agentic" coding assistants – essentially AI programmers that undertake multi-step coding tasks based on high-level prompts – and this trend continues at pace today. Early products like Cursor, Cline and Windsurf lead the way in integrating these features into the IDE, and dozens of companies promise an "agentic" software development solution.

While all of this sounds promising, it's important to note that these tools work in a supervised fashion – the human developer stays "in the loop," guiding the AI and approving its actions. A recent example is "vibe coding" – a relaxed workflow where developers casually instruct AI through voice or chat prompts. The concept is appealing for its speed and informality, particularly for quick projects. However, the term quickly went viral, with some companies and startups claiming to use "vibe coding" exclusively for critical production code. This sparked discussions about responsible AI use, reinforcing the necessity of developer judgment and thorough code review in AI-assisted workflows. We remain skeptical of claims that software developers will be 100% replaced by AI: our own experiments showed Claude Code saving us 97% of the effort on its first try, then falling flat on its face for the next two attempts.

Enterprise intelligence: The AI-ification of the enterprise

AI is steadily weaving itself into the fabric of enterprise operations – not just as a tool to automate tasks, but as something that could fundamentally shift how organizations make decisions, manage risk and connect with customers. We're not all the way there yet, and transformation is still uneven. But more and more companies are starting to see the outlines of a future where AI isn't a layer on top of the business – it's baked into its core.

That raises an important question – not whether AI becomes foundational infrastructure, but how we prepare for it without getting caught flat-footed.

As this shift unfolds, quality assurance and governance are becoming more complex and more urgent. Traditional QA practices weren't built to handle things like model drift, hallucinations or unpredictable behavior. So we're seeing engineering teams begin to adopt model observability tools, eval frameworks and AI-specific testing practices – especially in industries where the cost of getting it wrong is high.
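To make "AI-specific testing practices" slightly more concrete, here's a minimal sketch of an eval harness: prompt cases with required phrases, run against a stubbed model. The model function and pass/fail rule are illustrative assumptions, not any particular framework's API.

```python
def contains_all(response: str, required: list[str]) -> bool:
    """Pass if every required phrase appears in the model's response."""
    return all(phrase.lower() in response.lower() for phrase in required)

def run_evals(model, cases):
    """Run each eval case through the model and record pass/fail."""
    results = []
    for case in cases:
        response = model(case["prompt"])
        results.append({"prompt": case["prompt"],
                        "passed": contains_all(response, case["must_include"])})
    return results

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call, so the sketch is self-contained.
    return "Paris is the capital of France."

cases = [
    {"prompt": "What is the capital of France?", "must_include": ["Paris"]},
    {"prompt": "Name the capital and the country.", "must_include": ["Paris", "France"]},
]

results = run_evals(fake_model, cases)
print(f"{sum(r['passed'] for r in results)}/{len(results)} eval cases passed")
```

Real eval frameworks add semantic scoring, regression tracking and human review queues on top of this basic loop, but the shape – versioned cases run against every model change – is the same.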

One of the trickier challenges emerging is what you might call "AI as shadow IT." Individual teams are spinning up their own tools – sometimes open source, sometimes SaaS – without going through official channels. It's easy to see why: these tools are accessible, powerful and often solve real problems. But they also introduce risk, creating a patchwork of AI usage with little oversight or consistency. Some enterprises are starting to respond with lightweight registries, usage tracking and flexible policy frameworks to get ahead of it. It's still early, but the intent is clear: enable innovation without losing the thread on governance.

There's also a bigger, less talked-about shift happening: AI is starting to reshape how organizations are designed. This isn't just about doing more, faster – it's about changing who does what, how decisions get made, and where accountability sits. Roles are blurring. Assumptions about trust and authority are being tested. And it's not just a tech issue – it touches leadership, HR and governance, too. Most companies aren't quite ready for how deep this could go.

At the team level, AI is prompting developers and designers to step back and ask: are we building for humans, or building for machines? As AI tooling gets better – code generation, design suggestions, automation – it's easy to default to speed. But some teams are pushing back, re-centering on product thinking and UX to make sure what we're building remains meaningful and sustainable. AI can accelerate delivery, but it shouldn't come at the cost of clarity or care.

The "AI-ification" of the enterprise isn't a tidal wave. It's more like a rising tide – quiet, persistent and shaping everything in its path. The organizations that adapt well won't just adopt new tools. They'll ask bigger questions – about structure, capability and trust – and use those answers to steer with intention.

Observability keeps complexity in check

Modern software systems are highly distributed and increasingly infused with AI components, making observability more critical (and more challenging) than ever. This edition of the Radar highlights a wave of innovation in the observability space aimed at keeping up with this complexity. First, as observability becomes increasingly important, much-needed standards are gaining traction. OpenTelemetry's adoption has surged; it's now one of the CNCF's fastest-growing projects, with contributions from over 200 organizations. It fosters a vendor-neutral ecosystem and, with the support of tools such as Alloy, Tempo and Loki, gives developers a wide range of choices and flexibility.

Another driving force behind observability is, of course, AI. Observability for AI and LLMs is a focal point with unique challenges. Tracking metrics and logs isn't enough to detect model drift, prompt failures and hallucinations. In response, new platforms such as Arize Phoenix, Helicone and Humanloop have emerged to trace and evaluate LLM calls. These tools record prompts, track model responses and help diagnose quality issues. As teams operationalize AI, this visibility is vital for trust and reliability – much like APM (application performance monitoring) was vital for microservices.
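As a rough illustration of what "recording prompts and tracking model responses" means in practice, here's a toy in-memory trace recorder. All names and fields are hypothetical; platforms like the ones above layer evaluation, search and dashboards on top of this kind of capture.

```python
import json
import time

class LLMTraceLog:
    """Toy trace recorder: captures each prompt, the model's response
    and latency so quality issues can be diagnosed after the fact."""
    def __init__(self):
        self.records = []

    def traced_call(self, model, prompt):
        start = time.perf_counter()
        response = model(prompt)
        elapsed_ms = (time.perf_counter() - start) * 1000
        self.records.append({"prompt": prompt,
                             "response": response,
                             "latency_ms": round(elapsed_ms, 2)})
        return response

def echo_model(prompt):
    # Stand-in for a real LLM call, so the sketch is self-contained.
    return f"echo: {prompt}"

log = LLMTraceLog()
log.traced_call(echo_model, "Summarize our Q1 incident report.")
print(json.dumps(log.records[0], indent=2))
```

Even this minimal capture – prompt, response, latency – is enough to replay a bad interaction later, which is the core of debugging LLM behavior.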

AI's influence on observability is also reciprocated by the integration of AI assistance into observability tools themselves. Given the massive scale of telemetry data (logs, metrics, traces) in cloud applications, operators increasingly rely on AI to detect anomalies and pinpoint issues faster than humans could. Major monitoring platforms now embed machine learning for anomaly detection, alert correlation and root-cause analysis; Weights & Biases' "Weave" is one example of this wave of tooling.
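A drastically simplified stand-in for that kind of anomaly detection is flagging telemetry points by z-score. Production platforms use far richer models over far more data, but the shape of the problem – surfacing the outlier a human would miss in a wall of numbers – is the same. The threshold and sample data here are illustrative.

```python
from statistics import mean, stdev

def detect_anomalies(series, threshold=2.5):
    """Return indices whose z-score exceeds the threshold - a toy
    stand-in for ML-based anomaly detection over telemetry."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Request latencies with one obvious spike at index 6.
latencies_ms = [120, 118, 125, 122, 119, 121, 990, 117, 123]
print(detect_anomalies(latencies_ms))  # [6]
```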

Beyond the AI spotlight

It's easy to get swept up in the excitement around AI – every week brings a new headline, a new breakthrough, or a bold prediction. But some of the most meaningful progress is happening in what we might call "traditional" software development. AI still hasn't cracked some of our biggest day-to-day frustrations – like the persistent quirks of cross-platform frameworks. And that's where we're seeing familiar tools quietly evolve in powerful ways.

Take command-line interfaces (CLIs), for example. Even with the rise of polished GUIs, chat-based tooling and auto-magic everything, CLI tools are not just sticking around – they're thriving. Developers keep coming back to them for their speed, control and transparency. And with modern tools like uv and MarkItDown, we're seeing a fresh generation of CLIs that feel both sophisticated and refreshingly simple. They're proof that the command line isn't fading into the background – it's adapting to stay essential.
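For flavor, here's the skeleton of a minimal single-purpose CLI in Python's standard argparse – a hypothetical tool, not a sketch of uv or MarkItDown themselves, but it shows the speed-and-transparency appeal: explicit flags, instant help text, trivially scriptable.

```python
import argparse

def build_parser():
    # A hypothetical one-purpose tool in the spirit of modern CLIs.
    parser = argparse.ArgumentParser(prog="wordcount",
                                     description="Count words in a string.")
    parser.add_argument("text", help="text to analyse")
    parser.add_argument("--unique", action="store_true",
                        help="count distinct words only")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    words = args.text.split()
    return len(set(words)) if args.unique else len(words)

print(run(["the quick brown fox the fox"]))              # 6
print(run(["--unique", "the quick brown fox the fox"]))  # 4
```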

We're also seeing some interesting shifts in programming languages. While newer entrants like Gleam are starting to gain traction, others like Swift are expanding their reach well beyond their original ecosystems. Swift, in particular, is carving out a role in resource-constrained environments – an area where performance, reliability and memory safety matter more than ever. It's a good reminder that developers are actively seeking out tools that balance modern safety features with real-world efficiency.

Solid ground in a shifting landscape

While AI dominates headlines and tooling – appearing in everything from code assistants to ops platforms – its ubiquity has thrown the spotlight back on the fundamentals: data quality and reliable systems. Without high-quality, well-managed data, even the most powerful AI models falter. And the core of software, ultimately, is still about how we store, manipulate and transform data into value.

In our conversations, a consistent theme emerged: organizations and researchers are rethinking how they manage, serve, and retrieve data. Retrieval-augmented generation (RAG) techniques are evolving fast, because effective retrieval is the bridge between general-purpose models and organization-specific intelligence. A massive model with outdated or irrelevant context is often less useful than a smaller one with timely, high-quality data. The frontier now includes improving relevance, traceability, and explainability to make RAG more reliable and transparent.
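The RAG shape described above can be sketched in a few lines: retrieve the most relevant context, then assemble it into the prompt. Real systems use embedding-based semantic retrieval; the word-overlap scoring here is purely illustrative, as are the documents and query.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query - a toy stand-in
    for the embedding-based retrieval real RAG systems use."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:top_k]

def build_prompt(query, documents):
    """Assemble the augmented prompt: retrieved context, then the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria serves lunch from noon to two.",
    "Refund requests must include the original receipt.",
]
print(build_prompt("What is the refund policy for returns?", docs))
```

The point the section makes falls out of the code: the model only ever sees what `retrieve` returns, so retrieval quality – relevance, freshness, traceability – bounds the quality of the answer.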

But these advances mean little if the underlying data isn't cared for. Scaling AI and analytics demands a solid data foundation. Increasingly, teams are treating data not as a backend artifact, but as a first-class product – with clear ownership, quality standards, documentation and a focus on usability. This "data product thinking" draws from concepts like data mesh, where domain teams are responsible for curating and maintaining discoverable, interoperable data assets.

In practice, a data product might be a customer 360 dataset, a risk-scoring pipeline or an internal dashboard – something designed, versioned and supported just like any software product. It has customers, provides value and evolves over time.
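That idea can be sketched in code: a data product as a versioned artifact with an owner, documentation and quality checks that gate publication. The field names and checks below are illustrative, not any standard data-mesh schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A dataset treated like software: owned, documented, versioned,
    with quality checks that gate publication. (Illustrative schema.)"""
    name: str
    owner: str
    version: str
    description: str
    quality_checks: list = field(default_factory=list)

    def is_publishable(self):
        # Publishable only if owned, documented and every check passes.
        return (bool(self.owner and self.description)
                and all(check() for check in self.quality_checks))

rows = [{"customer_id": 1, "email": "a@example.com"},
        {"customer_id": 2, "email": "b@example.com"}]

customer_360 = DataProduct(
    name="customer-360",
    owner="crm-team",
    version="1.4.0",
    description="Unified view of customer records across channels.",
    quality_checks=[lambda: all(r.get("email") for r in rows)],  # no missing emails
)
print(customer_360.is_publishable())  # True
```

The design choice worth noting is that ownership and quality are properties of the dataset itself, not of some downstream consumer – which is exactly the inversion data product thinking argues for.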

The message is clear: embrace the new, but don't neglect the foundations. The next era of software won't be built by AI alone – it will be shaped by teams that combine human creativity, machine intelligence and strong engineering discipline. That's where the real leverage lies.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of 魅影直播.
