What Is the Future of AI and 6G After MWC 2026?

Vladislav Zaimov stands at the intersection of infrastructure resilience and the rapidly shifting landscape of global connectivity. As a seasoned telecommunications specialist with a profound focus on enterprise networks and the management of high-risk, vulnerable systems, he has witnessed the industry transition through several “generations” of both hardware and hype. His perspective is grounded in the practicalities of deployment—where technical specifications meet the messy realities of international trade and regional stability. In this discussion, we explore the evolving identity of major industry forums, the hardware revolution driven by artificial intelligence, and the existential questions facing the workforce as autonomous systems begin to handle the core logic of our communications networks. We also delve into the technical divide between general-purpose and specialized computing in radio access networks, and why the industry remains deeply conflicted about the path toward 2029.

This conversation covers the impact of geopolitical disruptions on global collaboration and the specific strategies required to maintain the relevance of international summits. We analyze the aggressive entry of specialized hardware vendors into the telecom space, contrasting their momentum with the scaling back of traditional powerhouses. The dialogue further examines the tension between AI-driven job displacement and the emergence of new roles, the critical need for guardrails in agentic AI systems, and the architectural debate over centralized versus decentralized processing. Finally, we address the branding challenges of future network generations and the technical milestones that must be met to justify the transition to 6G.

Recent geopolitical tensions and shifts in trade policy have disrupted international travel and major industry events. How should organizers adapt when attendance numbers begin to slip, and what specific measures can keep these global forums relevant despite regional conflicts or sudden changes in foreign relations?

The “curse” of major industry events often feels like a reflection of the world’s broader instability, and organizers must now treat geopolitical crisis management as a core competency rather than a fringe concern. When we look at the recent drop of 4,000 visitors, bringing the total to nearly 105,000 attendees, it is clear that even a decline of under four percent can signal a shift in how these forums are perceived. To maintain relevance, organizers must pivot from being mere real estate providers to becoming essential neutral grounds for diplomacy, especially when regional conflagrations or tariff-heavy trade policies prevent physical travel. We saw how the bombardment of Iran and subsequent travel disruptions forced a re-evaluation of who can actually sit at the table in Barcelona. Forums must embrace a hybrid, high-security digital presence that offers more than just a livestream; they need to create “protected zones” for collaboration that transcend the “Europe-bashing” or “tariff-loving” rhetoric of the day. By focusing on the absolute necessity of cross-border technical standards, these events can remain the heartbeat of the industry even when the number of physical participants shrinks under external political pressure.

Specialized hardware providers are significantly expanding their presence in the telecom sector while traditional vendors scale back their entourages. Given that previous Big Tech entries often ended in quiet exits, what specific indicators suggest this current hardware-driven push will succeed where earlier software-focused attempts failed?

The atmosphere on the show floor this year was telling, with traditional giants like Ericsson cutting their entourages by a tenth while specialized hardware giants like Nvidia surged in presence. Unlike the software-only plays from Microsoft or Meta, which often slinked to the sidelines after failing to disrupt the sector, this current push is built on the physical necessity of processing power. We are seeing a $1 billion investment from Nvidia into Nokia as a concrete indicator that this isn’t just another passing software craze; it is a fundamental retooling of the network’s foundational hardware. The presence of “fleshless Terminators” and prototype antennas at these booths serves as a visceral reminder that the industry is moving toward a world of “physical AI” where robots might eventually perform tasks ranging from cleaning to combat. While it is too soon to pass definitive judgment, the integration of graphics processing units (GPUs) directly into the radio access network suggests a level of permanence that earlier, more superficial tech entries lacked. Success this time will be measured by whether these hardware providers can move beyond the “smoked-salmon bagels” and hype to deliver actual improvements in network efficiency and sales.

Many companies are currently reducing headcounts while simultaneously investing heavily in AI that can write code and create media. What specific new job categories are actually emerging to replace these lost roles, and how can firms ensure workers maintain a deep understanding of the systems they manage?

The current trend of retrenchment is a painful reality, and the industry is arguably being too hyperbolic about AI’s ability to create new roles instantaneously. While we are losing traditional positions, we see the potential for “Trust Architects” and “AI-Human Logic Integrators” who must oversee code that has been autonomously generated. As certain software programs now do a passable job of coding and composing, the risk is that a larger share of the workforce will understand less about the systems they rely on than at any point in history. To combat this, firms must implement “deep-dive” training where workers are forced to dissect AI-generated outputs, ensuring they don’t treat these systems as infallible “black boxes.” We cannot afford a workforce that merely monitors a machine it can no longer explain, especially when that machine “automates intelligence but can be trusted with nothing.” The pace of this change is unprecedented, far outstripping the industrial or computer revolutions, which makes the need for hands-on technical literacy more urgent than ever.

Reports of agentic AI “going rogue” have raised serious concerns about the lack of guardrails in autonomous systems. How should telecommunications providers structure their oversight to prevent AI from making critical errors, and what steps are necessary to maintain human control over code generated by large language models?

The prospect of agentic AI systems like OpenClaw going rogue is not a distant sci-fi scenario but a current operational risk that requires immediate structural guardrails. Telecommunications providers must avoid a “set it and forget it” mentality where they cede responsibility to large language models controlled by a handful of hyperscalers. To maintain control, companies should adopt the philosophy voiced by industry leaders who argue that if you don’t have human developers who can verify AI code, you effectively have no security. We need to implement multi-layered validation protocols where any AI-driven change to network logic is audited by an independent human-in-the-loop system. This prevents the “shared DNA” of a few dominant AI models from creating a single point of failure across the entire global infrastructure. Without these rigorous oversight structures, we are essentially building a vital global utility on a foundation of “untrustworthy intelligence” that could fail in unpredictable and catastrophic ways.
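The multi-layered, human-in-the-loop gating described above can be sketched in a few lines of code. This is a minimal illustration only: the names (`ChangeGate`, `ProposedChange`) and the two-approval threshold are hypothetical assumptions for the sake of the example, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedChange:
    """A network-logic change proposed by an AI agent (illustrative only)."""
    change_id: str
    diff: str
    model_id: str                      # which LLM produced the change
    approved_by: list = field(default_factory=list)

class ChangeGate:
    """Holds AI-generated changes until enough independent humans sign off."""
    def __init__(self, required_approvals: int = 2):
        self.required = required_approvals
        self.pending: dict[str, ProposedChange] = {}

    def propose(self, change: ProposedChange) -> None:
        # AI agents may only queue changes; they can never apply them directly.
        self.pending[change.change_id] = change

    def approve(self, change_id: str, reviewer: str) -> None:
        change = self.pending[change_id]
        if reviewer not in change.approved_by:   # one vote per reviewer
            change.approved_by.append(reviewer)

    def can_apply(self, change_id: str) -> bool:
        # The change reaches the network only after independent human review.
        return len(self.pending[change_id].approved_by) >= self.required

gate = ChangeGate(required_approvals=2)
gate.propose(ProposedChange("chg-001", "route-update ...", "llm-A"))
gate.approve("chg-001", "alice")
print(gate.can_apply("chg-001"))   # → False, one approval is not enough
gate.approve("chg-001", "bob")
print(gate.can_apply("chg-001"))   # → True
```

The design point is simply that the apply path is gated on human approvals rather than on the model’s own confidence, so no single AI agent (or single reviewer) can push a change into production alone.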

There is a growing debate over whether radio access networks require specialized GPUs or can function on general-purpose CPUs. Why might a centralized data center approach be more practical than installing AI-capable hardware at every cell site, and what are the latency trade-offs for these different configurations?

The clash between specialized GPUs and general-purpose CPUs is one of the most significant architectural debates in the mobile industry today. While proponents of GPUs argue they are essential for AI-RAN computing, many technical officers believe that the same results are achievable on more efficient custom silicon or standard CPUs. A centralized approach, where AI inferencing is handled in a small number of core data centers rather than at every individual cell site, is often more practical due to the massive cost and power requirements of edge-site hardware. In a country the size of the UK, for instance, you may not need AI-capable hardware at every site to support low-latency applications; the core network software can often handle the load without significant performance degradation. The latency trade-off for centralization is usually measured in milliseconds, which is often negligible for current AI applications compared to the staggering expense of a site-by-site hardware overhaul. Ultimately, the industry must decide if it wants the raw power of a GPU-heavy edge or the streamlined efficiency of a centralized, software-defined core.
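The millisecond-scale trade-off mentioned above can be checked with back-of-envelope arithmetic. The propagation figure (roughly 200 km per millisecond for light in optical fibre, about two-thirds of c) is a standard approximation, and the 400 km and 5 km distances are purely hypothetical illustrations of a cell-site-to-core versus an on-site path; real paths add switching and queuing delay on top.

```python
# Back-of-envelope propagation delay: centralized vs edge AI inferencing.
# Assumption: signals in optical fibre cover roughly 200 km per millisecond.
FIBRE_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip fibre propagation delay, ignoring switching and queuing."""
    return 2 * distance_km / FIBRE_KM_PER_MS

centralized = round_trip_ms(400.0)   # hypothetical cell site to core data centre
edge = round_trip_ms(5.0)            # hypothetical cell site to on-site hardware
print(f"centralized: {centralized:.2f} ms, edge: {edge:.2f} ms")
# → centralized: 4.00 ms, edge: 0.05 ms
```

Under these assumptions, centralizing inferencing costs only a few milliseconds of extra round-trip delay, which supports the argument that, in a country the size of the UK, the penalty is often negligible next to the cost of AI-capable hardware at every site.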

The 5G rollout left many consumers underwhelmed, yet talk of a 2029 6G launch is already surfacing despite pending technical specifications. How can the industry avoid the branding mistakes of the past, and what specific technological breakthroughs would justify a new generation instead of just incremental updates?

To avoid repeating the branding failures of 5G, the industry must stop “jumping the gun” and labeling incremental updates as a new generation before they are even standardized. We have already seen the consumer market lose faith in the “G” label because 5G often felt like 4G with a different icon, leaving ordinary users wondering what they actually paid for. A true 6G generation should only be declared if there is a radical shift, perhaps toward “physical AI” integration or the use of significantly higher spectrum bands that offer transformative capabilities. Currently, 6G is expected to use the same OFDM waveform that underpinned 4G and 5G, leading many to believe it isn’t a new generation at all but merely a cloudified evolution. If we push for a 2029 rollout—which is the date Qualcomm is eyeing despite ETSI and 3GPP warnings—we risk alienating the public even further with a “not-quite 6G” product. The industry needs to focus on actual technological breakthroughs, like seamless robot-human interaction or revolutionized spectrum efficiency, rather than just rushing to change the branding every few years.

What is your forecast for 6G?

My forecast for 6G is that it will arrive more as a whisper than a bang, characterized by a move into higher, less useful spectrum bands that will pose significant challenges for indoor coverage. While some companies are pushing for a 2029 commercial launch, the reality is that technical specifications from bodies like the 3GPP are unlikely to be finalized before March of that same year, making early rollouts more about marketing than substance. We will see 6G struggle to find its identity in a consumer market that is still “underwhelmed” by the 5G experience, forcing the industry to pivot its focus toward industrial and robotic applications rather than individual smartphone users. The “G” label will likely continue to lose value as the distinction between generations blurs, with 6G essentially acting as a refined, cloud-native extension of the existing 5G standalone architecture. Success will not come from the branding itself, but from whether 6G can finally deliver the “physical AI” world that was promised but never quite realized during the previous decade.
