Vladislav Zaimov is a seasoned telecommunications specialist known for his deep expertise in enterprise infrastructure and the management of risk in vulnerable networks. With a career dedicated to navigating the shift from legacy systems to software-defined environments, he brings a pragmatic perspective on how operators can reclaim control over their own network intelligence. In this discussion, we explore the transition toward fully open, multi-vendor architectures and the strategic deployment of AI to manage the complexities of modern 5G Standalone ecosystems.
The conversation delves into the integration challenges of Open RAN, the necessity of decoupling software intelligence from hardware silos, and the shift from being a simple connectivity pipe to a managed service provider. We also examine the vital link between disaggregated systems and the future of AI-driven resource management, ensuring that the operator, rather than the vendor, remains the architect of the network.
While some operators are still in the testing phase, others have already reached 20% Open RAN deployment using multi-vendor architectures. What specific integration hurdles did you face when pairing third-party radios with various cloud infrastructures, and how did you quantify the resulting value for your end customers?
Achieving a 20% deployment in a fully open RAN environment is no small feat, as it requires moving past simple pilot programs into the “pain” of real-world integration. When you pair third-party radios with diverse cloud infrastructures, the primary hurdle is ensuring that these disparate components communicate seamlessly without the safety net of a single-vendor ecosystem. We have had to embrace a modular Service Management and Orchestration layer to bridge these gaps, meticulously building rApps to handle the heavy lifting of coordination. The value for the end customer is quantified by our ability to hand-pick “best of breed” solutions rather than being forced into a mediocre, one-size-fits-all package. This flexibility allows us to optimize performance at a granular level, ensuring that the final service delivered to the user is more resilient and tailored to their specific connectivity needs.
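The modular coordination role described above can be illustrated with a minimal sketch. This is not any specific vendor's SMO API; the `RApp` contract, the `cell_load` metric shape, and the `RadioCloudCoordinator` class are hypothetical names chosen for illustration, showing how an rApp consumes metrics published by the orchestration layer and emits vendor-agnostic policy actions:

```python
from abc import ABC, abstractmethod

class RApp(ABC):
    """Minimal rApp contract: consume SMO-published metrics,
    return a list of policy recommendations."""

    @abstractmethod
    def on_metrics(self, metrics: dict) -> list[dict]:
        ...

class RadioCloudCoordinator(RApp):
    """Illustrative rApp: flags cells whose third-party radio
    reports load above a configurable threshold."""

    def __init__(self, load_threshold: float = 0.85):
        self.load_threshold = load_threshold

    def on_metrics(self, metrics: dict) -> list[dict]:
        actions = []
        # 'cell_load' maps cell IDs to fractional load; shape is assumed.
        for cell_id, load in metrics.get("cell_load", {}).items():
            if load > self.load_threshold:
                actions.append({"action": "rebalance", "cell": cell_id})
        return actions
```

The point of the abstract base class is the decoupling itself: any radio or cloud vendor's telemetry can be normalized into the `metrics` dictionary, and the coordination logic stays portable across the multi-vendor stack.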
Network intelligence is often bundled with hardware, creating vendor-specific silos. When shifting to a platform that separates intelligence from the equipment layer, what steps are necessary to ensure the system remains vendor-agnostic, and how does this independence specifically enable new revenue streams like slice assurance?
To break down these silos, we must implement an intelligence platform that sits entirely above the hardware, effectively treating the equipment layer as a commodity. By utilizing platforms like EVO and EXA, we ensure that the decision-making logic is separated from the physical gear, which prevents vendors from “selling you your own decisions” alongside their hardware. This independence is maintained by opening the system to applications from third parties, hyperscalers, and even our own internal development teams. Once the intelligence is truly agnostic, we can unlock sophisticated revenue-generating use cases such as slice assurance in radio networks. This allows us to guarantee specific quality-of-service levels for different types of traffic, creating a premium tier of connectivity that was previously impossible to manage in a locked environment.
Moving beyond locked interfaces provides access to data from bare-metal sensors, CPU memory, and the SMO layer. How do you consolidate this multi-layered visibility into a single source of truth, and what is your step-by-step process for using predictive analysis to fix network issues before they impact the user?
The transition to Open RAN has finally granted us contractual and architectural access to interfaces that were historically locked behind proprietary walls. We consolidate this visibility by building custom internal tooling that pulls data from the very bottom of the stack—specifically bare-metal server sensors tracking CPU and memory usage—and feeds it up through the container layers to the SMO. Our process involves using AI-driven applications to perform constant anomaly detection and root cause analysis across these layers simultaneously. By identifying a spike in memory usage or a slight degradation in server performance early on, we can perform preventive maintenance. This “predictive analysis” allows us to cure a potential network problem in the software layer before the user ever experiences a dropped call or a laggy connection.
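A simple way to sketch the anomaly-detection step on bare-metal sensor data is a rolling z-score over recent CPU or memory samples. This is an illustrative stand-in for the internal tooling described above, not its actual implementation; the window size and threshold are assumed values:

```python
import statistics
from collections import deque

class MetricAnomalyDetector:
    """Rolling z-score detector for a single host metric
    (e.g. bare-metal CPU or memory utilization %)."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent history
        self.threshold = threshold           # z-score cutoff (assumed)

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it deviates sharply
        from recent history, signaling preventive maintenance."""
        anomalous = False
        if len(self.samples) >= 10:  # need enough history to judge
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous
```

In practice such a detector would run per server and per metric, with the flagged events feeding root-cause analysis higher in the stack before the user ever notices degradation.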
There is a common perception that the industry must wait for 6G to realize the full potential of intelligent networks, yet 5G Standalone (SA) already exposes massive amounts of data. How can operators leverage current per-slice quality metrics to transition into managed service providers for B2B and B2G sectors today?
Waiting for 6G is a missed opportunity because 5G Standalone already provides an enormous amount of data that can be actioned right now. The disaggregated nature of 5G SA exposes per-slice quality metrics, which serve as the foundation for sophisticated B2B and B2G service level agreements. By leveraging this data, we can transition from being a simple “connectivity pipe” to a high-value managed service provider that offers tailored networks for government or industrial applications. We are currently building use cases that utilize this real-time information to provide dedicated, secure, and high-performance slices for critical infrastructure. It is about taking the tools available today and applying intelligent automation to prove that we can meet the rigorous demands of enterprise and government clients without needing a generational hardware shift.
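The per-slice SLA reporting described above can be sketched as a compliance calculation over quality samples. The field names and thresholds here are hypothetical, chosen only to show the shape of a B2B/B2G SLA check built on 5G SA per-slice metrics:

```python
from dataclasses import dataclass

@dataclass
class SliceSLA:
    """Contracted targets for one network slice (illustrative fields)."""
    slice_id: str
    max_latency_ms: float
    min_throughput_mbps: float

@dataclass
class SliceSample:
    """One quality measurement exported by the 5G SA core for a slice."""
    slice_id: str
    latency_ms: float
    throughput_mbps: float

def sla_compliance(sla: SliceSLA, samples: list[SliceSample]) -> float:
    """Fraction of samples for this slice that met both SLA targets."""
    relevant = [s for s in samples if s.slice_id == sla.slice_id]
    if not relevant:
        return 1.0  # no traffic observed, nothing violated
    met = sum(
        1 for s in relevant
        if s.latency_ms <= sla.max_latency_ms
        and s.throughput_mbps >= sla.min_throughput_mbps
    )
    return met / len(relevant)
```

A compliance figure like this, computed continuously per slice, is what turns raw telemetry into something an operator can contractually guarantee to an enterprise or government customer.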
If Open RAN is a prerequisite for AI RAN, it implies that the operator must be the sole architect of network intelligence. How does the transition from simple disaggregation to AI-driven resource management and intelligent scheduling change your daily operations, and what specific internal capabilities must an operator develop to lead this shift?
The shift to AI RAN fundamentally changes our daily operations by moving us from reactive troubleshooting to proactive system architecture. If the network isn’t open, you are essentially relying on someone else’s intelligence, but in an open environment, the operator becomes the primary architect of the network’s brain. This requires us to develop deep internal capabilities in data science and software engineering to manage intelligent scheduling and resource allocation in real-time. Our teams are no longer just maintaining hardware; they are managing AI agents that optimize the network on the fly. This evolution demands a cultural shift within the organization, where we prioritize the development of proprietary rApps and internal tooling to ensure we maintain absolute control over how our network behaves.
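A toy version of the resource-allocation decisions such AI agents make is a priority-weighted split of physical resource blocks (PRBs) across slices. Real intelligent scheduling is far more dynamic than this greedy sketch; the slice names and priorities are assumptions for illustration:

```python
def allocate_prbs(
    total_prbs: int,
    demands: dict[str, int],     # slice -> requested PRBs
    priorities: dict[str, int],  # slice -> priority (higher wins)
) -> dict[str, int]:
    """Greedy priority-ordered allocation of PRBs across slices.
    Higher-priority slices are served first, capped at their demand."""
    allocation = {slice_id: 0 for slice_id in demands}
    remaining = total_prbs
    for slice_id in sorted(demands, key=lambda s: priorities.get(s, 0),
                           reverse=True):
        grant = min(demands[slice_id], remaining)
        allocation[slice_id] = grant
        remaining -= grant
        if remaining == 0:
            break
    return allocation
```

An AI-driven scheduler would replace the static priority table with learned, per-millisecond decisions, but the operator-owned control point is the same: the allocation policy lives in software the operator writes, not in a vendor's closed firmware.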
What is your forecast for the evolution of AI-driven network automation over the next three years?
Over the next three years, I expect a total departure from the traditional model where hardware vendors dictate network logic. We will see AI-driven automation move from basic anomaly detection to fully autonomous resource management and intelligent scheduling that adapts to user demand in milliseconds. The industry will likely see a massive surge in the deployment of specialized AI agents that reside on vendor-agnostic platforms, allowing for a level of network customization we haven’t seen before. Operators who have already embraced the “pain” of integration today will be the ones dominating the B2B2C market, as they will have the mature software infrastructure required to host these advanced AI capabilities. Ultimately, the next three years will prove that the operator’s ability to architect their own intelligence is the single most important factor in their commercial success.
