Red Hat Drives Automation and Sovereign AI in Telecom

Vladislav Zaimov is a seasoned professional in the telecommunications sector, known for his deep understanding of enterprise architecture and the intricate security protocols required to protect vulnerable networks. As the industry pivots from rigid hardware to fluid, software-defined environments, Zaimov’s insights provide a roadmap for navigating the complexities of hybrid clouds and autonomous network management. This conversation explores the shift toward software-centric models, the necessity of integrating back-office systems into cloud strategies, and the critical emergence of sovereign AI infrastructure as a pillar of national security and operational efficiency.

Transitioning to a software-centric model requires running modern microservices alongside legacy monolithic applications. How do you maintain consistent observability across this hybrid environment, and what are the primary challenges when extending these capabilities from the network core out to the edge?

Maintaining observability in a heterogeneous, multi-generational network is essentially about creating a single hybrid platform that treats every network element as an application. When you are managing a massive scale where you simultaneously run decomposed microservices and legacy monolithic applications on virtual machines, you need a common capability layer for control and security. The primary challenge at the edge is ensuring that the transition of workloads remains seamless despite the different heritages of the applications involved. By deploying a unified private cloud platform that extends from the core to the very edge of the network, operators can gain the visibility needed to handle unplanned remediation without losing track of the broader system health.
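The interview doesn’t name specific tooling, but the idea of treating every network element as an application on one platform can be sketched as a common telemetry schema that both generations feed into. The names below (`Metric`, `normalize_vm_sample`, the example element names) are illustrative assumptions, not part of any Red Hat product API:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """Common schema for telemetry from any network element."""
    element: str      # network function name
    generation: str   # "vm-monolith" or "container-microservice"
    site: str         # "core" or an edge location
    name: str
    value: float

def normalize_vm_sample(host: str, site: str, counters: dict) -> list[Metric]:
    """Map a legacy VM agent's flat counter dump onto the common schema."""
    return [Metric(host, "vm-monolith", site, k, float(v))
            for k, v in counters.items()]

def normalize_container_sample(pod: str, site: str, counters: dict) -> list[Metric]:
    """Map container metrics onto the same schema."""
    return [Metric(pod, "container-microservice", site, k, float(v))
            for k, v in counters.items()]

# Both software generations land in one queryable stream,
# so core and edge health can be tracked together.
stream = (normalize_vm_sample("hlr-01", "core", {"cpu_pct": 62})
          + normalize_container_sample("amf-7f9", "edge-01", {"cpu_pct": 41}))
```

The point of the sketch is only the shared schema: once legacy and cloud-native elements emit the same labels, a single query layer can cover the whole hybrid estate.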

Implementing closed-loop automation enables millions of network changes without human intervention. As the industry moves toward agentic AI, how does your approach to scaling change, and what specific safeguards are required to manage autonomous agents in a high-stakes network environment?

The shift from structured automation to agentic AI represents a potential ten-fold increase in our operational capacity, effectively turning every team member into a developer and dramatically amplifying their impact. In 2025 alone, we have seen systems handle millions of changes—both for planned lifecycle management and unplanned remediation—in a closed-loop mode with no human in the loop. To scale safely, we must implement rigorous guardrails that ensure these autonomous agents operate within predefined operational envelopes, particularly when they move beyond structured tasks. The transition to agentic AI allows us to handle complexities far beyond what traditional automation could touch, but it requires a robust software foundation to prevent autonomous decisions from cascading into network-wide instabilities.
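The “predefined operational envelope” described above can be made concrete as a policy gate that every agent-proposed change must pass before it touches the network. This is a minimal sketch under assumed limits (allowed action types, blast radius, rate); the names and thresholds are hypothetical, not taken from any actual deployment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    """Predefined operational envelope for an autonomous agent."""
    allowed_actions: frozenset[str]   # what the agent may do at all
    max_elements_per_change: int      # blast-radius cap per action
    max_changes_per_hour: int         # rate cap to stop runaway loops

def within_envelope(env: Envelope, action: str,
                    targets: list[str], changes_this_hour: int) -> bool:
    """Gate every agent-proposed change; reject anything outside the envelope."""
    return (action in env.allowed_actions
            and len(targets) <= env.max_elements_per_change
            and changes_this_hour < env.max_changes_per_hour)

env = Envelope(frozenset({"restart", "scale"}),
               max_elements_per_change=5,
               max_changes_per_hour=100)
```

A rejected change falls back to human review, so the closed loop degrades to supervised automation rather than failing open, which is one way to keep autonomous decisions from cascading into network-wide instabilities.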

Modernizing OSS and BSS is often as critical as the network transformation itself. Why is it vital to treat these IT-centric systems as part of the broader cloud strategy, and what step-by-step process ensures they can support the speed of modern, decomposed network applications?

It is a mistake to view OSS and BSS as isolated back-office functions because they function as the IT engine that supports the entire network infrastructure. To ensure these systems keep pace with a fast-moving, software-centric network, they must be modernized to run on the same cloud-native principles as the network functions themselves. This involves a deliberate rearchitecting of private clouds using platforms like OpenShift to allow for the rapid deployment of decomposed applications. By integrating these systems into the broader cloud strategy, operators can ensure that billing and operational support don’t become bottlenecks for the high-speed services being deployed at the edge.

National operators are increasingly building sovereign AI infrastructure to host local language models and secure cloud services. What are the key technical components of a sovereign AI factory, and how does this specialized infrastructure differ from traditional private cloud deployments regarding security?

A sovereign AI factory is built on a specialized software foundation, often combining advanced AI platforms with high-performance hardware, like the collaborations we see with Nvidia. The key components include local language models and secure cloud services that are specifically tailored to the national interest, such as Norway’s first sovereign AI cloud. Unlike traditional private clouds, this infrastructure is designed to accelerate highly specific industrial automation and voice translation while keeping sensitive data within national borders. It provides a more secure environment for private 5G network optimization because it minimizes reliance on external, global public clouds that may not comply with local data sovereignty laws.
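The core security difference named here, keeping sensitive data within national borders, can be sketched as a scheduling policy: sovereign-classified workloads may only land on in-country regions, while non-sensitive ones may still burst to public cloud. The region names and classification labels below are hypothetical, chosen only to illustrate the rule:

```python
def residency_allowed(workload_region: str, data_classification: str,
                      sovereign_regions: set[str]) -> bool:
    """Sovereign data may only be scheduled onto in-country regions."""
    if data_classification == "sovereign":
        return workload_region in sovereign_regions
    return True  # non-sensitive workloads may still use external clouds

# Hypothetical in-country sites for a national operator.
SOVEREIGN_REGIONS = {"no-region-1", "no-region-2"}
```

In a traditional private cloud this kind of rule is optional hardening; in a sovereign AI factory it is the defining constraint, which is why reliance on global public clouds is minimized by design.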

Moving AI agents from a pilot phase to production involves significant architectural shifts. What performance metrics are most important during this transition, and how do you ensure that agentic AI can effectively manage both modern and legacy network elements at a massive scale?

When moving from pilot to production, the most critical metrics revolve around the speed of response and the accuracy of remediation in a live, high-traffic environment. We look closely at how the agentic AI handles the interplay between legacy monolithic apps and modern microservices, ensuring that the automation doesn’t create friction between these two different software generations. Rearchitecting the private cloud is often a prerequisite to ensure the infrastructure can support the heavy compute demands of AI agents at scale. Success is defined by the ability of the AI to take structured network data and translate it into millions of autonomous, successful actions without requiring manual oversight or causing downtime.
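The two metrics singled out above, speed of response and accuracy of remediation, are straightforward to compute from action logs. A minimal sketch, assuming per-action latency samples and a boolean success flag per autonomous action (the nearest-rank percentile method is one common choice, not something the interview specifies):

```python
import math

def p95_latency(latencies_ms: list[float]) -> float:
    """95th-percentile response time (nearest-rank method)."""
    ranked = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ranked)) - 1  # 1-based rank -> 0-based index
    return ranked[rank]

def remediation_accuracy(outcomes: list[bool]) -> float:
    """Share of autonomous actions that resolved the fault without rollback."""
    return sum(outcomes) / len(outcomes)

latencies = [120.0, 95.0, 110.0, 480.0, 105.0,
             98.0, 102.0, 99.0, 101.0, 100.0]
outcomes = [True] * 97 + [False] * 3
```

Tracking a tail percentile rather than the mean matters in production: a single slow remediation (the 480 ms outlier above) is exactly what the pilot-to-production transition must surface rather than average away.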

What is your forecast for the evolution of sovereign AI and automated network operations over the next five years?

Over the next five years, I expect sovereign AI to become the standard for national operators who want to maintain control over their data while leveraging the power of local language models and industrial automation. We will see a massive shift where network operations move almost entirely to an agentic AI model, potentially increasing the volume of automated network changes by 10x compared to current benchmarks. The distinction between “IT” and “Network” will continue to blur as OSS and BSS become fully cloud-native, enabling a truly responsive, software-defined ecosystem. Ultimately, the operators who successfully deploy these AI factories will be the ones who can offer the most secure, optimized, and low-latency services in their respective regions.
