The landscape of global telecommunications is shifting from a hardware-centric model to one defined by open-source software and geopolitical strategy. As the industry looks toward 6G, the traditional “gatekeeper” status of major vendors is being challenged by initiatives like OCUDU (Open Centralized Unit Distributed Unit), which aims to commoditize the core RAN stack. This move, driven significantly by defense interests and a desire to break proprietary cycles, promises to open the ecosystem to a new generation of developers and niche innovators.
In this interview, we explore how the transition from closed systems to hardware-independent platforms is reshaping the market. We discuss the technical hurdles of cross-architecture compatibility, the strategic reasons why industry giants are embracing open-source models, and the growing influence of national security requirements on civilian infrastructure.
Previous efforts to standardize radio access networks often struggled because companies replicated existing proprietary stacks rather than innovating. How does providing a full software-based reference platform for central and distributed units change the development landscape, and what specific steps must a developer take to integrate novel 6G applications?
The shift toward a full software-based reference platform like OCUDU fundamentally alters the landscape by removing the “entry fee” of building a basic RAN stack from scratch. Previously, developers spent millions of dollars and years of engineering time just to replicate what established vendors had already perfected, leaving little room for actual differentiation. By providing an open-source foundation for the central and distributed units, we allow a developer to bypass that foundational drudgery and focus entirely on the “edge” of the 6G stack. To integrate a novel application, a developer familiar with standard tools like Linux and Kubernetes can now treat the telecommunications infrastructure like any other cloud environment. They can slot in specialized algorithms for AI-driven beamforming or network slicing without needing a secret handshake from a proprietary vendor’s ecosystem.
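The "slot in specialized algorithms" idea can be pictured as an ordinary plugin registry. This is a minimal, hypothetical sketch — `AppRegistry` and its methods are illustrative stand-ins, not OCUDU's actual interfaces:

```python
# Illustrative sketch: plugging a custom 6G application into an open
# CU/DU stack. The AppRegistry API is hypothetical; OCUDU's real
# integration points may differ.

class AppRegistry:
    """Minimal plugin registry standing in for an open RAN app interface."""

    def __init__(self):
        self._apps = {}

    def register(self, name, hook):
        # A developer drops in their algorithm like any cloud workload.
        self._apps[name] = hook

    def dispatch(self, name, payload):
        return self._apps[name](payload)


def ai_beamforming(channel_estimates):
    # Placeholder for a specialized AI-driven beamforming algorithm:
    # simply pick the candidate beam with the highest reported SNR.
    return max(channel_estimates, key=lambda beam: beam["snr"])


registry = AppRegistry()
registry.register("beamforming", ai_beamforming)

best = registry.dispatch("beamforming", [
    {"beam": 0, "snr": 12.5},
    {"beam": 1, "snr": 17.2},
])
print(best["beam"])  # selects the highest-SNR beam
```

The point is the shape of the workflow, not the algorithm: the developer writes an ordinary function and registers it, with no vendor-specific toolchain in the loop.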
Developing virtualized software that remains compatible across Arm, x86, and GPU architectures has historically been a significant technical hurdle. What are the primary trade-offs when ensuring silicon-agnostic performance, and how can engineers manage resource-heavy functions while scaling from small embedded systems to large-scale data centers?
The primary trade-off in silicon-agnosticism is the balance between broad compatibility and the hyper-optimization that comes with hardware-specific coding, such as Nvidia’s CUDA. To manage this, we utilize a tiered approach where less resource-hungry functions, like Layer 2 and Layer 3 signaling, run on general-purpose processors like Intel x86 or Arm-based embedded systems. For the most demanding Layer 1 functions, we can offload tasks to specialized accelerators like GPUs while maintaining the same software base across the rest of the network. This flexibility allows an operator to scale a network from a tiny, low-power Arm sensor in the field to a massive data center full of high-performance GPUs. The goal is to ensure that the software logic remains consistent, regardless of whether the underlying silicon is provided by AMD, Ampere, or Nvidia.
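The tiered placement described above can be sketched as a simple dispatch rule: Layer 2/3 signaling stays on general-purpose CPUs, while heavy Layer 1 work is offloaded to an accelerator when one is present. The function and target names are illustrative assumptions, not part of any real scheduler:

```python
# Hedged sketch of tiered workload placement across heterogeneous
# silicon. Targets and layer numbering follow the RAN convention used
# in the text; the placement policy itself is illustrative.

def place_function(layer, accelerator_available):
    """Return the compute target for a RAN function at a given layer."""
    if layer == 1 and accelerator_available:
        return "gpu"  # e.g. FEC decoding, channel estimation
    return "cpu"      # x86 or Arm; same software base either way


# The same logic serves an embedded Arm node (no accelerator) and a
# GPU-packed data center -- only the placement decision changes.
placements = {layer: place_function(layer, accelerator_available=True)
              for layer in (1, 2, 3)}
print(placements)  # {1: 'gpu', 2: 'cpu', 3: 'cpu'}
```

Because the decision is made in software, the same codebase scales from a low-power Arm sensor (everything on CPU) to a data center where Layer 1 lands on accelerators.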
Major telecommunications vendors currently holding gatekeeper status are joining ecosystems that favor open-source models. Why would these established giants choose to move away from their own proprietary code, and how does this collaborative approach specifically address the market challenges posed by state-supported competitors in the 5G era?
It may seem counterintuitive for giants to dismantle their own moats, but there is a realization that the “unfair marketplace” created by state-supported competitors cannot be defeated by simply building more masts or spending more money on the same closed architectures. By joining the OCUDU Ecosystem Foundation, these vendors ensure they remain relevant in the lucrative US market, which is a massive driver of their global profits. Furthermore, they recognize that the dominance of competitors like Huawei was built on owning the entire vertical space; by opening the stack, they can invite a broader ecosystem of innovators to compete on their platforms. This collaborative approach turns the network into a shared innovation hub rather than a stagnant, proprietary box, making it much harder for a single entity to monopolize the entire 6G lifecycle.
Alternative technologies like OTFS offer potential advantages over standard OFDM for military sensing, yet they often face significant integration barriers. How does a hardware-independent software base facilitate the rapid switching between different waveforms, and what are the practical implications for deploying these capabilities in 6G networks?
In traditional 5G networks, switching from a waveform like OFDM to an alternative like OTFS (Orthogonal Time Frequency Space) is nearly impossible without the explicit permission and technical cooperation of the primary hardware vendor. Because OCUDU is built on a hardware-independent software base, we can integrate these alternative waveforms directly into the reference platform as modular components. This allows for a “plug-and-play” capability where a network can rapidly switch between waveforms depending on the specific mission requirements, such as high-mobility military sensing or integrated communication. For 6G, this means we are no longer locked into a single air interface for ten years; we can deploy specialized sensing capabilities for national security while maintaining standard civilian communications on the same underlying compute hardware.
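A "plug-and-play" waveform layer amounts to treating each air interface as a swappable software component selected at runtime. This sketch assumes waveforms are plain callables behind a common interface; the actual OCUDU mechanism is not specified here, and the modulators are stubs:

```python
# Illustrative modular waveform switch. Real modulators would operate
# on IQ samples; these stubs just label the processing path taken.

WAVEFORMS = {
    "ofdm": lambda symbols: f"OFDM({len(symbols)} symbols)",
    "otfs": lambda symbols: f"OTFS({len(symbols)} symbols)",
}


def modulate(waveform, symbols):
    """Route symbols through the waveform chosen by the mission profile."""
    if waveform not in WAVEFORMS:
        raise ValueError(f"unknown waveform: {waveform}")
    return WAVEFORMS[waveform](symbols)


# The mission profile, not the hardware vendor, decides the air interface.
print(modulate("otfs", [1, 0, 1]))  # high-mobility sensing path
print(modulate("ofdm", [1, 0, 1]))  # standard civilian traffic path
```

Adding a new waveform means adding one entry to the table, which is the contrast with a hardware-locked stack where the air interface is fixed for the life of the deployment.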
Transitioning from legacy proprietary systems to a new reference platform typically involves a long, gradual migration rather than an overnight replacement. What metrics will indicate that a company is successfully phasing out its homegrown code, and what are the primary risks of maintaining hybrid environments during this shift?
Success in this migration isn’t measured by a sudden “flip of the switch” but by the percentage of the RAN stack that can be updated or replaced without vendor intervention. We look for metrics such as the reduction in integration time for third-party apps and the adoption rate of open-source components within the central and distributed units. The primary risk of a hybrid environment is “complexity creep,” where engineers must maintain both the old proprietary interfaces and the new open-source platform, potentially doubling the maintenance overhead. However, the gradual move allows vendors to preserve their existing investments while slowly “hollowing out” the proprietary core in favor of more compatible, flexible software that can evolve with the market.
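The headline metric above, the share of the stack replaceable without vendor intervention, is straightforward to compute from a component inventory. The inventory here is entirely hypothetical, purely to show the bookkeeping:

```python
# Sketch of the "replaceable without vendor intervention" metric.
# The component list is invented for illustration; a real audit would
# enumerate the actual CU/DU modules and their licensing.

components = [
    {"name": "scheduler",     "open_source": True},
    {"name": "fec_decoder",   "open_source": False},  # still proprietary
    {"name": "rrc_signaling", "open_source": True},
    {"name": "beam_manager",  "open_source": True},
]

open_fraction = sum(c["open_source"] for c in components) / len(components)
print(f"{open_fraction:.0%} of the stack replaceable without vendor lock-in")
```

Tracked release over release, a rising fraction shows the proprietary core being "hollowed out" gradually rather than flipped overnight.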
As spending on civilian telecommunications infrastructure flattens, defense-related funding is becoming a primary driver for next-generation network innovation. How does the involvement of national security agencies change the typical development lifecycle of wireless technology, and what new opportunities does this create for independent software startups?
The involvement of national security agencies, such as the Department of War and the Department of Homeland Security, shifts the focus from purely commercial ROI to “resilience and sovereignty.” This changes the development lifecycle by prioritizing security and flexibility over the lowest possible cost-per-bit, which actually opens the door for independent startups like DeepSig and Software Radio Systems. These smaller players can now receive direct funding to build critical pieces of the 6G stack that were previously the exclusive domain of multi-billion dollar corporations. As civilian spending plateaus, the defense sector provides a stable, high-stakes environment where startups can prove their technology’s worth in demanding scenarios before scaling it to the broader commercial market.
What is your forecast for 6G development?
I believe 6G will be the first generation of wireless technology that is truly “software-defined” from the ground up, moving away from the hardware-locked cycles of the past. We will see a massive shift where the value of the network moves from the radio mast to the software applications running on the edge, driven largely by the convergence of AI and telecommunications. My forecast is that by the time 6G is fully deployed, the distinction between a “telecom vendor” and a “cloud provider” will have almost entirely vanished. This will create a much more volatile but innovative market, where the ability to quickly deploy a new algorithm or waveform becomes more important than having the most expensive hardware on the tower.
