Vladislav Zaimov has spent years hardening enterprise-grade telecommunications and de-risking vulnerable networks, so he views 6G not just as a speed race but as a systems integration and resilience challenge. In this conversation with Lisa Aidle, he connects a milestone-driven roadmap—pre-commercial demonstrations in 2028 and initial rollout from 2029—to the gritty realities of spectrum coordination, RF alignment, and brownfield deployment constraints. Key themes include: turning study items into work items and validating the full stack with over-the-air trials; harmonizing spectrum choices across the upper 6 GHz band to 8.4 GHz with regional realities; proving uplink coverage and efficiency; and embracing AI-native RAN with heterogeneous compute across CPUs, NPUs, and accelerators. Threaded throughout is a pragmatic insistence on measurable readiness, from 400 MHz channels with a 300/100 SBFD split to partner lab calibrations with Ericsson, Nokia, and Samsung.
A milestone-driven 6G roadmap targets pre-commercial demos in 2028 and initial rollout from 2029. What gates must be cleared each year to stay on track, and which metrics—throughput, latency, energy per bit—will you use to prove readiness at each stage?
To earn 2028 demos and 2029 rollout, each year needs a crisp gate: align study items, convert them into work items, then graduate to full-stack OTA validation with field-capable prototypes. The proof points pair standards readiness with measurable performance—steady throughput scaling over a 400 MHz channel, latency budgets validated end-to-end, and energy per bit trending down as features harden. We’ll treat each build as a releasable system: spectrum decisions locked from upper 6 GHz to 8.4 GHz, numerology frozen for tests, and lab-to-OTA deltas tracked. If a feature can’t show stable throughput gains and an energy-per-bit improvement under the same test harness in consecutive quarters, it doesn’t move to field trials.
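To make that two-quarter gate mechanical rather than aspirational, here is a minimal sketch of how it could be encoded; the metric names, units, and window length are illustrative assumptions, not the program's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class QuarterlyResult:
    """One quarter's results from the shared test harness (illustrative fields)."""
    throughput_gbps: float    # sustained throughput over the 400 MHz channel
    energy_per_bit_nj: float  # measured energy per bit, lower is better

def passes_gate(history: list[QuarterlyResult], window: int = 2) -> bool:
    """A feature advances to field trials only if throughput is non-decreasing
    and energy per bit is strictly improving across `window` consecutive quarters."""
    if len(history) < window + 1:
        return False  # not enough consecutive data points under the same harness
    recent = history[-(window + 1):]
    throughput_ok = all(b.throughput_gbps >= a.throughput_gbps
                        for a, b in zip(recent, recent[1:]))
    energy_ok = all(b.energy_per_bit_nj < a.energy_per_bit_nj
                    for a, b in zip(recent, recent[1:]))
    return throughput_ok and energy_ok

# Example: three quarters of harness results for one candidate feature.
results = [QuarterlyResult(4.8, 92.0), QuarterlyResult(5.1, 88.5), QuarterlyResult(5.3, 85.0)]
print(passes_gate(results))  # True: both trends hold over two consecutive quarters
```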
2026 is framed as a key inflection point. What tangible deliverables should we expect by then—prototype gNodeBs, UE silicon, end-to-end trials—and how will you sequence lab, OTA, and limited field pilots? Share any past rollout lessons that shape this plan.
In 2026, I expect both sides of the link in hand: internal prototype gNodeBs and UE form factors running a 400 MHz SBFD stack, plus end-to-end demos that are more than slideware. We’ll start with lab bring-up, then controlled OTA in partner facilities, then limited field pilots that respect brownfield constraints at live sites. The 5G lesson that still stings is skipping rigorous OTA validation before field trials; this time we burn down integration risk in labs with Ericsson, Nokia, and Samsung first, then scale. Each pilot ends with a red-team review: what failed, what degraded under interference, and what we fix before widening the footprint.
A broad industry coalition is aligning on spectrum, numerology, and bandwidths. How do you reconcile differing regional spectrum realities from upper 6 GHz to 8.4 GHz, and what fallback paths exist if certain bands lag? Provide concrete coordination steps and decision thresholds.
We start with a common RF front-end design envelope that spans upper 6 GHz to 8.4 GHz, then regionalize filters and PA tuning as policy hardens. Coordination steps are simple but strict: quarterly partner-lab sessions to validate band-specific numerology, and a go/no-go threshold that requires two independent vendors to pass OTA in any target band before we scale. If a band lags, we keep the baseband and scheduler common while spinning RF options—think “A/B” front ends—so deployment doesn’t stall. The principle is graceful fallback: never block full-stack progress on one spectrum decision.
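As a sketch of that go/no-go threshold, the snippet below refuses to scale a band until two independent vendors have passed OTA in it; the vendor labels, band names, and data structure are hypothetical.

```python
# Hypothetical OTA pass records per band; in practice these would come from
# the quarterly partner-lab sessions.
ota_passes = {
    "upper-6GHz": {"VendorA", "VendorB"},
    "7.1GHz": {"VendorA"},
    "8.4GHz": set(),
}

def band_go(band: str, min_vendors: int = 2) -> bool:
    """Go only when at least `min_vendors` independent vendors pass OTA in the band."""
    return len(ota_passes.get(band, set())) >= min_vendors

for band in ota_passes:
    print(band, "GO" if band_go(band) else "NO-GO: keep baseband common, spin RF options")
```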
Early 6G work spans both gNodeB and UE development. How are hardware, RF, and baseband roadmaps synchronized to avoid integration surprises, and what cross-team review rituals keep alignment tight? Describe a real example where early co-design prevented a costly rework.
We run a rolling three-sprint cadence where baseband features don’t merge unless RF has a matching calibration script and hardware exposes the necessary hooks. Every month we host multi-vendor OTA days to reconcile lab results, then freeze interfaces before the next silicon spin. In one cycle, early co-design caught a mismatch between higher-order constellation mapping and PA linearization that would have forced a board respin; aligning the calibration tables with the modem’s probabilistic shaping saved a quarter. The rule is: no feature lands in the stack without a paired RF characterization and a reproducible test in the shared lab.
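A minimal sketch of that merge gate follows, assuming a hypothetical repository convention where each baseband feature must ship with a paired RF calibration script and a reproducible shared-lab test before it can land.

```python
from pathlib import Path

def feature_may_merge(feature: str, repo_root: Path) -> bool:
    """Merge gate sketch: a baseband feature lands only when a paired RF
    calibration script and a reproducible shared-lab test exist for it.
    The directory layout here is a hypothetical convention."""
    calib = repo_root / "rf_calibration" / f"{feature}.cal.py"
    lab_test = repo_root / "lab_tests" / f"test_{feature}.py"
    return calib.exists() and lab_test.exists()

if __name__ == "__main__":
    root = Path(".")
    for feat in ["sbfd_scheduler", "beam_mgmt_loop"]:
        status = "OK to merge" if feature_may_merge(feat, root) else "blocked: missing RF pairing"
        print(feat, "->", status)
```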
Partnerships with Ericsson, Nokia, and Samsung aim for RF alignment. What shared test plans, calibration methods, and interoperability checkpoints are in place, and how will success be quantified across vendors? Offer anecdotes on resolving a tricky multi-vendor mismatch.
We use a shared test plan that walks from conducted to radiated, then to multi-cell OTA, with common vector signal traces and golden BLER curves. Calibration is harmonized: same reference oscillators, same PA back-off tables, and cross-checked beam codebooks captured in partner labs. Success is simple: identical link behavior within tolerance across vendors over a 400 MHz channel, and matching throughput with a 300 MHz downlink and 100 MHz uplink SBFD profile. We once traced a multi-vendor mismatch to slightly different phase noise masks; aligning the LO cleanup filter spec fixed beam coherence overnight.
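To illustrate the cross-vendor checkpoint, here is a small sketch that checks each vendor’s measured BLER curve against a shared golden curve; the SNR grid, tolerance, and readings are all invented for the example.

```python
# A minimal sketch of the cross-vendor checkpoint: each vendor's measured BLER
# curve must sit within a tolerance band around the shared golden curve.
golden_bler = {0: 0.50, 2: 0.20, 4: 0.05, 6: 0.01, 8: 0.001}  # SNR (dB) -> BLER

def within_tolerance(measured: dict, golden: dict, rel_tol: float = 0.15) -> bool:
    """Pass if every measured BLER point is within +/-15% of the golden value."""
    return all(
        abs(measured[snr] - bler) <= rel_tol * bler
        for snr, bler in golden.items()
    )

vendor_curves = {
    "VendorA": {0: 0.52, 2: 0.21, 4: 0.052, 6: 0.0105, 8: 0.00104},
    "VendorB": {0: 0.48, 2: 0.19, 4: 0.047, 6: 0.0095, 8: 0.00097},
}
for vendor, curve in vendor_curves.items():
    print(vendor, "matches golden curve:", within_tolerance(curve, golden_bler))
```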
Moving from study items to work items to full-stack validation is core. Which features will graduate first, and what evidence—link budgets, BLER curves, mobility KPIs—will tip them over the line? Outline the exact go/no-go criteria you’ll use.
First to graduate are numerology and channelization for upper 6 GHz–8.4 GHz, SBFD operation, and the initial beam management loops needed for Giga-MIMO. We’ll require BLER curves closed out in both lab and OTA that match link budget predictions, plus stable handover KPIs under stress. Go means: consistent BLER headroom across the 400 MHz band, reproducible throughput under mobility, and no scheduler regressions in uplink coverage tests. No-go is any feature that breaks under partner lab replication or shows widening lab-to-OTA gaps over two sprints.
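The widening lab-to-OTA gap test can be expressed in a few lines; the sketch below flags a feature when the gap grows for two consecutive sprints, with illustrative throughput numbers.

```python
# Sketch of the lab-to-OTA delta tracker behind the no-go rule: a feature is
# flagged when the gap between lab and OTA throughput widens for two
# consecutive sprints. Numbers are illustrative.
def gap_widening(lab: list[float], ota: list[float], sprints: int = 2) -> bool:
    """True if the lab-minus-OTA throughput gap grew over the last `sprints` sprints."""
    gaps = [l - o for l, o in zip(lab, ota)]
    recent = gaps[-(sprints + 1):]
    return len(recent) == sprints + 1 and all(b > a for a, b in zip(recent, recent[1:]))

lab_tput = [5.0, 5.1, 5.2, 5.3]   # Gbps per sprint in the lab
ota_tput = [4.8, 4.8, 4.7, 4.6]   # same feature measured OTA
print("NO-GO" if gap_widening(lab_tput, ota_tput) else "GO")
```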
A 400 MHz channel with SBFD currently splits 300 MHz downlink and 100 MHz uplink. What trade-offs drove that split, and under what traffic models would you rebalance it dynamically? Share simulation or lab metrics that guided this choice.
We chose 300/100 because downlink demand dominates early demos, while the 100 MHz uplink slice preserves coverage headroom, especially as we validate higher-order constellations. In the lab, that split let us stabilize downlink scheduling while safeguarding uplink BLER and HARQ behavior under edge-of-cell conditions. We’d rebalance when uplink-heavy workloads—like AI model uploads or immersive creator traffic—saturate the 100 MHz UL slice, and only if scheduler telemetry shows persistent queueing with no beamforming remedy. The SBFD framework lets us shift, but we do it with guardrails tied to sustained uplink congestion signals.
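As a sketch of those guardrails, the snippet below only recommends shifting spectrum from downlink to uplink after sustained congestion across a full telemetry window, and only once beamforming remedies are exhausted; the window length and threshold are assumptions.

```python
from collections import deque

WINDOW = 12               # e.g., 12 consecutive 5-minute scheduler telemetry samples
UL_QUEUE_THRESHOLD = 0.8  # fraction of the 100 MHz UL slice persistently queued

ul_queue_history: deque[float] = deque(maxlen=WINDOW)

def should_rebalance(sample: float, beamforming_exhausted: bool) -> bool:
    """Recommend moving bandwidth DL->UL only under sustained congestion."""
    ul_queue_history.append(sample)
    sustained = (len(ul_queue_history) == WINDOW and
                 min(ul_queue_history) >= UL_QUEUE_THRESHOLD)
    return sustained and beamforming_exhausted

# Example: a short burst alone never triggers a shift; persistence does.
for s in [0.9] * 11:
    should_rebalance(s, beamforming_exhausted=True)
print(should_rebalance(0.9, beamforming_exhausted=True))  # True after a full window
```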
Probabilistic shaping and higher-order constellations are on the table. Where do you see the best spectral efficiency gains without untenable SNR demands, and how do you harden against phase noise and PA nonlinearity? Walk through your calibration and verification steps.
The sweet spot is pairing probabilistic shaping with constellation orders that our 400 MHz hardware can linearize reliably—shaping helps squeeze efficiency without demanding brittle SNR margins. We counter phase noise with LO cleanup specs agreed in partner labs and mitigate PA nonlinearity via digital predistortion tuned alongside the shaped symbol distribution. Calibration starts with conducted EVM sweeps, then radiated checks against golden traces, finishing with OTA under temperature drift. We verify by replaying identical vectors across vendors and ensuring the same spectral mask and BLER response show up in each lab.
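For the first step of that chain, a conducted EVM sweep, here is a minimal sketch using synthetic 16-QAM symbols with mild phase noise and additive noise; the impairment levels are illustrative, not measured.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.choice([-3, -1, 1, 3], size=(10_000, 2)) / np.sqrt(10)  # 16-QAM, unit power
ref = ref[:, 0] + 1j * ref[:, 1]

# Model mild phase noise plus additive noise, as seen before LO cleanup / DPD.
phase_noise = np.exp(1j * rng.normal(0, 0.02, ref.shape))
rx = ref * phase_noise + rng.normal(0, 0.01, ref.shape) + 1j * rng.normal(0, 0.01, ref.shape)

def evm_percent(received: np.ndarray, reference: np.ndarray) -> float:
    """RMS EVM as a percentage of reference RMS power."""
    err = np.mean(np.abs(received - reference) ** 2)
    return 100 * np.sqrt(err / np.mean(np.abs(reference) ** 2))

print(f"EVM: {evm_percent(rx, ref):.2f}%")  # compare against the golden-trace limit
```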
Uplink coverage and efficiency are priority outcomes. Which techniques—Giga-MIMO, UL power control refinements, HARQ optimizations—deliver the biggest uplink lift, and how will you validate them in dense urban and rural cells? Provide field test methodologies and target deltas.
Giga-MIMO on 6–8 GHz is the headline lever, but it only pays off with tighter uplink power control and HARQ tuned to realistic interference. We validate in dense urban with stacked cells and reflective paths, then in rural long-haul links to stress coverage. Methodology is consistent: mobility KPIs, link budgets matched to OTA logs, and BLER versus geometry under the 100 MHz uplink slice. Targets are qualitative but firm—demonstrate clear uplink coverage protection while keeping efficiency trending up across both scenarios before we scale trials.
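On the link-budget side of that methodology, a worked maximum-allowable-path-loss calculation for the 100 MHz uplink slice is sketched below; every figure is a placeholder assumption, not a measured trial value.

```python
import math

# Illustrative uplink link budget: maximum allowable path loss (MAPL) puts
# dense-urban and rural coverage on the same footing.
ue_tx_power_dbm        = 23.0    # UE transmit power
ue_antenna_gain_dbi    = 0.0
gnb_antenna_gain_dbi   = 25.0    # assumed Giga-MIMO array gain
noise_figure_db        = 5.0
bandwidth_hz           = 100e6   # the 100 MHz uplink slice
required_snr_db        = -3.0    # assumed cell-edge SNR target
interference_margin_db = 3.0
body_loss_db           = 3.0

thermal_noise_dbm = -174 + 10 * math.log10(bandwidth_hz)   # kTB at 290 K
rx_sensitivity_dbm = thermal_noise_dbm + noise_figure_db + required_snr_db

mapl_db = (ue_tx_power_dbm + ue_antenna_gain_dbi + gnb_antenna_gain_dbi
           - body_loss_db - interference_margin_db - rx_sensitivity_dbm)
print(f"Receiver sensitivity: {rx_sensitivity_dbm:.1f} dBm")   # -92.0 dBm
print(f"Max allowable path loss: {mapl_db:.1f} dB")            # 134.0 dB
```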
AI-native 6G implies distributed inference at the edge. What workloads live on telco servers versus radios, and how do CPUs, NPUs, and RAN accelerators split duties? Describe scheduling policies, model update cadences, and failover plans for brownfield constraints.
We keep heavy inference and model aggregation on power-optimized telco servers, close to the RAN but not crowding radio cabinets; lightweight feature extraction and time-critical inference ride in the radios. CPUs orchestrate, NPUs chew through neural ops, and RAN accelerators handle scheduling and layer-1 crunch. Scheduling favors latency-critical paths on-radio, bulk tasks on servers, with model updates batched on predictable cadences so brownfield power budgets aren’t blown. Failover is graceful: if a server pool goes dark, radios fall back to simpler policies until the next maintenance window.
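A minimal sketch of that placement policy, with an illustrative latency budget and task shapes, might look like this:

```python
from dataclasses import dataclass

@dataclass
class InferenceTask:
    name: str
    deadline_ms: float     # end-to-end latency budget
    compute_gflops: float

RADIO_LATENCY_BUDGET_MS = 2.0   # assumed bound for on-radio NPU paths

def place(task: InferenceTask, server_pool_up: bool) -> str:
    if task.deadline_ms <= RADIO_LATENCY_BUDGET_MS:
        return "radio-npu"            # time-critical: never leaves the radio
    if server_pool_up:
        return "telco-server"         # bulk inference and model aggregation
    return "radio-fallback-policy"    # graceful degradation until servers recover

tasks = [InferenceTask("beam-weight-refresh", 1.0, 0.2),
         InferenceTask("traffic-forecast", 500.0, 40.0)]
for t in tasks:
    print(t.name, "->", place(t, server_pool_up=False))
```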
Brownfield sites face tight space and power limits. What are the concrete power budgets per sector you’re designing to, and how will you retrofit cabinets without truck rolls spiraling? Share a step-by-step upgrade playbook from audit to cutover.
We design to brownfield constraints first: power-optimized telco servers and radio gear sized so they can live within cabinet envelopes without surprise overdraw. The retrofit playbook is disciplined—site audit, remote pre-staging, cabinet rewire plans, lab replication of the exact stack, then a short cutover window with rollback scripted. We compress truck rolls by shipping pre-calibrated gear aligned in partner labs and using remote commissioning where possible. Post-cutover, we watch power and thermal telemetry and only scale features once stability is proven.
Giga-MIMO in 6–8 GHz promises capacity but raises hardware complexity. How will you manage thermal envelopes, calibration drift, and cost per antenna element, and what manufacturing yields are acceptable? Provide the testing regimen that ensures array reliability at scale.
We manage thermal by choosing power-optimized radio designs and spreading load with smart duty cycles; calibration drift is handled with periodic OTA checks drawn from partner lab baselines. Cost per element comes down when RF alignment is shared across vendors, letting us standardize components for upper 6 GHz to 8.4 GHz. Yields improve when the same calibration scripts and acceptance limits travel from labs to factories. Reliability is earned with conducted tests, chamber-based radiated verification, and multi-vendor OTA that reproduces the 400 MHz, 300/100 SBFD behavior before arrays ship.
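One way to express the periodic drift check is sketched below: per-element phase readings are compared against the partner-lab baseline, with an assumed acceptance limit; all values are illustrative.

```python
# Sketch of the periodic OTA drift check: compare current per-element phase
# calibration against the partner-lab baseline and flag elements outside the
# acceptance limit.
BASELINE_PHASE_DEG = {0: 0.0, 1: 12.5, 2: -7.3, 3: 4.1}   # per-element baseline
DRIFT_LIMIT_DEG = 3.0                                      # assumed acceptance limit

def drifted_elements(current: dict[int, float]) -> list[int]:
    """Elements whose phase has drifted beyond the acceptance limit."""
    return [el for el, phase in current.items()
            if abs(phase - BASELINE_PHASE_DEG[el]) > DRIFT_LIMIT_DEG]

reading = {0: 0.4, 1: 16.2, 2: -7.0, 3: 4.3}
bad = drifted_elements(reading)
print("recalibrate elements:", bad if bad else "none")  # element 1 exceeds 3 degrees
```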
Operator economics hinge on efficiency. How do you model TCO improvements from AI-native scheduling, smarter beam management, and heterogeneous compute, and which KPIs must move to justify deployment? Offer a concrete ROI example with assumptions and sensitivity checks.
We model TCO by tying energy per bit and scheduler efficiency to site-level power draws and cabinet limits, then rolling it up across clusters. KPIs that must move include consistent throughput over 400 MHz, uplink coverage resilience, and stability under AI offload to telco servers. An ROI case passes when those KPIs improve together—if uplink coverage gains don’t show alongside energy savings, we halt. Sensitivity testing toggles AI placement between radios and servers to confirm savings hold even when brownfield constraints tighten.
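As a toy version of that sensitivity check, the sketch below converts energy per bit into annual energy cost and confirms the savings hold under both AI placements; traffic volume, prices, and energy figures are all placeholder assumptions, not modeled operator data.

```python
def annual_energy_cost(energy_per_bit_nj: float, bits_per_year: float,
                       price_per_kwh: float = 0.15) -> float:
    joules = energy_per_bit_nj * 1e-9 * bits_per_year
    return joules / 3.6e6 * price_per_kwh  # J -> kWh -> currency

BITS_PER_YEAR = 4e18  # assumed per-cluster annual traffic

baseline = annual_energy_cost(90.0, BITS_PER_YEAR)
scenarios = {
    "ai-on-servers": annual_energy_cost(78.0, BITS_PER_YEAR),
    "ai-on-radios":  annual_energy_cost(84.0, BITS_PER_YEAR),
}
for name, cost in scenarios.items():
    saving = baseline - cost
    verdict = "holds" if saving > 0 else "halt: savings vanish"
    print(f"{name}: saves {saving:,.0f}/yr -> {verdict}")
```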
Security and resilience must evolve with new compute models. How will you safeguard distributed AI pipelines, protect model integrity, and ensure graceful degradation during partial outages? Detail monitoring metrics, red-team exercises, and incident response drills.
We treat the AI pipeline like any critical RAN function: signed models, verified load paths, and telemetry that watches for drift or tampering. Integrity is protected with chain-of-trust from server to radio, and models roll back if verification fails. We red-team the OTA stack and the server edge, then drill partial-outage scenarios where radios revert to simpler schedules while telco servers recover. Monitoring stays practical: model versioning, inference latency, error rates, and energy-per-bit trends to spot anomalies early.
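A minimal sketch of the verify-then-load path with rollback is below; it uses a symmetric HMAC for brevity, where a real deployment would use asymmetric signatures anchored in a hardware root of trust, and the key and model bytes are placeholders.

```python
import hashlib, hmac

SIGNING_KEY = b"partner-lab-provisioned-key"   # placeholder

def sign(model_bytes: bytes) -> str:
    return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

def load_model(candidate: bytes, signature: str, last_good: bytes) -> bytes:
    """Verify before load; fall back to the last known-good model on failure."""
    if hmac.compare_digest(sign(candidate), signature):
        return candidate
    print("verification failed: rolling back")
    return last_good

v1 = b"model-weights-v1"
v2_tampered = b"model-weights-v2-TAMPERED"
active = load_model(v2_tampered, sign(b"model-weights-v2"), last_good=v1)
print("active model:", active.decode())  # stays on v1 after failed verification
```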
What is your forecast for 6G?
I expect the 2028 pre-commercial demos to feel real—end-to-end, with 400 MHz channels and that 300 MHz downlink/100 MHz uplink SBFD split proving its worth—and the 2029 rollout to start where spectrum and brownfield constraints line up. The winners will marry Giga-MIMO in the 6–8 GHz range with AI-native systems that respect cabinet and power limits from day one. Most of all, we’ll see uplink coverage and efficiency advance together so the user experience climbs without waste. If we keep study items flowing into work items and lock alignment in partner labs, 6G arrives not as a leap of faith, but as a steady, verifiable build.
