NVIDIA and T-Mobile are working with Nokia and an ecosystem of developers to embed GPUs in the RAN, for connectivity, inference and interacting with the physical world
Yesterday at the GTC event in California, NVIDIA and T-Mobile announced they are working with Nokia and an “ecosystem of developers to bring physical AI applications over distributed edge AI networks”. They plan to embed GPUs into the RAN to improve radio frequency signal processing and enable physical AI inference at the edge.
Commentator Sebastian Barros wrote in his blog, “Although we are really early in the game, it is abundantly clear that AI has left the central datacenters and training labs. It is now all about the physical world and inference, where telcos can play a big role”.
Rewiring the globe’s compute power?
Barros added, “What we are witnessing is a fundamental rewiring of how the world computes, and this groundbreaking development was showcased alongside 120 different robots at GTC 2026 today,” with NVIDIA’s founder and CEO, Jensen Huang talking about “Robotic AI Radio”. Huang’s image is at the centre of the graphic above.
Huang said in a statement, “By turning the 5G network into a distributed AI computer with T-Mobile and Nokia, we’re creating a scalable blueprint for the world’s edge AI infrastructure.” In less extravagant terms, the ecosystem play is intended to create a foundation for developers to deploy agents that ‘understand’ the physical world across cities, utilities and industrial worksites using NVIDIA Metropolis Blueprint for video search and summarization (VSS).
T-Mobile was the first operator to launch a 5G SA network nationwide in 2020 and is the first in the US to pilot NVIDIA’s AI-RAN infrastructure with Nokia’s anyRAN software. The operator is now working with “NVIDIA physical AI partners,” demonstrating how cell sites and mobile switching offices (MSOs – more usually called mobile switching centres on this side of the Atlantic) can support distributed edge AI workloads while delivering 5G connectivity.
Solution looking for a problem?
According to press statements, the transition to AI-RAN built on NVIDIA accelerated computing addresses a critical bottleneck in scaling physical AI, namely, the lack of low-latency, secure and ubiquitous connectivity.
In Huang’s vision for telecoms, the greatest potential of AI lies in interaction with the physical world, such as autonomous vehicles, smart city applications and industrial robotics – now referred to as physical AI. However, the physical world generates immense amounts of unstructured data from sensors such as video, radio frequencies and light detection and ranging (LiDAR).
It is impractical to send all this back to the centralised cloud because of what the press release calls “crippling latency”, adding, “A factory robot cannot wait 200 milliseconds for a cloud server to generate the tokens required to tell it to stop”. So tokens must be created at the edge where the data is generated and, happily, there are many millions of cell towers and mobile switching centres all over the world, which “are the perfect, pre-existing real estate for these new intelligence factories”. Bingo.
Pilot use cases include:
- Smart city operations – LinkerVision, Inchor and Voxelmaps are testing integrated computer vision-based ‘City Operations Agents’ and a digital twin that can perceive, simulate and optimise traffic light timing, targeting incident response times that are 5 times faster for the City of San Jose.
- Automated utility inspection – Levatas and Skydio are automating the inspection of hundreds of thousands of miles of transmission lines over 5G with NVIDIA compute to detect and resolve anomalies such as leaning power poles, corrosion and thermal hotspots 5 times faster. They are now evaluating AI-RAN infrastructure to further reduce costs, improve storm recovery time and accelerate the shift from reactive to predictive maintenance.
- Vision-based facility management – Developers such as Vaidio are using the VSS blueprint to build facility management agents that move beyond simple sensors to perform threat detection and failure forecasting, triggering automated workflows to improve facility management.
- Real-time industrial safety – Fogsphere provides safety AI agents for SAIPEM to detect and respond in real-time to hazardous events — such as workers under suspended loads or hydrocarbon spills — in high-risk construction onshore, offshore and drilling environments. Fogsphere is now validating how AI-RAN infrastructure can enhance the capabilities and performance of these agents — already running 24/7 without relying on Wi-Fi — over secure and distributed network compute.
T-Mobile says these initiatives reflect its broader strategy to test and enable edge AI capabilities in collaboration with NVIDIA, Nokia and a diverse ecosystem of software providers, manufacturers and enterprise innovators.
NVIDIA’s AI-RAN portfolio encompasses NVIDIA ARC-Pro built on NVIDIA RTX PRO 4500 Blackwell Server Edition for power-constrained cell sites, and NVIDIA RTX PRO 6000 Blackwell Server Edition for higher-capacity mobile switching offices/centres.
“Turning networks into distributed AI computing platforms to unlock the full potential of Physical AI will require ultra-low latency and space time coherency at the network edge for billions of endpoints, and that’s what we’ve built at T-Mobile,” enthused Srini Gopalan, Chief Executive Officer of T-Mobile. “With the first nationwide 5G Standalone and 5G Advanced network, we are uniquely positioned to help power a future where intelligent systems don’t wait on the cloud but rely on intelligent networks that allow them to act in real time.”