
    Nvidia buys AI edge cloud management company 


    The acquisition of GPU orchestration software provider Run:ai shows Nvidia wants the edge and the RAN

    Nvidia continues to cover all bases of the telco AI value chain, having already positioned its vRAN solution as the anchor tenant for multipurpose edge cloud infrastructure. It is now spending $700m to acquire Israeli orchestration startup Run:ai, which promotes efficient cluster resource utilization for AI workloads across shared accelerated computing infrastructure. 

    Run:ai’s forte is Kubernetes-based workload management and orchestration software, and the acquisition demonstrates Nvidia’s thinking on how the service provider edge will evolve, or at least how it wants the edge to evolve.  

    Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on premises, in the cloud or in hybrid environments. The company has built an open platform on Kubernetes, the orchestration layer for modern AI and cloud infrastructure. It supports all popular Kubernetes variants and integrates with third-party AI tools and frameworks. 
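
    As a rough illustration of what this Kubernetes-based approach looks like in practice, the sketch below uses the official Kubernetes Python client to submit a training pod whose placement is handed to a platform scheduler via Kubernetes’ standard schedulerName field. The scheduler name, namespace and container image here are illustrative assumptions, not details confirmed by Nvidia or Run:ai.

        # Minimal sketch: submit an AI training pod and delegate its placement
        # to a platform scheduler through Kubernetes' standard schedulerName field.
        # "runai-scheduler", the "team-a" namespace and the image are assumptions.
        from kubernetes import client, config

        config.load_kube_config()  # authenticate using the local kubeconfig

        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name="train-job", namespace="team-a"),
            spec=client.V1PodSpec(
                scheduler_name="runai-scheduler",  # platform scheduler, not the default
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="nvcr.io/nvidia/pytorch:24.03-py3",
                        command=["python", "train.py"],
                        resources=client.V1ResourceRequirements(
                            # ask for one whole GPU from the shared cluster pool
                            limits={"nvidia.com/gpu": "1"}
                        ),
                    )
                ],
            ),
        )

        client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)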

    The startup’s customers include some of the world’s largest enterprises across multiple industries, which use the Run:ai platform to manage data-centre-scale GPU clusters. 

    The Run:ai platform also offers useful features, such as the ability to pool GPUs and share computing power (from fractions of GPUs to multiple GPUs, or multiple nodes of GPUs running on different clusters) across separate tasks. Administrators can also add users, group them into teams, grant access to cluster resources, control quotas, priorities and pools, and monitor and report on resource use. 
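
    To make the pooling and quota mechanics concrete, here is a deliberately simplified Python sketch of the general idea: teams draw whole or fractional GPU allocations from a shared pool, bounded by per-team quotas. It illustrates the concept only and is not Run:ai’s actual scheduling algorithm.

        # Toy sketch of quota-based GPU pooling: teams draw fractional GPU
        # allocations from a shared pool, bounded by per-team quotas.
        # Conceptual illustration only, not Run:ai's real scheduler.
        from dataclasses import dataclass, field

        @dataclass
        class Team:
            name: str
            quota: float          # max GPUs the team may hold, e.g. 4.0
            allocated: float = 0.0

        @dataclass
        class GpuPool:
            capacity: float       # total GPUs across all nodes and clusters
            used: float = 0.0
            teams: dict = field(default_factory=dict)

            def add_team(self, team: Team):
                self.teams[team.name] = team

            def request(self, team_name: str, gpus: float) -> bool:
                """Grant a fractional or multi-GPU request only if both the
                pool and the team's quota can absorb it."""
                team = self.teams[team_name]
                if team.allocated + gpus > team.quota:
                    return False  # would exceed the team's quota
                if self.used + gpus > self.capacity:
                    return False  # shared pool is exhausted
                team.allocated += gpus
                self.used += gpus
                return True

        pool = GpuPool(capacity=8.0)
        pool.add_team(Team("research", quota=6.0))
        pool.add_team(Team("prod", quota=4.0))

        print(pool.request("research", 0.5))  # True: half a GPU from the pool
        print(pool.request("prod", 4.0))      # True: four whole GPUs
        print(pool.request("prod", 0.5))      # False: prod's quota is spent

    A production scheduler would layer priorities, preemption and monitoring on top of this basic accounting, but the pool-plus-quota bookkeeping is the core of the sharing model described above.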

    Nvidia said it will continue to offer Run:ai’s products under the same business model for the immediate future, and will continue to invest in the Run:ai product roadmap, including enabling it on Nvidia DGX Cloud, an AI platform co-engineered with leading clouds for enterprise developers that offers an integrated, full-stack service optimized for generative AI. 

    Telcos using Nvidia HGX, DGX and DGX Cloud will gain access to Run:ai’s capabilities for their AI workloads, particularly for large language model deployments. Run:ai’s solutions are already integrated with Nvidia DGX, DGX SuperPOD, Base Command, NGC containers and Nvidia AI Enterprise software, among other products. 

    Keeping on top of the AI ecosystem 

    Nvidia dominates the GPU market and that won’t change anytime soon. However, the silicon giant knows it needs to stay ahead of the pack. When it comes to telcos, custom chips matter: cloud computing and AI demands have grown increasingly specialised, which calls for tailored silicon to achieve optimal performance and better efficiency. Earlier this year, Nvidia moved to head off the likes of AMD by establishing a separate business unit dedicated to building custom AI chips for external clients. 

    Around the same time, Nvidia announced it was forming the AI-RAN Alliance with heavyweights including Ericsson, Nokia and Samsung. Nvidia believes that by consolidating RAN workloads and other AI/ML applications onto a shared GPU platform, telcos can reduce hardware costs, simplify infrastructure and get more bang for their buck.  

    But using GPUs in the RAN could lock telcos into Nvidia’s ecosystem. And as Joe Madden, founder and president of consultancy Mobile Experts Inc, recently pointed out on LinkedIn, there is a bigger psychological barrier to Nvidia owning the RAN space: operators are unwilling to run their RAN workload on the same processor as anybody else’s edge app.  

    “Somebody could deploy the GPUs in the edge, but the operators need to be convinced that they should run their L1/L2 on the Nvidia GPU, while other people use the same GPU for other purposes,” he said. “They are very risk-averse, so this is a challenge.”