Nvidia Charts Ambitious AI Future with Robotics and Open Infrastructure at Computex 2025
Taipei, Taiwan—Nvidia made a splash at this year’s Computex Taipei tech expo on Monday, unveiling announcements that range from humanoid robot development to a strategic opening of its high-performance NVLink interconnect technology. The moves will let companies build semi-custom AI servers on Nvidia’s infrastructure, further cementing the company’s central role in the artificial intelligence boom.
The announcements arrive as Nvidia enjoys a period of strong momentum, notably bolstered by the recent U.S. decision to abandon the Biden administration’s AI diffusion rules. These rules would have imposed limitations on which countries could acquire the company’s advanced AI chips. Adding to this favorable climate, Nvidia was a key topic during President Trump’s visit to Saudi Arabia, where it was confirmed that the company will supply several hundred thousand AI processors over the next five years to Humain, an AI startup under the umbrella of Saudi Arabia’s sovereign wealth fund.
Venturing into Physical AI: The Dawn of Humanoid Robotics
A highlight of Nvidia’s presentation was the reveal of Nvidia Isaac GR00T-Dreams. According to the company, this platform is engineered to help developers generate vast quantities of training data. This data is crucial for teaching robots to perform a diverse array of behaviors and to adapt effectively to new and dynamic environments.
Nvidia CEO Jensen Huang has vocally championed the potential of physical AI, stating that it represents “the world’s next trillion-dollar industry.” To realize this vision, Nvidia is heavily investing in building the sophisticated software necessary to train and operate humanoid robots, initially targeting factory applications before an eventual expansion into homes. This strategic focus underscores Nvidia’s ambition to be at the forefront of the next wave of AI-driven automation.
NVLink Fusion: Revolutionizing AI Server Customization
In a significant move to give its customers more flexibility, Nvidia introduced its new NVLink Fusion offering. The technology lets customers build custom servers by pairing Nvidia’s Grace CPU with a third-party AI chip, all integrated within Nvidia’s server infrastructure offerings. Alternatively, clients can pair their own CPUs with one of Nvidia’s AI chips.
“Using NVLink Fusion, hyperscalers can work with the NVIDIA partner ecosystem to integrate NVIDIA rack-scale solutions for seamless deployment in data center infrastructure,” the company said in a statement. The goal is to give infrastructure customers a broader range of choices when designing their data center and server systems, allowing for more tailored and efficient AI operations.
Empowering Enterprise AI: The RTX Pro Blackwell Servers
Further expanding its hardware portfolio, Nvidia is developing what it terms its RTX Pro Blackwell servers. These systems, powered by the company’s Blackwell Server Edition GPUs, are designed to spearhead “the shift from CPU-based systems to efficient GPU-accelerated infrastructure,” according to Nvidia.
The company elaborated that these powerful servers are intended to manage “virtually every enterprise workload.” This encompasses a wide array of applications, from demanding design and simulation software to running sophisticated agentic AI programs and beyond, promising a significant leap in processing capability and efficiency for businesses.
Democratizing AI Development with DGX Cloud Lepton
Nvidia also debuted its DGX Cloud Lepton platform, which gives customers access to cloud-based AI processing so they can develop and deploy their own AI software with greater ease and scalability. Nvidia said it achieves this by collaborating with a network of partners, including CoreWeave, Foxconn, and SoftBank, which will host a global network of GPU clouds, making high-performance AI resources more accessible to developers and organizations worldwide.
Through these multifaceted announcements, Nvidia is not only showcasing its current technological prowess but also laying a clear and ambitious roadmap for its continued leadership in the ever-expanding universe of artificial intelligence, from foundational hardware and software to the intelligent machines of tomorrow.