HELSINKI, Finland, Feb 10, 2025 – Eficode is using NVIDIA NIM microservices and NVIDIA NeMo – both part of the NVIDIA AI Enterprise software platform – to bring generative AI (GenAI) into software development. This collaboration helps enterprises accelerate GenAI adoption across their Software Development Lifecycle (SDLC) and provides organizations with the tools and assistance required to build the future of software development.
Software development today faces challenges beyond just coding. Communication barriers, misaligned processes, and operational inefficiencies can cost organizations time and resources. Experts in software economics (Boehm, DeMarco, Lister, Jones, Brooks, and others) estimate that 80% of development costs relate to sociotechnical factors, including how teams communicate and coordinate workflows.
Eficode addresses these issues by deploying AI agents and assistants to enhance workflows, automate routine tasks, and boost teamwork. NVIDIA NeMo supports AI model development, while NVIDIA NIM microservices provide an option for scalable inference. This combination helps businesses incorporate AI into their current DevOps systems effectively.
Eficode’s integration of these technologies delivers a structured, three-phase approach to bringing AI into software development, along with AI-powered tools that streamline the process:
GenAI foundation: Eficode assesses the infrastructure and workflows, providing organizations with a strategic roadmap for AI adoption. NVIDIA NIM microservices enable flexible deployment across on-premises, multi-cloud, and hybrid environments. Hands-on training workshops help ensure teams are ready to implement AI-powered solutions.
GenAI capability: Eficode develops and deploys custom AI agents and assistants using NVIDIA NeMo to address key workflows: Portfolio Agents that align requirements and specifications across teams, Quality Assurance and Test Automation Agents that automate test generation to improve software quality, and Refactoring Agents that modernize and optimize legacy code. Pilot programs validate AI solutions in real-world environments, enabling organizations to scale incrementally.
GenAI acceleration and support: The NVIDIA AI Enterprise platform ensures continuous optimization and lifecycle management of AI agents. NVIDIA NeMo, NIM microservices, and Eficode’s DevOps AI safety net enable iteratively scalable solutions, such as automated validator oracles, that enhance decision-making and task execution in every phase of service and product development.
“Our collaboration empowers organizations to realize the full potential of generative AI,” said Ilari Nurmi, CEO of Eficode. “With the NVIDIA AI Enterprise platform, we provide a clear, phased approach to AI adoption — helping enterprises move from experimentation to full-scale integration, faster time-to-market, and greater innovation.”
Eficode’s solutions enable software development, quality assurance, and portfolio management professionals to concentrate on the value they bring to the business. NVIDIA’s AI platform helps organizations respond to changing demands, incorporate new AI models, and use resources effectively. With Eficode’s experience in AI management across DevOps platforms, companies enjoy enhanced flexibility, cost savings, and high performance over time.
About Eficode
Eficode, founded in 2005, is a European DevOps and Agile consultancy headquartered in Helsinki, Finland. The company specializes in helping organizations develop high-quality software more efficiently by offering services such as DevOps transformation, Agile practices, cloud capabilities, and UX design. Eficode serves a diverse range of industries, including finance, manufacturing, telecommunications, public sector, and defense. As of December 2024, the company employs approximately 616 professionals across 18 locations in 10 countries, including offices in Helsinki, Copenhagen, Berlin, and Amsterdam. Eficode’s annual revenue is reported to be in the range of $100 million to $1 billion. The company is recognized for its expertise in modern software development methodologies and its commitment to continuous learning and innovation.
Supermicro Ramps Full Production of NVIDIA Blackwell Rack-Scale Solutions with NVIDIA HGX B200
SAN JOSE, CA, Feb 6, 2025 – Supermicro, Inc. has announced the production availability of its end-to-end AI data center building block solutions accelerated by the NVIDIA Blackwell platform. The Supermicro building block portfolio provides the core infrastructure elements necessary to scale Blackwell solutions with exceptional time to deployment. The portfolio includes a broad range of air-cooled and liquid-cooled systems with multiple CPU options and thermal designs supporting traditional air cooling, liquid-to-liquid (L2L) cooling, and liquid-to-air (L2A) cooling. In addition, a full data center management software suite, rack-level integration (including full network switching and cabling), and cluster-level L12 solution validation can be delivered as a turnkey offering with global delivery, professional support, and service.
“In this transformative moment of AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power,” said Charles Liang, president and CEO of Supermicro. “Supermicro’s NVIDIA Blackwell GPU offerings in plug-and-play scalable units with advanced liquid cooling and air cooling are empowering customers to deploy an infrastructure that supports increasingly complex AI workloads while maintaining exceptional efficiency. This reinforces our commitment to providing sustainable, cutting-edge solutions that accelerate AI innovation.”
Supermicro’s NVIDIA HGX B200 8-GPU systems utilize next-gen liquid-cooling and air-cooling technology. The newly developed cold plates and the new 250kW coolant distribution unit (CDU) more than double the cooling capacity of the previous generation in the same 4U form factor. Available in 42U, 48U, or 52U configurations, the rack-scale design uses new vertical coolant distribution manifolds (CDMs) that no longer occupy valuable rack units. This enables 8 systems, comprising 64 NVIDIA Blackwell GPUs, in a 42U rack, and up to 12 systems with 96 NVIDIA Blackwell GPUs in a 52U rack.
The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1000W TDP Blackwell GPUs. Up to four of the new 10U air-cooled systems can be installed and fully integrated in a rack, matching the density of the previous generation while providing up to 15x inference and 3x training performance.
The new SuperCluster designs incorporate NVIDIA Quantum-2 InfiniBand or NVIDIA Spectrum-X Ethernet networking in a centralized rack, enabling a non-blocking, 256-GPU scalable unit in five racks or an extended 768-GPU scalable unit in nine racks. The architecture — purpose-built for NVIDIA HGX B200 systems with native support for the NVIDIA AI Enterprise software platform for developing and deploying production-grade, end-to-end agentic AI pipelines — combined with Supermicro’s expertise in deploying the world’s largest liquid-cooled data centers delivers exceptional efficiency and time-to-online for today’s most ambitious AI data center projects.
Liquid-Cooled or Air-Cooled: Supermicro NVIDIA HGX B200 Systems
Liquid-cooled NVIDIA HGX B200 Systems and Racks. Image: Supermicro
The new liquid-cooled 4U NVIDIA HGX B200 8-GPU system features newly developed cold plates and an improved tubing design that enhance the efficiency and serviceability of the predecessor design used for the NVIDIA HGX H100/H200 8-GPU system. It is complemented by a new 250kW coolant distribution unit that more than doubles the cooling capacity of the previous generation while maintaining the same 4U form factor. The new rack-scale design, with its vertical coolant distribution manifolds (CDMs), enables a denser architecture with flexible configurations for various data center environments. Supermicro offers 42U, 48U, or 52U rack configurations for liquid-cooled data centers: the 42U and 48U configurations provide 8 systems and 64 GPUs per rack, forming a 256-GPU scalable unit across five racks, while the 52U configuration allows 96 GPUs per rack and enables a 768-GPU scalable unit across nine racks for the most advanced AI data center deployments. Supermicro also offers an in-row CDU option for large deployments, as well as a liquid-to-air cooling rack solution that doesn’t require facility water.
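As a quick sanity check, the GPU counts quoted for these rack and scalable-unit configurations follow directly from the 8-GPU-per-system building block. The sketch below is an illustration only; the assumption that one rack in each scalable unit is the centralized networking rack is taken from the SuperCluster description.

```python
GPUS_PER_SYSTEM = 8  # each NVIDIA HGX B200 system carries 8 GPUs

def rack_gpus(systems_per_rack: int) -> int:
    """GPUs in one compute rack."""
    return systems_per_rack * GPUS_PER_SYSTEM

# 42U/48U liquid-cooled rack: 8 systems; 52U rack: 12 systems
assert rack_gpus(8) == 64
assert rack_gpus(12) == 96

# Scalable units: one rack of each unit is the centralized networking
# rack, so five racks -> four compute racks, nine racks -> eight
assert 4 * rack_gpus(8) == 256   # 256-GPU unit in five racks
assert 8 * rack_gpus(12) == 768  # 768-GPU unit in nine racks
```

The arithmetic simply confirms that the per-rack and per-unit figures in the announcement are internally consistent.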
Supermicro’s NVIDIA HGX B200 systems natively support NVIDIA AI Enterprise software to accelerate time to production AI. NVIDIA NIM microservices allow organizations to access the latest AI models for fast, secure, and reliable deployment on NVIDIA accelerated infrastructure anywhere – whether in data centers, the cloud or workstations.
For traditional data centers, the new 10U air-cooled NVIDIA B200 8-GPU system is also available, with a redesigned modular GPU tray to house the NVIDIA Blackwell GPUs in an air-cooled environment. The air-cooled rack design follows the proven, industry-leading architecture of the previous generation, four systems and 32 GPUs in a 48U rack, while providing NVIDIA Blackwell performance. All Supermicro NVIDIA HGX B200 systems are equipped with a 1:1 GPU-to-NIC ratio supporting NVIDIA BlueField-3 SuperNICs or NVIDIA ConnectX-7 NICs for scaling across a high-performance compute fabric.
Supermicro provides support for systems included in the NVIDIA-Certified Systems program. This program incorporates NVIDIA GPUs, CPUs, and high-speed, secure networking technologies into systems from leading NVIDIA partners, ensuring configurations that are validated for optimal performance, reliability, and scalability. By choosing an NVIDIA-Certified System, enterprises can confidently select hardware solutions to power their accelerated computing workloads. NVIDIA has certified Supermicro systems with NVIDIA H100 and H200 GPUs.
End-to-end Liquid-Cooling Solution for NVIDIA GB200 NVL72
Supermicro NVIDIA GB200 NVL72 SuperCluster features the new advanced in-rack coolant distribution unit. Image: Supermicro
Supermicro’s SuperCluster solution, based on the NVIDIA GB200 NVL72 system, represents a breakthrough in AI computing infrastructure, built on Supermicro’s end-to-end liquid-cooling technology. The system integrates 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs in a single rack, delivering exascale computing capabilities through NVIDIA’s most extensive NVLink network to date, achieving 130 TB/s of GPU communications.
The 48U solution’s versatility supports both liquid-to-air and liquid-to-liquid cooling configurations, accommodating various data center environments. Additionally, Supermicro’s SuperCloud Composer software provides management tools for monitoring and optimizing liquid-cooled infrastructure, delivering a complete solution from proof of concept to full-scale deployment.
End-to-end Data Center Solution and Deployment Services for NVIDIA Blackwell
From proof-of-concept (PoC) to full-scale deployment, Supermicro serves as a comprehensive one-stop solution provider with global manufacturing scale, delivering all necessary components, data center-level solution design, liquid-cooling technologies, networking solutions, cabling, management software, testing and validation, and onsite installation services. Its in-house liquid-cooling ecosystem offers a complete, custom-designed thermal management solution, featuring optimized cold plates for GPUs, CPUs, and memory modules, along with versatile coolant distribution unit form factors and capacities, manifolds, hoses, connectors, cooling towers, and sophisticated monitoring and management software. With production facilities across San Jose, Europe, and Asia, Supermicro offers unmatched manufacturing capacity for liquid-cooled rack systems, ensuring timely delivery, reduced total cost of ownership (TCO) and environmental impact, and consistent quality.
About Super Micro Computer, Inc.
Super Micro Computer Inc., or Supermicro, is a leading provider of high-performance server technology and green computing solutions. Founded in 1993 by Charles Liang and Sara Liu, the company is headquartered in San Jose, California. Supermicro offers a complete range of products, including servers, storage systems, networking devices, and server management software, serving industries such as enterprise data centers, cloud computing, artificial intelligence, 5G, and edge computing. As of June 2023, the company employs approximately 5,126 individuals globally. In the fiscal year 2024, Supermicro reported revenues of approximately $15 billion, reflecting significant growth driven by its innovative solutions and expanding market presence.
AUSTIN, TX, Jan 24, 2025 – BOXX Technologies has announced that, as a supplier of NVIDIA-Certified Systems, select BOXX products will support the new NVIDIA GeForce RTX 50 Series GPUs as they become available. The new NVIDIA Blackwell architecture GPUs combine the latest-generation RT Cores and Tensor Cores with GDDR7 memory, increased clock speeds, and more VRAM to deliver improved AI, graphics, rendering, and ray-tracing performance.
“Our support for the latest NVIDIA GeForce RTX 50 Series technology is essential because these GPUs accelerate application performance and creative workflows,” said BOXX CEO Kirk Schell. “Now video editors, VFX artists, animators, architects, and other content creators can take advantage of all AI has to offer and design, render, collaborate, and meet project deadlines faster than ever before.”
The NVIDIA GeForce RTX 50 Series GPUs supported by BOXX systems feature the latest Blackwell technology for accelerated AI and ray tracing, as well as up to 1.8TB/s of GDDR7 memory bandwidth to power:
Faster content creation
Multi-application workflows
Improved AI and machine learning support
The new GPU series also offers 33% more VRAM than the previous generation, enabling users of Adobe Creative Cloud, DaVinci Resolve, Cinema 4D, Revit, Rhino, and other applications supported by NVIDIA Studio Drivers, to optimize creative tasks like:
Next-gen raytracing & AI-powered graphics
AI-assisted video editing and rendering
Real-time 8K video editing
To accelerate V-Ray, Autodesk Arnold, Lumion, and other 3D renderers supported by NVIDIA Studio Drivers, the new NVIDIA GeForce RTX 50 Series GPUs inside BOXX systems feature DLSS 4 with Multi Frame Generation, which multiplies frame rates by up to eight times over traditional rendering while delivering superior image quality.
“Demanding graphics and workflows require powerful, purpose-built solutions,” added Schell, “and NVIDIA GeForce RTX 50 Series GPUs supported by innovative BOXX solutions give creators the performance they need to run the latest 3D and AI-accelerated applications.”
About BOXX
Founded in 1996 and headquartered in Austin, Texas, BOXX Technologies specializes in high-performance computer workstations tailored for professionals in architecture, engineering, product design, visual effects, animation, and data science. Their product lineup includes deskside and rack-mounted workstations, rendering systems, and servers, all designed to accelerate workflows in industries such as media and entertainment, manufacturing, and government. BOXX’s APEXX series, for instance, offers performance-tuned, liquid-cooled systems featuring the latest Intel Core Ultra and AMD Ryzen processors, delivering unparalleled speed and reliability. The company maintains its design, manufacturing, and support operations at its Austin headquarters, ensuring quality control and customer service excellence. With a global presence through 40 international resellers, BOXX provides purpose-built solutions that enhance productivity and meet the demanding requirements of creative professionals worldwide. As of 2024, BOXX Technologies employs approximately 67 individuals and has an estimated annual revenue of $24.1 million.
SAN JOSE, CA, Jan 23, 2025 – Cadence has announced that MediaTek has adopted the AI-driven Cadence Virtuoso Studio and Spectre X Simulator on the NVIDIA accelerated computing platform for its 2nm development. As design size and complexity continue to escalate, advanced-node technology development has become increasingly challenging for SoC providers. To meet the aggressive performance and turnaround time (TAT) requirements for its 2nm high-speed analog IP, MediaTek is leveraging Cadence’s proven custom/analog design solutions, enhanced by AI, to achieve a 30% productivity gain.
“As MediaTek continues to push technology boundaries for 2nm development, we need a trusted design solution with strong AI-powered tools to achieve our goals,” said Ching San Wu, corporate vice president at MediaTek. “Closely collaborating with Cadence, we have adopted the Cadence Virtuoso Studio and Spectre X Simulator, which deliver the performance and accuracy necessary to achieve our tight design turnaround time requirements. Cadence’s comprehensive automation features enhance our throughput and efficiency, enabling our designers to be 30% more productive.”
MediaTek has used the Virtuoso ADE Suite to add its AI-based optimization algorithm to streamline future product development. This has helped its designers work more efficiently on circuit designs. Cadence’s Spectre X running on NVIDIA H100 GPUs delivers the same accuracy as Spectre X running on CPUs while delivering up to a 6X performance improvement for post-layout simulations of large, advanced-node designs.
“Improved performance and efficiency are key to advancing today’s complex chip design processes,” said Dion Harris, director of accelerated computing at NVIDIA. “With Cadence’s Spectre X running on NVIDIA Hopper GPUs, companies like MediaTek can accelerate the verification of their complex post-layout designs, maximize analog circuit simulation performance and reduce time to market.”
MediaTek’s analog layout team now uses the Virtuoso Layout Suite device-level router for custom digital blocks in 2nm technology, improving layout efficiency. Additionally, MediaTek is leveraging AI and Virtuoso’s open platform to create a prototyping placement and low-power prediction process. This approach improves design productivity by 30%.
“MediaTek’s validation of our latest Virtuoso Studio release and Spectre X Simulator on NVIDIA’s accelerated computing platform demonstrates that Cadence’s continued investment in enhancing our industry-leading custom design solutions and AI tools is a game changer for our customers’ most challenging 2nm designs,” said Vinod Kariat, corporate vice president and general manager of the Custom Products Group at Cadence. “Bringing the power of AI and GPUs to Spectre X enables MediaTek to solve its large-scale verification simulation challenges even more quickly, without sacrificing accuracy.”
HOUSTON, TX, Jan 16, 2025 – ionstream.ai has announced the immediate availability of NVIDIA L40S GPUs on its GPU as a Service (GaaS) platform. This strategic expansion provides organizations with a cost-effective solution optimized for AI inference and fine-tuning tasks, offering an alternative to larger, more expensive GPU options.
Source: ionstream
“Organizations are looking for right-sized GPU solutions that match their specific AI workloads,” said Jeff Hinkle, chief executive officer at ionstream.ai. “The addition of the NVIDIA L40S to our cloud platform provides enterprises with the ideal infrastructure for inference and model refinement tasks, delivering the perfect balance of performance and cost-efficiency.”
Enterprise-Grade AI Infrastructure, On Demand
The NVIDIA L40S GPU, powered by the Ada Lovelace architecture, represents a breakthrough in AI infrastructure accessibility. ionstream.ai’s implementation delivers:
Advanced AI Capabilities:
Optimized for AI inference and fine-tuning workflows
Ideal for production-scale model deployment
Cost-effective alternative to H100 and H200 GPUs for inference tasks
Multi-user support for enterprise workloads
Revolutionary Cost Economics:
Right-sized infrastructure for inference workloads
Improved energy efficiency for sustainable operations
Zero upfront capital expenditure
Pay-as-you-go pricing with per-minute billing
Transforming Enterprise AI Capabilities
The L40S platform enables efficient AI model deployment across industrial domains:
Oil & Gas Exploration: Process complex seismic data through high-performance computing capabilities, enabling rapid subsurface imaging and reservoir characterization. The L40S accelerates traditional seismic processing workflows while supporting emerging AI-enhanced interpretation methods, reducing time-to-insight for critical exploration decisions.
Healthcare & Life Sciences: Deploy medical imaging models and fine-tune diagnostic systems
Financial Services: Run real-time inference for fraud detection and risk analysis
Automotive & Manufacturing: Power production-ready computer vision applications
NVIDIA announced that Toyota, Aurora and Continental have joined the list of global mobility leaders developing and building their consumer and commercial vehicle fleets on NVIDIA accelerated computing and AI.
Toyota, the world’s largest automaker, will build its next-generation vehicles on NVIDIA DRIVE AGX Orin, running the safety-certified NVIDIA DriveOS operating system. These vehicles will offer functionally safe, advanced driving assistance capabilities.
The majority of present-day auto manufacturers, truck makers, robotaxi and autonomous delivery vehicle companies, tier-one suppliers, and mobility startups are developing on the NVIDIA DRIVE AGX platform and technologies. With cutting-edge platforms spanning cloud-based training, simulation, and in-vehicle compute, NVIDIA’s automotive vertical business is expected to grow to approximately $5 billion in fiscal year 2026.
“The autonomous vehicle revolution has arrived, and automotive will be one of the largest AI and robotics industries,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA is bringing two decades of automotive computing, safety expertise and its CUDA AV platform to transform the multitrillion dollar auto industry.”
Aurora, Continental and NVIDIA also announced a long-term strategic partnership to deploy driverless trucks at scale, powered by NVIDIA DRIVE. NVIDIA’s accelerated compute running DriveOS will be integrated into the Aurora Driver, an SAE level 4 autonomous-driving system that Continental plans to mass-manufacture in 2027.
Other mobility companies adopting NVIDIA DRIVE AGX for their next-generation advanced driver-assistance systems and autonomous vehicle roadmaps include BYD, JLR, Li Auto, Lucid, Mercedes-Benz, NIO, Nuro, Rivian, Volvo Cars, Waabi, Wayve, Xiaomi, ZEEKR, Zoox and many more.
NVIDIA offers three core computing systems and the AI software essential for end-to-end autonomous vehicle development. NVIDIA DRIVE AGX is the in-vehicle computer. NVIDIA DGX processes the data from the fleet and trains AI models, and NVIDIA Omniverse and NVIDIA Cosmos running on NVIDIA OVX systems test and validate self-driving systems in simulation.
LAS VEGAS, NV, Jan 13, 2025 – NVIDIA announced NVIDIA Cosmos, a platform comprising state-of-the-art generative world foundation models, advanced tokenizers, guardrails and an accelerated video processing pipeline built to advance the development of physical AI systems such as autonomous vehicles (AVs) and robots.
Physical AI models are costly to develop and require vast amounts of real-world data and testing. Cosmos world foundation models, or WFMs, offer developers an easy way to generate massive amounts of photoreal, physics-based synthetic data to train and evaluate their existing models. Developers can also build custom models by fine-tuning Cosmos WFMs.
Cosmos models will be available under an open model license to accelerate the work of the robotics and AV community. Developers can preview the first models on the NVIDIA API catalog, or download the family of models and fine-tuning framework from the NVIDIA NGC catalog or Hugging Face.
Leading robotics and automotive companies, including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos.
“The ChatGPT moment for robotics is coming. Like large language models, world foundation models are fundamental to advancing robot and AV development, yet not all developers have the expertise and resources to train their own,” said Jensen Huang, founder and CEO of NVIDIA. “We created Cosmos to democratize physical AI and put general robotics in reach of every developer.”
Open World Foundation Models to Accelerate the Next Wave of AI
NVIDIA Cosmos’ suite of open models means developers can customize the WFMs with datasets, such as video recordings of AV trips or robots navigating a warehouse, according to the needs of their target application.
Cosmos WFMs are purpose-built for physical AI research and development, and can generate physics-based videos from a combination of inputs, like text, image and video, as well as robot sensor or motion data. The models are built for physically based interactions, object permanence, and high-quality generation of simulated industrial environments — like warehouses or factories — and of driving environments, including various road conditions.
In his opening keynote at CES, NVIDIA founder and CEO Jensen Huang showcased ways physical AI developers can use Cosmos models, including for:
Video search and understanding, enabling developers to easily find specific training scenarios, like snowy road conditions or warehouse congestion, from video data.
Physics-based photoreal synthetic data generation, using Cosmos models to generate photoreal videos from controlled 3D scenarios developed in the NVIDIA Omniverse platform.
Physical AI model development and evaluation, whether building a custom model on the foundation models, improving the models using Cosmos for reinforcement learning or testing how they perform given a specific simulated scenario.
Foresight and “multiverse” simulation, using Cosmos and Omniverse to generate every possible future outcome an AI model could take to help it select the best and most accurate path.
Advanced World Model Development Tools
Building physical AI models requires petabytes of video data and tens of thousands of compute hours to process, curate and label that data. To help save enormous costs in data curation, training and model customization, Cosmos features:
An NVIDIA AI and CUDA-accelerated data processing pipeline, powered by NVIDIA NeMo Curator, that enables developers to process, curate and label 20 million hours of videos in 14 days using the NVIDIA Blackwell platform, instead of over three years using a CPU-only pipeline.
NVIDIA Cosmos Tokenizer, a state-of-the-art visual tokenizer for converting images and videos into tokens. It delivers 8x more total compression and 12x faster processing than today’s leading tokenizers.
The NVIDIA NeMo framework for highly efficient model training, customization and optimization.
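The curation figures above imply a large end-to-end speedup for the GPU pipeline. A rough back-of-envelope calculation (assumption: “over three years” is taken as exactly three 365-day years, so the result is a conservative lower bound on the claimed speedup):

```python
# Implied speedup of the NeMo Curator pipeline on Blackwell vs. a
# CPU-only pipeline, from the "14 days vs. over three years" figures.
gpu_days = 14
cpu_days = 3 * 365  # conservative reading of "over three years"

speedup = cpu_days / gpu_days
print(f"Implied end-to-end speedup: at least ~{speedup:.0f}x")
```

Since “over three years” is a floor, the actual speedup NVIDIA is claiming is somewhat higher than this estimate.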
World’s Largest Physical AI Industries Adopt Cosmos
Pioneers across the physical AI industry are already adopting Cosmos technologies.
1X, an AI and humanoid robot company, launched the 1X World Model Challenge dataset using Cosmos Tokenizer. XPENG will use Cosmos to accelerate the development of its humanoid robot. And Hillbot and Skild AI are using Cosmos to fast-track the development of their general-purpose robots.
“Data scarcity and variability are key challenges to successful learning in robot environments,” said Pras Velagapudi, chief technology officer at Agility. “Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”
Transportation leaders are also using Cosmos to build physical AI for AVs:
Waabi, a company pioneering generative AI for the physical world starting with autonomous vehicles, is evaluating Cosmos in the context of data curation for AV software development and simulation.
Wayve, which is developing AI foundation models for autonomous driving, is evaluating Cosmos as a tool to search for edge and corner case driving scenarios used for safety and validation.
AV toolchain provider Foretellix will use Cosmos, alongside NVIDIA Omniverse Sensor RTX APIs, to evaluate and generate high-fidelity testing scenarios and training data at scale.
Global ridesharing giant Uber is partnering with NVIDIA to accelerate autonomous mobility. Rich driving datasets from Uber, combined with the features of the Cosmos platform and NVIDIA DGX Cloud, can help AV partners build stronger AI models even more efficiently.
“Generative AI will power the future of mobility, requiring both rich data and very powerful compute,” said Dara Khosrowshahi, CEO of Uber. “By working with NVIDIA, we are confident that we can help supercharge the timeline for safe and scalable autonomous driving solutions for the industry.”
Developing Open, Safe and Responsible AI
NVIDIA Cosmos was developed in line with NVIDIA’s trustworthy AI principles, which prioritize privacy, safety, security, transparency and reducing unwanted bias.
Trustworthy AI is essential for fostering innovation within the developer community and maintaining user trust. NVIDIA is committed to safe and trustworthy AI, in line with the White House’s voluntary AI commitments and other global AI safety initiatives.
The open Cosmos platform includes guardrails designed to mitigate harmful text and images, and features a tool to enhance text prompts for accuracy. Videos generated with Cosmos autoregressive and diffusion models on the NVIDIA API catalog include invisible watermarks to identify AI-generated content, helping reduce the chances of misinformation and misattribution.
NVIDIA encourages developers to adopt trustworthy AI practices and further enhance guardrail and watermarking solutions for their applications.
Cosmos WFMs are now available under NVIDIA’s open model license on Hugging Face and the NVIDIA NGC catalog. Cosmos models will soon be available as fully optimized NVIDIA NIM microservices.
Developers can access NVIDIA NeMo Curator for accelerated video processing and customize their own world models with NVIDIA NeMo. NVIDIA DGX Cloud offers a fast and easy way to deploy these models, with enterprise support available through the NVIDIA AI Enterprise software platform.
LAS VEGAS, NV, Jan 13, 2025 – NVIDIA unveiled NVIDIA Project DIGITS, a personal AI supercomputer that provides AI researchers, data scientists and students worldwide with access to the power of the NVIDIA Grace Blackwell platform.
Project DIGITS features the new NVIDIA GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.
With Project DIGITS, users can develop and run inference on models using their own desktop system, then seamlessly deploy the models on accelerated cloud or data center infrastructure.
“AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers,” said Jensen Huang, founder and CEO of NVIDIA. “Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI.”
GB10 Superchip Provides a Petaflop of Power-Efficient AI Performance
The GB10 Superchip is a system-on-a-chip (SoC) based on the NVIDIA Grace Blackwell architecture and delivers up to 1 petaflop of AI performance at FP4 precision.
GB10 features an NVIDIA Blackwell GPU with latest-generation CUDA cores and fifth-generation Tensor Cores, connected via NVLink-C2C chip-to-chip interconnect to a high-performance NVIDIA Grace CPU that includes 20 power-efficient cores built on the Arm architecture. MediaTek, known for its Arm-based SoC designs, collaborated on the design of GB10, contributing to its power efficiency, performance, and connectivity.
The GB10 Superchip enables Project DIGITS to deliver powerful performance using only a standard electrical outlet. Each Project DIGITS system features 128GB of unified, coherent memory and up to 4TB of NVMe storage. With the supercomputer, developers can run large language models of up to 200 billion parameters to supercharge AI innovation. In addition, using NVIDIA ConnectX networking, two Project DIGITS AI supercomputers can be linked to run models of up to 405 billion parameters.
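The capacity figures above follow from simple arithmetic: at FP4 precision each parameter occupies 4 bits (0.5 bytes), so a 200-billion-parameter model's weights take roughly 100GB, fitting within one system's 128GB of unified memory, while a 405-billion-parameter model needs the combined 256GB of two linked systems. A minimal back-of-envelope sketch (weights only; KV cache and activations add further overhead):

```python
# Back-of-envelope check of model weight footprint at FP4 precision.
# FP4 stores each parameter in 4 bits, i.e. 0.5 bytes per parameter.
BYTES_PER_PARAM_FP4 = 0.5

def weights_gb(num_params: float) -> float:
    """Approximate weight footprint in GB at FP4 precision."""
    return num_params * BYTES_PER_PARAM_FP4 / 1e9

SINGLE_UNIT_GB = 128       # unified memory of one Project DIGITS system
LINKED_UNITS_GB = 2 * 128  # two systems linked via NVIDIA ConnectX

print(weights_gb(200e9))   # 100.0 GB -> fits in one 128GB system
print(weights_gb(405e9))   # 202.5 GB -> needs two linked systems (256GB)
```

This counts only the quantized weights; the unused headroom in each case is what serves the runtime's working memory.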
Grace Blackwell AI Supercomputing Within Reach
With the Grace Blackwell architecture, enterprises and researchers can prototype, fine-tune and test models on local Project DIGITS systems running Linux-based NVIDIA DGX OS, and then deploy them seamlessly on NVIDIA DGX Cloud, accelerated cloud instances or data center infrastructure.
This allows developers to prototype AI on Project DIGITS and then scale on cloud or data center infrastructure, using the same Grace Blackwell architecture and the NVIDIA AI Enterprise software platform.
Project DIGITS users can access an extensive library of NVIDIA AI software for experimentation and prototyping, including software development kits, orchestration tools, frameworks and models available in the NVIDIA NGC catalog and on the NVIDIA Developer portal. Developers can fine-tune models with the NVIDIA NeMo framework, accelerate data science with NVIDIA RAPIDS libraries and run common frameworks such as PyTorch, Python and Jupyter notebooks.
To build agentic AI applications, users can also harness NVIDIA Blueprints and NVIDIA NIM microservices, which are available for research, development and testing via the NVIDIA Developer Program. When AI applications are ready to move from experimentation to production, the NVIDIA AI Enterprise license provides enterprise-grade security, support and product releases of NVIDIA AI software.
Project DIGITS will be available in May 2025 from NVIDIA and its top partners, starting at $3,000.
LAS VEGAS, NV, and SEOUL, South Korea, Jan 10, 2025 – Hyundai Motor Group has announced a strategic partnership with NVIDIA to accelerate the development of advanced AI technologies that will drive the future of mobility.
In the AI era, Hyundai Motor Group is driving innovation through strategic AI integration, positioning itself at the forefront of smart mobility solutions. The Group operates a variety of AI initiatives and through this partnership aims to further enhance the application of intelligence to its core mobility products, such as software-defined vehicles and robotics, and across its business operations.
(from left) Heung-Soo Kim, executive vice president and head of global strategy office at Hyundai Motor Group and Rishi Dhall, vice president of automotive at NVIDIA
“Hyundai Motor Group is exploring innovative approaches with AI technologies in various fields such as robotics, autonomous driving, and smart factory,” said Heung-Soo Kim, executive vice president and head of global strategy office at Hyundai Motor Group. “This partnership is set to accelerate our progress, positioning the Group as a frontrunner in driving AI-empowered mobility innovation.”
As part of the agreement, Hyundai Motor Group will harness NVIDIA accelerated computing and AI Enterprise software to help manage the massive amounts of data required to safely develop and train its AI models for various applications.
The Group will also utilize the NVIDIA Omniverse platform to develop physical AI and digital twin applications to simulate its factories, helping improve manufacturing efficiencies and quality, and streamline costs. In addition, the Group will use the NVIDIA Isaac robot development platform to develop and safely deploy AI robots.
The two companies will also work closely to create virtual simulation environments for safe, reliable autonomous driving technology and robotics systems.
“Accelerated computing, generative AI, and Omniverse are unlocking a new era of mobility,” said Rishi Dhall, vice president of automotive at NVIDIA. “This partnership will drive the creation of safer, more intelligent vehicles, supercharge manufacturing with greater efficiency and quality, and deploy cutting-edge robotics to help build a smarter, more connected digital workplace.” These initiatives lay the groundwork for the partnership’s future plans, with more announcements expected soon.
LAS VEGAS, NV (CES), Jan 7, 2025 – NVIDIA announced generative AI models and blueprints that expand NVIDIA Omniverse integration further into physical AI applications such as robotics, autonomous vehicles and vision AI. Global leaders in software development and professional services are using Omniverse to develop new products and services that will accelerate the next era of industrial AI.
Accenture, Altair, Ansys, Cadence, Foretellix, Microsoft and Neural Concept are among the first to integrate Omniverse into their next-generation software products and professional services. Siemens, a leader in industrial automation, announced today at the CES trade show the availability of Teamcenter Digital Reality Viewer — the first Siemens Xcelerator application powered by NVIDIA Omniverse libraries.
“Physical AI will revolutionize the $50 trillion manufacturing and logistics industries. Everything that moves — from cars and trucks to factories and warehouses — will be robotic and embodied by AI,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s Omniverse digital twin operating system and Cosmos physical AI serve as the foundational libraries for digitalizing the world’s physical industries.”
New Models and Frameworks Accelerate World Building for Physical AI
Creating 3D worlds for physical AI simulation requires three steps: world building, labeling the world with physical attributes and making it photoreal.
NVIDIA offers generative AI models that accelerate each step. The USD Code and USD Search NVIDIA NIM microservices are now available, letting developers use text prompts to generate or search for OpenUSD assets. A new NVIDIA Edify SimReady generative AI model can automatically label existing 3D assets with attributes like physics or materials, enabling developers to process 1,000 3D objects in minutes instead of the more than 40 hours it would take manually.
NVIDIA Omniverse, paired with new NVIDIA Cosmos world foundation models, creates a synthetic data multiplication engine — letting developers easily generate massive amounts of controllable, photoreal synthetic data. Developers can compose 3D scenarios in Omniverse and render images or videos as outputs. These can then be used with text prompts to condition Cosmos models to generate countless synthetic virtual environments for physical AI training.
NVIDIA Omniverse Blueprints Speed Up Industrial, Robotic Workflows
During the CES keynote, NVIDIA also announced four new blueprints that make it easier for developers to build Universal Scene Description (OpenUSD)-based Omniverse digital twins for physical AI. The blueprints include:
Mega, powered by Omniverse Sensor RTX APIs, for developing and testing robot fleets at scale in an industrial factory or warehouse digital twin before deployment in real-world facilities.
Autonomous Vehicle (AV) Simulation, also powered by Omniverse Sensor RTX APIs, which lets AV developers replay driving data, generate new ground-truth data and perform closed-loop testing to accelerate their development pipelines.
Omniverse Spatial Streaming to Apple Vision Pro that helps developers create applications for immersive streaming of large-scale industrial digital twins to Apple Vision Pro.
Real-Time Digital Twins for CAE, a reference workflow built on NVIDIA CUDA-X acceleration, physics AI and Omniverse libraries that enables real-time physics visualization.
Market Leaders Supercharge Industrial AI Using NVIDIA Omniverse
Building on its adoption of Omniverse libraries in its Reality Digital Twin data center digital twin platform, Cadence, a leader in electronic systems design, announced further integration of Omniverse into Allegro, its leading electronic computer-aided design application used by the world’s largest semiconductor companies.
Altair, a leader in computational intelligence, is adopting the Omniverse blueprint for real-time CAE digital twins for interactive computational fluid dynamics (CFD). Ansys is adopting Omniverse into Ansys Fluent, a leading CAE application. Neural Concept is integrating Omniverse libraries into its next-generation software products, enabling real-time CFD and enhancing engineering workflows.
Accenture, a leading global professional services company, is using Mega to help German supply chain solutions leader KION build next-generation autonomous warehouses and robotic fleets for its network of global warehousing and distribution customers.
AV toolchain provider Foretellix, a leader in data-driven autonomy development, is using the AV simulation blueprint to enable full 3D sensor simulation for optimized AV testing and validation. Research organization MITRE is also deploying the blueprint, in collaboration with the University of Michigan’s Mcity testing facility, to create an industry-wide AV validation platform.
Katana Studio is using the Omniverse spatial streaming workflow to create custom car configurators for Nissan and Volkswagen, allowing them to design and review car models in an immersive experience while improving the customer decision-making process.
Innoactive, an XR streaming platform for enterprises, used the workflow to add platform support for spatial streaming to Apple Vision Pro. The solution enables Volkswagen Group to conduct design and engineering project reviews at human-eye resolution. Innoactive also collaborated with Syntegon, a provider of processing and packaging technology solutions for pharmaceutical production, to enable Syntegon’s customers to walk through and review digital twins of custom installations before they are built.