From the company that owns South Korea’s dashcam market: the Vueroid S1 dashcam. Image: Vueroid.
You may think you don’t need a dashcam. Until the moment you need one desperately. Until a car cuts you off, slams on its brakes and you slam into it. Good luck convincing your insurance company, or anyone else for that matter, that it was not your fault.
Luck was definitely on the side of Ashpia Natasha, a Queens resident, who was driving alone on New York’s Belt Parkway on October 16, 2024, at approximately 11:11 am. Luckily, her car was equipped with an after-market dashcam. When a grey Honda swerved into her lane ahead of her and suddenly slammed on its brakes, the dashcam was recording. But just as Natasha was thanking her lucky stars that she had braked in time to avoid running into the Honda, it got worse. The driver of the Honda put the car in reverse and rammed into the front of Natasha’s vehicle. The four occupants streamed out of the Honda, doing their best to appear hurt while also recording the damage to the vehicles.
The plot thickens. Off camera, Natasha talks to a man who has come out of the front passenger door because “the driver did not speak English.” However, WABC news analyzed the video and showed the rear-seat passengers trying to hide the driver as he exchanged seats with the passenger and then emerged from the passenger side to claim injury.
The National Insurance Crime Bureau, cited in the WABC report, says criminals prey on women driving alone.
Natasha uploaded her dashcam recording to TikTok, where it went viral. At the time of this writing, it has been viewed 79 million times. The story has been covered by People magazine and picked up by local news stations. TikTokkers took it upon themselves to locate the Honda and report it to the police.
The story is almost over. The driver of the Honda was picked up by police and charged with staging a motor vehicle accident, criminal mischief, reckless endangerment, conspiracy and insurance fraud. He pleaded not guilty and returned to court on January 7 of this year. If convicted, he faces up to seven years in prison.
Natasha may well be the poster child of the dashcam industry, no doubt giving nascent US dashcam sales a boost. But insurance fraud is real, because getting rear-ended, even if you stage it yourself, is almost a surefire payoff.
Vueroid kit. For best results, Vueroid suggests that the vehicle’s electrical system power its dashcam. Image: Vueroid.
Why Don’t I Have Dashcams Already?
Teslas have outward-facing cameras that could function as dashcams, but among passenger vehicles, they are an exception. Also, Tesla is more intent on keeping the video for its own purposes and disinclined to share it with drivers, making its cameras essentially useless as protection against insurance fraud. It seems consumers will have to resort to after-market dashcams that store video continuously on an SD card, which can be played back when needed.
While most cars in the U.S. don’t have aftermarket dashcams, most vehicles in Russia and Korea do. Why the difference? we ask at the Vueroid booth.
It’s all about insurance, we are told. In the US, most drivers are protected against uninsured drivers. If the driver at fault has no insurance, the victim is reimbursed by their own insurance. In the U.S., insurance companies are less likely to squabble over who’s at fault. In other countries, it’s a different situation. And a dashcam may be the only way to clearly establish wrongdoing.
Vueroid is one of the major dashcam manufacturers in South Korea, where dashcams are installed in 80% of all vehicles. Yet most Koreans would not know Vueroid by name, since the company has let other companies put their labels on its dashcams. At CES 2025, we see the S1 Infinity, the first Vueroid-branded dashcam to be sold.
Want one? You’ll have to wait. It is expected to go on sale in the U.S. later this year.
Vueroid enhances the image for license plate detection. Image: Vueroid.
Vueroid has used this time to come up with possibly the best dashcam ever. The S1 Infinity has 4K resolution and a 150-degree field of view. Using AI and ISP (image signal processing) tuning to enhance the clarity and visibility of letters and numbers, the Vueroid Hub app is able to make out license plates, which on ordinary dashcams can be too grainy to read.
The S1 is available in 1, 2 or 3 camera configurations, depending on whether you want to add rear and interior cameras.
The S1 has a parking mode that uses motion detection to record moving objects around the vehicle. It is equipped with temperature sensors that turn on cooling if it gets too hot, and it can also sense when the car battery is low and turn itself off so it doesn’t drain the battery.
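The parking-mode behavior described above amounts to a simple sensor-driven control loop. Here is a minimal sketch in Python; the function name, thresholds and action labels are our own illustrative assumptions, not Vueroid’s actual firmware logic:

```python
# Hypothetical sketch of a dashcam parking-mode controller, based on the
# behavior described in the article. All names and thresholds are assumed.

HIGH_TEMP_C = 70      # assumed temperature above which cooling kicks in
LOW_BATTERY_V = 11.8  # assumed cutoff voltage to protect the car battery

def parking_mode_step(motion_detected, temp_c, battery_v):
    """Return the actions the dashcam would take for one sensor reading."""
    if battery_v < LOW_BATTERY_V:
        return ["power_off"]          # shut down before draining the battery
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("cooling_on")  # protect the electronics from heat
    if motion_detected:
        actions.append("record")      # capture moving objects near the car
    return actions
```

The low-battery check comes first because battery protection overrides everything else: a dashcam that drains the car battery defeats its own purpose.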
LAS VEGAS, NV, Jan 24, 2025 – Oshkosh Corp. announced that it was selected as a winner in this year’s CES Picks Awards. Oshkosh was recognized for its Hail-able Autonomous Refuse Robot – Electric or HARR-E.
With HARR-E, Oshkosh is pioneering a new way to tackle a weekly chore — taking out the trash. HARR-E is an autonomous, electric refuse collection robot that offers on-demand refuse and recycling pickup via a smartphone app or virtual at-home assistant. Using AI-enabled navigation and advanced sensors, HARR-E autonomously collects trash and then returns to a central collection area.
Using advanced AI and robotics, Oshkosh’s Pratt Miller unit created a prototype system to improve efficiency and streamline logistics. The system enhances traditional refuse collection services, offering modern solutions for planned communities and businesses.
“Self-driving technology will play an increasing role in our daily lives. HARR-E is a great example of how autonomous technology can make chores like taking out the trash a thing of the past,” said Jay Iyengar, executive vice president, chief technology and strategic sourcing officer, Oshkosh Corp. “The Oshkosh team is proud to be recognized with this prestigious honor, as we continue to develop innovative solutions in electrification, AI, autonomy and connectivity to help keep communities clean and support a sustainable future.”
The awards’ editorial team shared, “The Picks Awards recognize outstanding products across consumer technology, the custom installation industry and innovative new technology that can truly help businesses of all sizes across various industries. Our team was highly impressed by the excellence and scope of this year’s entrants. All the winners should be proud of their achievements – a well-deserved congratulations from the entire awards team.”
Making its debut at CES 2025, Oshkosh announced electrification, AI, autonomy and connectivity solutions to help everyday heroes — such as firefighters, soldiers, postal carriers, construction workers and people doing tough work on the tarmac of an airport — and the communities they serve. These solutions include:
A first-of-its-kind, purpose-built, all-electric refuse and recycling front-loader vehicle along with AI and electrified technologies that improve refuse and recycling collection in neighborhoods.
Autonomous robot designed for on-demand refuse collection to help manage weekly chores.
AI-enabled Collision Avoidance Mitigation System (CAMS) for fire and emergency vehicles to provide critical advance notice of an impending collision to first responders.
Self-driving vehicles and connected solutions like iOPS and ClearSky Smart Fleet technologies to improve operations at airports and on job sites.
About Oshkosh
Founded in 1917 and headquartered in Oshkosh, Wisconsin, Oshkosh Corporation is a global industrial technology company specializing in the design and manufacture of purpose-built vehicles and equipment. The company operates through four primary business segments: Access Equipment, Defense, Fire & Emergency, and Commercial. Its diverse product portfolio includes aerial work platforms, tactical military vehicles, firefighting apparatus, and refuse collection trucks. Serving industries such as construction, defense, emergency services, and waste management, Oshkosh Corporation employs approximately 17,300 people worldwide. In 2023, the company reported revenues of $9.7 billion, a 17% increase from the previous year. Recent innovations include the development of electric vehicles and AI-powered systems, such as plug-in fire engines and autonomous garbage collection robots, underscoring the company’s commitment to advancing technology in specialty vehicles.
LAS VEGAS, NV, Jan 22, 2025 – Denvix, a consumer electronics brand, made its debut at CES, showcasing its solutions for diverse use cases, including automotive, outdoor, and everyday mobility. Denvix has gained positive feedback from consumers and industry professionals through innovative products backed by performance.
Image Source: Denvix
The Denvix PowerX, a power bank with 250W fast-charging capabilities and wireless charging functionality, impressed attendees with its advanced design and performance. The PowerX led its category as the most innovative power bank and exceeded its sales target at CES.
Image Source: Denvix
Another product was Denvix’s upgraded portable tire inflator, the Denvix MotorX, which had its official launch on the event’s opening day. The device combines the inflation speed of a large tire inflator, the fast charging of a power bank, and multifunctional lighting in one compact tool. During live demos, the inflator proved more efficient and stable than traditional bulky models and received positive feedback.
Denvix secured preliminary agreements with global retailers, increasing the reach of its products to consumers worldwide. Distributors from regions like the U.S., Canada, Europe, Japan, and the Middle East showed interest in forming long-term partnerships. With the principle “Technology Leads Life, Innovation Defines Mobility,” Denvix remains focused on creating innovative products and delivering exceptional experiences to customers everywhere.
VANCOUVER, Canada, Jan 22, 2025 – Human in Motion Robotics announced that XoMotion, the world’s most advanced medical exoskeleton, was named the Top Robot at CES 2025 by USA Today.
At CES 2025, XoMotion received recognition from attendees, media, and industry leaders for its transformative potential to assist individuals with mobility impairments caused by spinal cord injuries, strokes, and other neurological conditions.
Other Highlights from CES 2025
As CES 2025 concluded, Human in Motion Robotics celebrated a productive and well-received exhibition, highlighted by:
Recognition from visitors, media, and industry leaders: XoMotion earned recognition, including a CES 2025 Innovations Award, highlighting its role as a leader in user-focused technology.
Product demonstrations: Attendees, including people with mobility issues and doctors, tested XoMotion’s self-balancing, hands-free, and natural movement features. They saw how the exoskeleton increases user independence and improves care.
Investor and partner engagement: The platform technology and multidisciplinary expertise of Human in Motion Robotics impressed investors and partners, positioning XoMotion as a useful solution.
Inspiration for the future: Meeting exhibitors and discovering new technologies at CES created valuable opportunities to collaborate and improve XoMotion.
The event was another key step for Human in Motion Robotics, as the exposure and connections at CES will support future growth and innovation.
LAS VEGAS, NV, Jan 14, 2025 – Creality showcased its latest innovations at CES 2025, with a focus on creativity and advancing 3D printing technology. Building on the theme of “Time-Tested, Future-Ready”, Creality made a splash with a trade-in campaign featuring the K2 Plus Combo showcased alongside a debut for the new Creality Hi, the maker’s first-ever multi-filament CoreXY model. Creality also introduced the Creality Filament System (CFS) for the K1 series of printers.
K2 Plus Combo Trade-in Campaign
One of the highlights of Creality’s CES presence was the K2 Plus Combo, featured in an exciting trade-in offer. The K2 Plus Combo, first released last year as a large-format multi-color 3D printer, has drawn interest from users seeking to upgrade to multi-color capabilities. During the event, Creality invited customers to trade in their old 3D printers for discounts of up to $500 toward the K2 Plus Combo or K2 Plus model. The offer was open to all 3D printer brands, including resin and FFF machines, giving users an opportunity to experience Creality’s advanced 3D printing technology.
Built with an all-metal frame and five servo stepper motors, the K2 Plus ensures stability, low noise (48dB), and a print speed of up to 600mm/s, making it perfect for high-intensity printing environments. With a heated chamber temperature of up to 60°C, the K2 Plus supports a wide range of materials, from PLA and TPU to high-strength and heat-resistant filaments like PA and PPS-CF. The K2 Plus can easily handle both regular and specialty filaments, making it useful for both professional and industrial-grade applications.
Creality Hi: A Family-Friendly 3D Printing Experience
Creality also unveiled the Creality Hi, a budget-friendly, multi-color 3D printer designed with families and hobbyists in mind. Offering an intuitive user experience and high-quality prints, the Creality Hi is a perfect entry point for users interested in 3D printing at home.
Color Your Fun Deftly: Equipped with the innovative CFS, users can achieve multi-color prints without additional post-processing. The CFS system automatically detects and switches filaments, ensuring minimal waste.
User-Friendly Setup: The Creality Hi is easy to assemble, requiring only minimal effort (around 8 minutes) to get started. Its pre-assembled design and auto-leveling system reduce the complexities often associated with 3D printing.
Enhanced Print Quality: Built with a sturdy all-metal body and linear rail systems, the Creality Hi ensures consistent print quality and durability, elevating the 3D printing experience for all users.
K1 Series CFS Upgrade Kit: Upgrading for Versatility
Creality also introduced the K1 Series CFS upgrade accessory kit, designed to elevate the multi-color capabilities of the K1 Series printers. This new accessory kit allows users to expand their creative possibilities by adding a high-performance filament system compatible with all K1 Series printers.
In addition, Co-Print, Creality’s partner for providing wider multi-filament capability, was also seen at Creality’s booth. Both the Ender-3 V3 Plus and Ender-3 V3 will soon be integrating Co-Print components for multi-filament printing.
New Ecosystem Offerings
At CES, Creality also unveiled new products and filaments that enhance its 3D printing ecosystem. Notable among them was the launch of Soleyin Ultra PLA, a vibrant, environmentally friendly filament ideal for fashion and design applications. Additionally, Creality introduced high-speed Rainbow PLA, Hyper PETG, PPA-CF, and PLA-CF, expanding the creative possibilities for users and allowing them to experiment with new textures, colors, and performance characteristics.
Creality also showcased two recent additions to its ecosystem hardware lineup. The Falcon A1 Laser Engraving Machine is the first smart CoreXY engraver with automatic material parameter filling and proprietary AI camera technology; designed for home and professional use, it features no-assembly operation and speeds of up to 600mm/s. The Creality RaptorX 3D Scanner is a high-precision scanning solution offering accuracy of up to 0.02 mm, suited to professionals needing detailed scans of objects ranging from small parts to large-scale projects.
LAS VEGAS, NV, Jan 14, 2025 – TCL showcased smart home security products like the Smart Lock D1 Pro, the Smart Lock D1 Ultra, and the Solar Security Camera Cam B1 at CES 2025. The solutions have earned recognition for AI features, eco-conscious designs, and exceptional user convenience. TCL’s smart home security solutions claimed Best-of-CES awards from The Ambient, Trusted Reviews, and Android Headlines, highlighting the company’s contribution to research and development.
TCL Smart Lock D1 Ultra: World’s 4-in-1 Smart Video Deadbolt
TCL Smart Lock D1 Ultra integrates a smart lock, security camera, video doorbell, and built-in display screen into one device. Its AI-powered security capabilities, such as dual-motion sensors and human detection algorithms, provide real-time alerts with impressive accuracy. Flexible unlocking methods, including fingerprint recognition, mobile app control, and voice commands via Google Assistant and Amazon Alexa, make it a versatile choice for any home. Additionally, the D1 Ultra captures crystal-clear 2K video with a 172° wide field of view, ensuring superior surveillance and detailed tracking of visitors. Built with an IP65 weatherproof design and a robust 10,000mAh battery, it offers reliable year-round performance, even in extreme conditions.
TCL Smart Lock D1 Pro: Award-Winning AI Palm Vein Recognition
TCL Smart Lock D1 Pro offers a blend of advanced security and ease of use. Its standout feature is AI-powered palm vein recognition, a contactless unlocking method that provides unmatched accuracy and security. This device supports seven unlocking methods, including keypad entry, app control, and mechanical keys, catering to a wide range of user preferences.
The D1 Pro allows users to manage and monitor access remotely through real-time alerts and user permissions via its dedicated app. Designed with durability in mind, it features an aluminum alloy body with an IP54 rating for weather resistance, ensuring reliable performance in various environments. With a 7,800mAh rechargeable battery and voice assistant compatibility with Google Home and Amazon Alexa, it redefines convenience and reliability in modern home security.
TCL Solar Security Camera Cam B1: Eco-Friendly Security Redefined
TCL Security Cam B1 combines powerful security features with an eco-conscious design. The camera is equipped with a 10,000mAh rechargeable battery and comes with an integrated solar panel. It delivers stunning 2K video clarity and full-color night vision, even in low-light conditions, due to its integrated infrared and spotlight technologies. With a wide 153° viewing angle, the Cam B1 provides extensive coverage and minimizes blind spots. Advanced PIR motion sensors ensure human detection while reducing false alarms, enhancing the reliability of the system. Users can customize detection zones to monitor key areas and manage alerts efficiently.
The camera integrates with Google Assistant and Amazon Alexa, offering hands-free control and real-time notifications via the mobile app.
“CES 2025 is the perfect stage for us to showcase our dedication to advancing smart home technology,” said Haifeng Bu, general manager of Smart Home Security Business Unit, TCL. “Our latest products, including the D1 Ultra, D1 Pro, and B1, represent our commitment to creating solutions that are not only innovative but also accessible and eco-friendly, ensuring a visible sense of security for everyone.”
LAS VEGAS, NV, Jan 14, 2025 – At CES 2025, Innatera showcased its Spiking Neural Processor (SNP), which transforms the way battery-powered devices make sense of the physical world.
The Delft University spin-off’s Spiking Neural Processor uses a unique architecture for brain-like cognition within an ultra-low power envelope.
“At this pivotal moment in computing, Innatera’s breakthrough Spiking Neural Processor delivers unmatched energy-efficient, brain-inspired cognition for sensors, unlocking the promise of ambient intelligence,” said Sumeet Kumar, CEO of Innatera. “This revolutionary processor provides an all-in-one solution that simplifies and optimizes sensor data processing at the edge.”
Innatera’s SNP combines a Spiking Neural Network (SNN) engine with a RISC-V processor core and other accelerators to deliver a complete solution in energy-constrained environments. The single-chip solution brings intelligence closer to sensors, enabling next-generation AI and signal processing for applications in consumer electronics, smart homes, and industrial IoT, such as audio interfaces, touch-free interfaces, presence detection, activity recognition, and ECG recognition.
The SNP achieves high-performance pattern recognition at the sensor edge and enables real-time analysis of sensor data to detect and identify embedded patterns, with sub-milliwatt power dissipation and sub-millisecond latency.
Ambient Intelligence marks a major departure from computing technology as we know it, paving the way for a future where digital interactions are as natural as breathing.
At CES 2025, Innatera demonstrated how the SNP can transform computing in several real-world applications:
Audio Scene Classification: Audio scene classification allows devices to be aware of the environment they operate in and use this information to adapt their operation. For example, noise-canceling headphones can adapt to ambient noise environments like airplanes or city buses.
Robust Human Presence Sensing: The detection of human presence is important in a wide range of indoor and outdoor applications, such as security cameras, smart lighting, video doorbells and smart TVs. Using a radar sensor, this demo showcases always-on, privacy-preserving human presence detection with accuracy and power efficiency.
Robust People Counting Using Far Infrared Sensors: Innatera showcased how its SNP enables advanced people counting and human presence detection with passive infrared sensors. Infrared technology is a non-intrusive, low-light, and privacy-preserving method for people counting and human presence detection.
Innatera’s presence at CES 2025 follows a remarkable year of growth and development for the innovation-driven Delft University of Technology spin-off. Earlier this year, the company announced the oversubscription of a $21-million Series A funding round that is accelerating the development of its neuromorphic processors.
It is brain surgery. In the world imagined by NVIDIA, AI is everywhere and helping everyone. Image: NVIDIA
NVIDIA CEO Jensen Huang delivered the first keynote at CES 2025 and set the stage for the future of AI. With continuing advances in GPU hardware, the chips that most AI runs on these days, NVIDIA reaffirmed its position as the hardware leader in the tech industry.
Huang’s first announcement was the GeForce RTX 5090, the “most powerful graphics card NVIDIA has ever developed,” perhaps paying homage to NVIDIA’s beginnings as a maker of graphics cards for gamers. Built on the Blackwell architecture, the RTX 5090 introduces significant advancements in performance and efficiency and may indeed raise the bar for gaming and content creation.
Is that alligator skin, Jensen? Huang highlights the mechanical design of the RTX. “This is just a big fan,” he says, showing the circuit board with the GPU chips inside.
“The GeForce RTX 5090 is not just an upgrade; it’s a revolution for gamers and creators alike,” Huang declared. “It blurs the line between the virtual and the real, making experiences more immersive than ever.”
Graphics Imagined Mostly, Some Computed
Rendering by AI. Of 33 million pixels generated in an image, only two million are computed, the rest “inferred” by AI, making real-time rendering of complex 3D scenes possible.
Huang emphasized the rapid advancement of artificial intelligence, stating that AI is progressing at an “incredible pace.” He outlined the evolution of AI from perception AI—understanding images, words and sounds—to generative AI, which creates text, images, video and sound.
Huang introduced the concept of “physical AI,” describing it as AI that can perceive, reason, plan and act but is deeply rooted in the physical world. NVIDIA’s GPUs and platforms are central to enabling breakthroughs across industries, including gaming, robots and autonomous vehicles, said Huang (more on that later).
Huang underscored the exponential rate of data creation.
“In the next couple of years, humanity will produce more data than it has produced since the beginning,” he said.
Huang announced that the company’s latest-generation AI processor series, Blackwell, is now in full production, claimed that every major cloud service provider has systems up and running with Blackwell, and showcased systems from 15 computer manufacturers at the event. Huang emphasized that Blackwell is the engine of AI, bringing significant advancements to PC gamers, developers, and creatives.
Gladiator Huang? You had to be there. The shield is a representation of the Grace Blackwell NVLink72, a 1.5 ton supercomputer that is assembled onsite.
A Token Investment
Huang told of how the new GPUs, which deliver better performance while using less energy, will allow data centers to make more money. Not only will they cut cooling costs, a major expense for data centers, but they will enable data centers to generate more AI tokens—a critical metric for monetizing AI services.
NVIDIA’s latest Blackwell-based GPUs, such as the RTX 5090 and the Grace Blackwell NVLink 72 systems, deliver significantly better energy efficiency so data centers can achieve the same computational output with far less electricity.
Or a data center could run at full tilt and generate more AI tokens—the key unit of output in the AI-driven economy. Tokens, the blocks of AI-generated text or other outputs, can be produced faster and at lower cost.
For example, suppose a data center previously generated 1 billion tokens daily at a specific energy cost. An improved efficiency might now allow them to generate 1.5 billion tokens using the same energy, directly increasing revenue potential.
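The example above is simple enough to check with a few lines of arithmetic. A short Python sketch follows; the token counts mirror the article’s example, while the price per million tokens is an illustrative assumption, not a quoted market rate:

```python
# Back-of-the-envelope token economics: same energy budget, higher
# efficiency, more tokens, more revenue. The price is an assumed figure.

tokens_before = 1.0e9            # tokens per day at the old efficiency
efficiency_gain = 1.5            # 1.5x tokens per unit of energy
price_per_million_tokens = 2.00  # assumed price in dollars

tokens_after = tokens_before * efficiency_gain
extra_revenue = (tokens_after - tokens_before) / 1e6 * price_per_million_tokens

print(int(tokens_after))  # 1500000000 tokens/day
print(extra_revenue)      # 1000.0 extra dollars/day
```

The point of the calculation is that the revenue gain scales linearly with the efficiency gain, at zero additional energy cost.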
Agentic AI
An agent to assist with every job. “The IT department of the future will be like the HR of AI agents,” said Huang.
“The age of AI agentics is here,” said Huang, signaling a transformative shift in artificial intelligence. He described agentic AI as a “multitrillion-dollar opportunity” that will revolutionize work across industries. Huang emphasized that AI agents are becoming the new digital workforce, capable of reasoning, planning, and acting autonomously.
NVIDIA Cosmos: A platform designed to advance physical AI by providing new models and video data processing pipelines for robots, autonomous vehicles, and vision AI.
AI Foundation Models for RTX PCs: These models feature NVIDIA NIM microservices and AI Blueprints for crafting digital humans, podcasts, images, and videos, enabling the development of specialized AI agents to automate tasks.
Companies will develop the agents into assistants for many of their roles, predicts Huang. Perhaps the best example is developers, for whom AI code generation is already in wide use.
There are 30 million developers who could use an AI agent, says Huang.
Foundation for the World
Digital twin of a warehouse.
Huang thinks the next big AI (after LLMs, which work only with text) is physical AI, which works with physics. In the keynote, Huang introduced NVIDIA’s Cosmos, the world’s first World Foundation Model, which uses physical AI.
Cosmos learned all its physics not from sitting in class or reading books but from “20 million hours of video” of objects in motion, interactions between objects, and physical environments. This foundational knowledge allows Cosmos to model the physical world and create simulations that reflect real-world behavior, predicting interactions and motion for industries where real-world data is limited or costly to capture: robotics, autonomous vehicles, factory automation, and warehouse operations and their optimization.
Huang announced that Cosmos will be freely available, open-licensed and available on GitHub.
The Cosmos World Foundation Model is a sophisticated AI system that ingests and understands multimodal data, including text, images, and video, to generate realistic simulations and predictions about the physical world. Unlike LLMs that process text-based tokens, the World Foundation Model generates “action tokens,” enabling it to predict and simulate real-world behavior based on actual Newtonian physics.
With Cosmos physical AI, we will be able to generate real worlds from sketches and models and depict their variations fully rendered using Omniverse (below).
Huang discussed autonomous vehicles (AVs) as one of the most significant applications of physical AI and NVIDIA’s advanced computing platforms. He outlined the current state of AV technology, NVIDIA’s contributions, and how physical AI plays a critical role in testing, training, and advancing the capabilities of autonomous systems (more on that later).
DIGITS – Your Own Personal Supercomputer
Huang introduced DIGITS, a personal supercomputer that will be available later this year (May). The diminutive computer (see image above) is the productization of the project called “Deep Learning GPU Intelligence Training System” (DIGITS), which NVIDIA shortened to DGX and introduced as a graphics platform of the same name in 2016.
DIGITS is meant to be used for the development of AI applications. Users can access the whole of the NVIDIA AI software library for experimentation and prototyping, including software development kits, orchestration tools, frameworks and models available in the NVIDIA NGC catalog and on the NVIDIA Developer portal. Developers can fine-tune models with the NVIDIA NeMo framework, accelerate data science with NVIDIA RAPIDS libraries and run common frameworks such as PyTorch, Python and Jupyter notebooks, according to NVIDIA.
That NVIDIA is making a desktop supercomputer that can run Windows applications (via WSL 2, the Windows Subsystem for Linux) should be a wake-up call, if not a fire alarm, for every PC manufacturer. Here is the second most valuable computer hardware company, no longer content to make chips and boards for PC manufacturers but stepping out to make its own PCs. It’s as if makers of jet engines decided to make airplanes. No specifics, such as benchmark comparisons, were given at CES, but since DIGITS will operate at the scale and speed of DGX systems, thanks to its advanced chip architecture and its integration with NVIDIA’s AI software stack, we expect it to blow the doors off any traditional Intel-based PC running AI-assisted software… and what software will not be within the year?
An AI-Based Windows?
Huang outlined a vision for a future version of Windows that would be deeply integrated with AI capabilities. He referenced the revolutionary impact of Windows 95, which introduced multimedia APIs that transformed the software development landscape. He likened this transformative potential to what he envisions for the future of AI on Windows PCs.
Huang introduced the concept of “generative APIs,” which would allow developers to integrate AI directly into applications on Windows PCs. These APIs would enable:
Generative AI for Language: Advanced natural language processing for creating text and responding to queries.
Generative AI for Graphics: Tools for producing 3D models, animations, and video content.
Generative AI for Sound: Capabilities for audio synthesis and manipulation.
These APIs would extend the functionality of traditional computing by bringing AI-assisted tools directly into everyday applications.
Currently, NVIDIA is using Windows Subsystem for Linux (WSL) 2, which provides a dual-operating-system environment optimized for AI development.
Autonomous Vehicles
NVIDIA’s automotive vertical business is currently at $4 billion and is expected to grow to approximately $5 billion in fiscal year 2026, according to NVIDIA.
Huang highlighted that autonomous vehicles, after years of development, are now becoming mainstream, citing the success of companies like Waymo, Tesla, and Aurora (AV trucks). He characterized the AV industry as likely to become the first multi-trillion-dollar robotics market, driven by the massive demand for self-driving cars and trucks.
Each year, 100 million cars are built, said Huang. A billion cars are on the road globally, collectively driving a trillion miles annually. Autonomous capabilities will revolutionize how these vehicles operate.
NVIDIA is working with major automakers like Mercedes, BYD, Toyota and startups like Zoox and Waymo to develop next-generation autonomous systems.
Instead of a 3 Body Problem, we have a 3 Computer Solution, said Jensen Huang at CES, invoking the sci-fi series. Image: Netflix.
Autonomous vehicles require a “three-computer solution” tailored for different stages of development and deployment:
Training AI Models (DGX Systems) for training AV models using vast datasets and simulations.
Simulation and Synthetic Data Generation (Omniverse & Cosmos) to create environments and situations to enable extensive tests before real-world deployment.
In-Car AI Systems (Drive AGX Thor): NVIDIA’s latest AI supercomputer for cars processes massive amounts of sensor data, from cameras to lidar and radar, to make real-time driving decisions.
Thor offers 20 times the processing power of its predecessor, making it suitable not only for AVs but also for robotics and other high-performance applications.
Huang emphasized the importance of testing AV systems with synthetic data and simulations powered by physical AI. These technologies allow NVIDIA to simulate real-world driving conditions at unprecedented scale and fidelity. NVIDIA’s Cosmos platform generates lifelike driving scenarios based on real-world data, including weather, lighting and road conditions, allowing AV models to train on edge cases that are rare or dangerous to capture in real life. Using NVIDIA Omniverse, AV systems can simulate billions of miles of driving by replaying and altering existing driving logs. For example, developers can take recorded footage of a real-world drive and add rain and snow, change the time of day, increase traffic, and more. This would allow the AV industry to turn thousands of real-world drives into billions of simulated miles, amplifying the training data exponentially.
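To make that amplification concrete, here is a toy sketch of the combinatorics. All counts and variation axes below are illustrative assumptions, not NVIDIA's actual pipeline figures: each recorded drive is replayed under every combination of weather, lighting and traffic variations, so total simulated mileage grows multiplicatively.

```python
# Toy illustration of turning recorded drives into simulated miles.
# All numbers are assumed for illustration, not NVIDIA's figures.

real_drives = 10_000          # recorded real-world drives (assumed)
miles_per_drive = 20          # average length of each log (assumed)

# Variation axes applied in simulation (assumed counts):
weather_variants = 10         # clear, rain, snow, fog, ...
lighting_variants = 8         # times of day and night
traffic_variants = 100        # densities and agent behaviors

# Each drive is replayed under every combination of variations.
variants_per_drive = weather_variants * lighting_variants * traffic_variants
simulated_miles = real_drives * miles_per_drive * variants_per_drive

print(f"{variants_per_drive} variants per drive")     # 8000 variants per drive
print(f"{simulated_miles:,} simulated miles")         # 1,600,000,000 simulated miles
```

Even with these modest assumed counts, ten thousand short drives fan out into more than a billion simulated miles, which is the "exponential amplification" Huang described.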
Huang stressed that the ability to simulate edge cases—such as unpredictable pedestrian behavior or hazardous weather—is critical for reducing risk with autonomous vehicles.
Huang concluded his keynote with a forward-looking statement about the transformative role of AI and computing in society. “We are entering an era where AI will become as ubiquitous as electricity,” he said. “From healthcare to entertainment, from scientific research to autonomous systems, NVIDIA’s technologies will drive the next wave of innovation.”
NVIDIA announced that Toyota, Aurora and Continental have joined the list of global mobility leaders developing and building their consumer and commercial vehicle fleets on NVIDIA accelerated computing and AI.
Toyota, the world’s largest automaker, will build its next-generation vehicles on NVIDIA DRIVE AGX Orin, running the safety-certified NVIDIA DriveOS operating system. These vehicles will offer functionally safe, advanced driving assistance capabilities.
The majority of present-day automakers, truckmakers, robotaxi and autonomous delivery vehicle companies, tier-one suppliers and mobility startups are developing on the NVIDIA DRIVE AGX platform and technologies. With cutting-edge platforms spanning training in the cloud to simulation to compute in the car, NVIDIA’s automotive vertical business is expected to grow to approximately $5 billion in fiscal year 2026.
“The autonomous vehicle revolution has arrived, and automotive will be one of the largest AI and robotics industries,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA is bringing two decades of automotive computing, safety expertise and its CUDA AV platform to transform the multitrillion dollar auto industry.”
Aurora, Continental and NVIDIA also announced a long-term strategic partnership to deploy driverless trucks at scale, powered by NVIDIA DRIVE. NVIDIA’s accelerated compute running DriveOS will be integrated into the Aurora Driver, an SAE level 4 autonomous-driving system that Continental plans to mass-manufacture in 2027.
Other mobility companies adopting NVIDIA DRIVE AGX for their next-generation advanced driver-assistance systems and autonomous vehicle roadmaps include BYD, JLR, Li Auto, Lucid, Mercedes-Benz, NIO, Nuro, Rivian, Volvo Cars, Waabi, Wayve, Xiaomi, ZEEKR, Zoox and many more.
NVIDIA offers three core computing systems and the AI software essential for end-to-end autonomous vehicle development. NVIDIA DRIVE AGX is the in-vehicle computer. NVIDIA DGX processes the data from the fleet and trains AI models, and NVIDIA Omniverse and NVIDIA Cosmos running on NVIDIA OVX systems test and validate self-driving systems in simulation.
LAS VEGAS, NV, Jan 13, 2025 – NVIDIA announced NVIDIA Cosmos, a platform comprising state-of-the-art generative world foundation models, advanced tokenizers, guardrails and an accelerated video processing pipeline built to advance the development of physical AI systems such as autonomous vehicles (AVs) and robots.
Physical AI models are costly to develop, and require vast amounts of real-world data and testing. Cosmos world foundation models, or WFMs, offer developers an easy way to generate massive amounts of photoreal, physics-based synthetic data to train and evaluate their existing models. Developers can also build custom models by fine-tuning Cosmos WFMs.
Cosmos models will be available under an open model license to accelerate the work of the robotics and AV community. Developers can preview the first models on the NVIDIA API catalog, or download the family of models and fine-tuning framework from the NVIDIA NGC catalog or Hugging Face.
Leading robotics and automotive companies, including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos.
“The ChatGPT moment for robotics is coming. Like large language models, world foundation models are fundamental to advancing robot and AV development, yet not all developers have the expertise and resources to train their own,” said Jensen Huang, founder and CEO of NVIDIA. “We created Cosmos to democratize physical AI and put general robotics in reach of every developer.”
Open World Foundation Models to Accelerate the Next Wave of AI
NVIDIA Cosmos’ suite of open models means developers can customize the WFMs with datasets, such as video recordings of AV trips or robots navigating a warehouse, according to the needs of their target application.
Cosmos WFMs are purpose-built for physical AI research and development, and can generate physics-based videos from a combination of inputs, like text, image and video, as well as robot sensor or motion data. The models are built for physically based interactions, object permanence, and high-quality generation of simulated industrial environments — like warehouses or factories — and of driving environments, including various road conditions.
In his opening keynote at CES, NVIDIA founder and CEO Jensen Huang showcased ways physical AI developers can use Cosmos models, including for:
Video search and understanding, enabling developers to easily find specific training scenarios, like snowy road conditions or warehouse congestion, from video data.
Physics-based photoreal synthetic data generation, using Cosmos models to generate photoreal videos from controlled 3D scenarios developed in the NVIDIA Omniverse platform.
Physical AI model development and evaluation, whether building a custom model on the foundation models, improving the models using Cosmos for reinforcement learning or testing how they perform given a specific simulated scenario.
Foresight and “multiverse” simulation, using Cosmos and Omniverse to generate every possible future outcome an AI model could take to help it select the best and most accurate path.
Advanced World Model Development Tools
Building physical AI models requires petabytes of video data and tens of thousands of compute hours to process, curate and label that data. To help save enormous costs in data curation, training and model customization, Cosmos features:
An NVIDIA AI and CUDA-accelerated data processing pipeline, powered by NVIDIA NeMo Curator, that enables developers to process, curate and label 20 million hours of videos in 14 days using the NVIDIA Blackwell platform, instead of over three years using a CPU-only pipeline.
NVIDIA Cosmos Tokenizer, a state-of-the-art visual tokenizer for converting images and videos into tokens. It delivers 8x more total compression and 12x faster processing than today’s leading tokenizers.
The NVIDIA NeMo framework for highly efficient model training, customization and optimization.
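The curation speedup quoted above can be sanity-checked with quick arithmetic. Treating "over three years" as roughly three years (a lower bound), the claimed GPU pipeline is on the order of eighty times faster:

```python
# Rough arithmetic behind the quoted curation speedup:
# 20 million hours of video processed in 14 days on Blackwell
# vs. "over three years" on a CPU-only pipeline.

gpu_days = 14
cpu_days = 3 * 365            # ~3 years; lower bound of "over three years"

speedup = cpu_days / gpu_days
print(f"at least ~{speedup:.0f}x faster")   # at least ~78x faster
```

Since "over three years" is open-ended, ~78x is a floor; the true ratio implied by NVIDIA's claim is somewhat higher.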
World’s Largest Physical AI Industries Adopt Cosmos
Pioneers across the physical AI industry are already adopting Cosmos technologies.
1X, an AI and humanoid robot company, launched the 1X World Model Challenge dataset using Cosmos Tokenizer. XPENG will use Cosmos to accelerate the development of its humanoid robot. And Hillbot and Skild AI are using Cosmos to fast-track the development of their general-purpose robots.
“Data scarcity and variability are key challenges to successful learning in robot environments,” said Pras Velagapudi, chief technology officer at Agility. “Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”
Transportation leaders are also using Cosmos to build physical AI for AVs:
Waabi, a company pioneering generative AI for the physical world starting with autonomous vehicles, is evaluating Cosmos in the context of data curation for AV software development and simulation.
Wayve, which is developing AI foundation models for autonomous driving, is evaluating Cosmos as a tool to search for edge and corner case driving scenarios used for safety and validation.
AV toolchain provider Foretellix will use Cosmos, alongside NVIDIA Omniverse Sensor RTX APIs, to evaluate and generate high-fidelity testing scenarios and training data at scale.
Global ridesharing giant Uber is partnering with NVIDIA to accelerate autonomous mobility. Rich driving datasets from Uber, combined with the features of the Cosmos platform and NVIDIA DGX Cloud, can help AV partners build stronger AI models even more efficiently.
“Generative AI will power the future of mobility, requiring both rich data and very powerful compute,” said Dara Khosrowshahi, CEO of Uber. “By working with NVIDIA, we are confident that we can help supercharge the timeline for safe and scalable autonomous driving solutions for the industry.”
Developing Open, Safe and Responsible AI
NVIDIA Cosmos was developed in line with NVIDIA’s trustworthy AI principles, which prioritize privacy, safety, security, transparency and reducing unwanted bias.
Trustworthy AI is essential for fostering innovation within the developer community and maintaining user trust. NVIDIA is committed to safe and trustworthy AI, in line with the White House’s voluntary AI commitments and other global AI safety initiatives.
The open Cosmos platform includes guardrails designed to mitigate harmful text and images, and features a tool to enhance text prompts for accuracy. Videos generated with Cosmos autoregressive and diffusion models on the NVIDIA API catalog include invisible watermarks to identify AI-generated content, helping reduce the chances of misinformation and misattribution.
NVIDIA encourages developers to adopt trustworthy AI practices and further enhance guardrail and watermarking solutions for their applications.
Cosmos WFMs are now available under NVIDIA’s open model license on Hugging Face and the NVIDIA NGC catalog. Cosmos models will soon be available as fully optimized NVIDIA NIM microservices.
Developers can access NVIDIA NeMo Curator for accelerated video processing and customize their own world models with NVIDIA NeMo. NVIDIA DGX Cloud offers a fast and easy way to deploy these models, with enterprise support available through the NVIDIA AI Enterprise software platform.