Category: Robotics

  • Dassault Systèmes, KUKA Drive Next-Gen Robotics Automation

    Image: Dassault Systèmes

    VELIZY-VILLACOUBLAY, France, Feb 25, 2025 – Dassault Systèmes has announced its partnership with KUKA to provide manufacturing industries with inclusive solutions that meet growing demands in robotics and automation.

    Dassault Systèmes will join mosaixx, KUKA’s platform for industrial software solutions. The partnership makes it simpler for KUKA customers to buy and use Dassault Systèmes’ 3DEXPERIENCE platform and its applications. By expanding customer access to virtual twin technology and better collaboration tools, Dassault Systèmes and KUKA, with its new segment KUKA Digital, can help companies create more flexible solutions that improve their operations.

    The global market value of industrial robot installations is estimated at $16.5 billion, driven by AI, energy efficiency, and other trends. With more than four million industrial robots operating in factories worldwide in 2024, the annual number of installations in 2026 is expected to increase to 718,000.

    The KUKA Group launched mosaixx in 2024 as an open cloud platform for industrial software. The platform supports collaboration in a growing market. It gives system integrators and engineers access to various solutions that enhance the digitalization and automation of factory floors and production machines. The approach is ecosystem-focused and works with any machine type or manufacturer.

    Dassault Systèmes’ 3DEXPERIENCE platform and applications are used across the industrial equipment industry worldwide to design, simulate, and engineer products, processes, and infrastructure virtually with real-time data before producing or implementing them physically.

    “Our collaboration with Dassault Systèmes enables us to expand our mosaixx portfolio with industry-leading virtual twin technology. Engineers can carry out simulations and analyses with real-time data while streamlined collaboration empowers system integrators with flexible applications for enhanced adaptability and innovation,” said Quirin Goerz, CEO, KUKA Digital.

    “By partnering with KUKA, we can offer streamlined access to the 3DEXPERIENCE platform and our many applications such as CATIA, DELMIA and SOLIDWORKS. This will open up new possibilities for customers to benefit from the virtual world and collaborate and innovate in diverse sectors such as automotive, aerospace, electronics, metalworking, logistics, healthcare and more,” said Gian Paolo Bassi, senior vice president, customer role experience, Dassault Systèmes.

    Source: Dassault Systèmes

    About Dassault Systèmes

    Dassault Systèmes SE, founded in 1981 and headquartered in Vélizy-Villacoublay, France, is a leading multinational software corporation specializing in 3D design, digital mock-up, and product lifecycle management (PLM) solutions. The company has pioneered virtual worlds to improve real life for consumers, patients, and citizens. It offers a comprehensive suite of software applications, including CATIA for product design, SOLIDWORKS for 3D mechanical design, DELMIA for manufacturing operations, and SIMULIA for realistic simulation. The tools serve various industries such as aerospace, automotive, consumer goods, industrial equipment, life sciences, and high-tech. As of 2023, Dassault Systèmes employs approximately 23,811 individuals across 194 global offices. In 2023, the company reported revenues of €5.67 billion, reflecting its significant presence in the software industry. As of 2024, Dassault Systèmes serves over 350,000 enterprise customers across more than 150 countries.

    About KUKA

    KUKA, founded in 1898, is a German automation company headquartered in Augsburg, Germany. KUKA specializes in intelligent automation solutions and offers a range of products, including industrial robots, automated manufacturing systems, and assembly lines. The solutions serve automotive, electronics, metalworking, plastics, consumer goods, e-commerce, and healthcare industries. As of 2022, KUKA reported approximately €4.4 billion in revenues and employed around 15,000 people. In 2016, KUKA became a subsidiary of the Chinese multinational Midea Group.

  • Stial Expands AI Robotics with STFcore Acquisition

    Stial Steven Humanoid Robot

    TORONTO, Canada, Feb 18, 2025 – Amid the rapid evolution of AI and robotics technology, Stial Technologies announced the strategic acquisition of STFcore (ZhongQing Technology). Simultaneously, Stial introduced Stial Steven, the world’s first humanoid robot designed for polishing applications. The initiative not only accelerates Stial’s commitment to intelligent polishing technology but also signifies the dual integration of robotics in industrial and humanoid forms, advancing embodied intelligence toward a higher level of autonomous adaptability.

    Stial Technologies: A Pioneer in AI-Driven Flexible Polishing Robotics

    Stial Technologies has secured a leading position in the global AI-driven polishing robotics industry with its self-developed core technologies. Its technological advantages span flexible force control systems, AI multimodal perception technology, and advanced flexible polishing algorithms. It is a preferred solution provider in high-precision industries such as automotive manufacturing, aerospace, and semiconductor electronics.

    Stial focuses on the niche field of polishing, adopting an integrated approach covering hardware, software, algorithms, and process expertise. The company has also made advancements in humanoid embodied intelligence and AI vertical models, further enhancing robotic applications in complex scenarios.

    STFcore: A Breakthrough Innovator in 6D Force Sensor Technology

    STFcore (ZhongQing Technology), founded in 2018, has become a key player in the industry with its advanced 6D force sensor technology. Its products exceed international benchmarks in size, accuracy, and sampling frequency, making them ideal for polishing and humanoid robots that need precise force feedback. Delivering performance comparable to major suppliers such as Germany’s ME System and ATI in the U.S., STFcore’s sensors challenge the dominance of foreign technology and address the demand for high-end force sensors in Asia.

    6D Force Sensor: A Core Component in Polishing and Humanoid Robots

    In high-precision manufacturing, 6D force sensors play a significant role in enabling robots to achieve precise operations and adaptive control. The sensors monitor the contact force between the robot and the workpiece in real time, feeding data into Stial’s proprietary NextBrain AI system. This enables the robot to dynamically adjust polishing pressure, direction, and angle, achieving proactive, flexible control.
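
    The closed-loop adjustment described above can be sketched as a simple proportional force controller. This is an illustrative sketch only: the gain, target force, and function name are assumptions for the example, not details of Stial’s proprietary NextBrain system.

```python
# Minimal sketch of force-feedback polishing control, assuming a 6D force
# sensor that reports the normal contact force in newtons. The gain and
# target values are illustrative, not Stial's actual parameters.

def adjust_pressure(measured_force: float,
                    target_force: float,
                    gain: float = 0.5) -> float:
    """Return a pressure correction proportional to the force error."""
    error = target_force - measured_force
    return gain * error

# Pressing too hard (12 N vs. a 10 N target) yields a negative correction
# (back off); pressing too softly yields a positive one (push harder).
print(adjust_pressure(12.0, 10.0))  # -1.0
print(adjust_pressure(8.0, 10.0))   # 1.0
```

    In a real system this loop would run at the sensor’s sampling frequency, with the correction applied to the robot’s end effector along the polishing normal.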

    6D force sensors are crucial to embodied intelligence in humanoid robots and add significant value to the system. By acquiring STFcore, Stial now owns this critical hardware technology; combined with its flexible force-control algorithms, the company has strengthened its position in force-controlled polishing.

    Stial Steven: The Future Star of Polishing Robotics

    Another major highlight of this acquisition is the launch of Stial Steven, the world’s first humanoid robot dedicated to polishing applications. Unlike traditional industrial robotic arms, Stial Steven is designed with motion characteristics that closely resemble human movement, providing superior elasticity and adaptability. This makes it suitable for fine polishing in constrained spaces and on complex workpieces.

    Equipped with Stial’s proprietary process database, 6D force sensors, and a multimodal AI model integrating vision, haptics, and auditory perception, Stial Steven can adjust force, angles, and trajectories in real time during polishing tasks. Its flexibility allows it to handle intricate polishing operations that traditional robots struggle with, enhancing both production efficiency and processing precision. Whether working on complex curved surfaces or performing precision polishing on small components, Stial Steven can achieve a level of detail comparable to human workmanship while operating continuously, boosting enterprise productivity and competitiveness.

    Industrial Synergy and Future Prospects

    Through the acquisition, Stial gains advanced 6D force sensor technology and achieves robust integration from essential hardware to intelligent software. In the future, Stial will focus on improving AI multimodal technology. The company will use its knowledge of force, vision, sound algorithms, and data accumulation to make polishing robotics effective and intelligent.

    Whether in industrial robotics or humanoid robotics, Stial is leveraging its technological advantages and industrial synergies to become a benchmark in the polishing sector and lead the global intelligent manufacturing industry to new heights.

    Conclusion

    With the acquisition of STFcore, Stial has reinforced its technological position in AI-driven polishing robotics, making innovative advancements in integrating humanoid robots and 6D force sensors.

    Stial’s Founder & CEO, Hongbo Wang, stated: “Looking ahead, we will continue to advance AI and robotics, reshaping industrial production as a vertical ecosystem builder, and leveraging internationalization to create a new paradigm in global intelligent manufacturing.”

    Source: Stial

    About Stial

    Stial Tech, headquartered in Canada, has been a leader in AI-driven flexible polishing robots for over 15 years. Their advanced robotic systems simulate the touch of skilled craftsmen, offering solutions like the Magic Series and Model Series robots, the PolishX flexible force control system, and the NextBrain AI Mushroom Cloud System. The innovations serve industries such as automotive, aerospace, medical, and household goods, enhancing manufacturing processes with precision and efficiency.

  • RRD Invests in Robotics, HP Print Presses to Upgrade GA Facility

    Image: Business Wire

    CHICAGO, IL, Feb 6, 2025 – RRD is investing in robotic technologies and advanced HP digital print presses to upgrade its Austell, GA facility into a modern commercial print hub. The investment doubles the site’s workforce and sets a benchmark for rapid, high-volume digital production and automation.

    “RRD is driven by a passion for innovation,” said Craig Roberton, president of commercial print, RRD. “Our latest investment exemplifies a commitment to lead the industry and stay at the forefront of emerging technologies and trends that benefit our clients. We’re not just working to enhance RRD’s capabilities — we’re empowering our clients to achieve measurable results and thrive in an ever-evolving industry.”

    The investment enables RRD to deliver high-speed, high-quality, variable print-on-demand services to meet evolving client demand and anticipated industry shifts. The Austell site features HP’s new Indigo 120K Digital Press and PageWide Advantage 2200 with HP Brilliant Ink. The equipment is designed to empower high-volume production businesses by delivering speed, reliability, and cost-efficiency. The equipment also elevates RRD’s capabilities and delivers benefits such as:

    • higher ROI customer acquisition products
    • hyper-personalization, which enables brands to tailor messaging to each consumer based on their unique relationship with the brand
    • productivity and quality that rival offset printing
    • added versatility for short, medium and long runs on a variety of substrates
    • accelerated job production with AI-driven automation
    • Pantone color matching to ensure brand consistency
    • aqueous and ultraviolet coating options
    • increased sheet and saddle stitch booklet options
    • minimized waste

    RRD’s Austell location will also be among the first commercial print manufacturing facilities in America to implement industry-first robot technology.

    RRD Austell Facility. Image: RRD

    Through an ongoing partnership with HP, RRD will deploy two autonomous mobile robots designed specifically to bolster efficiency. The robots feature intelligent software that communicates with HP Indigo presses, allowing them to interface with the presses and manage pallet locations in the warehouse and near the presses and finishers. The robots can load and unload media directly into and out of presses, streamline workflows, eliminate safety risks, and keep the presses continuously operational to bring client projects to life faster.

    “Our longstanding and deep partnership with RRD demonstrates our mutual commitment to delivering innovative solutions that create new business opportunities in an industry where speed and personalization are increasingly important,” said Oran Sokol, VP & global head, Strategic Accounts, HP.

    RRD’s Austell facility is expanding production space and introducing new features for clients. These upgrades include rooms where customers can observe the production process firsthand. The improvements are set to finish in early 2025, with more digital updates planned at other RRD locations in the coming years.

    About RRD

    R.R. Donnelley & Sons Company (RRD), founded in 1864, is a global provider of multichannel solutions for marketing and business communications. Headquartered in Chicago, IL, RRD offers a comprehensive portfolio of proficiencies, including commercial printing, logistics, supply chain management, and digital marketing services. The company serves a diverse range of industries, such as retail, financial services, healthcare, and technology, assisting organizations worldwide in creating, managing, and executing their multichannel communications strategies. As of 2022, RRD reported net sales of $5.37 billion. The company employs approximately 28,000 individuals across various locations globally.

  • Jon Kirchner Joins DVIRC Board of Directors

    PHILADELPHIA, PA, Feb 3, 2025 – The Delaware Valley Industrial Resource Center (DVIRC) has announced that Jon Kirchner has joined its Board of Directors. Kirchner’s experience in operational strategy, product development, and growth initiatives will help DVIRC advance its mission of supporting regional manufacturers.

    Jon Kirchner, President, Perfero Advisory and DVIRC Board Member

    Kirchner serves as president of Perfero Advisory, advising founder/family-owned, private equity firms, public companies, and VC startups. His track record includes guiding transformative projects in industrial automation, robotics, satellite infrastructure, and data-driven operations. Kirchner focuses on delivering measurable outcomes and has helped organizations address challenges, apply targeted solutions, and enhance their competitive edge.

    Kirchner sees potential everywhere. “I’m excited to roll up my sleeves and work with the DVIRC leadership to deliver tangible results for manufacturers across the region,” he said. “It’s an opportunity to do very meaningful work that drives progress and strengthens our industrial base.”

    Kirchner’s appointment shows DVIRC’s commitment to creating solutions and using technology to deliver measurable growth. His expertise will help DVIRC grow its services, enabling local businesses to remain competitive in a changing market.

    “When we think about what the future holds for manufacturing, it’s leaders like Jon Kirchner who will make it happen,” said Chris Scafario, DVIRC’s president & CEO. “His ability to harness the power of technology and translate it into actionable growth strategies is unmatched.”

    About DVIRC

    The Delaware Valley Industrial Resource Center (DVIRC), established in 1988, is a private, non-profit organization headquartered in Philadelphia, Pennsylvania. Its mission is to support the profitable growth of small and mid-sized U.S. manufacturers. DVIRC offers services including business management consulting, operational excellence strategies, and top-line growth initiatives. The organization serves various industries such as contract manufacturing, defense, food and beverage, life sciences, and metal fabrication. As an affiliate of the National Institute of Standards and Technology’s Manufacturing Extension Partnership (NIST MEP) and the Pennsylvania Department of Community and Economic Development, DVIRC has access to a broad network of resources. Since its inception, DVIRC has assisted over 2,000 manufacturers, generating more than $2 billion in client impact.

  • InOrbit Unveils Space Intelligence to Connect Robotics and Enterprise

    MOUNTAIN VIEW, CA, Jan 29, 2025 – Launched by InOrbit.AI, Space Intelligence is a product suite that acts as the central nervous system for smart operations. By integrating autonomous robots, line-of-business data and spatial computing, Space Intelligence leverages physical AI to unlock a new level of performance and flexibility in warehouses, manufacturing facilities, and other demanding environments to drive smarter operations.

    “Traditional automation offers rigid and static operations,” said Florian Pestoni, CEO of InOrbit. “InOrbit Space Intelligence enables enterprises to leap forward with software-defined orchestration and leverage the latest developments in Physical AI and Reinforcement Learning to drive dynamic process optimization. Now enterprise customers can drive productivity and improve operational resilience.”

    Space Intelligence combines business, robotic, and spatial data to streamline real-world operations. It offers flexibility and clear insights, supporting ongoing improvements and achieving balance.

    InOrbit Space Intelligence offers enterprises:

    • Unified Command: Seamlessly connecting enterprise systems (WMS, ERP, WES) with robot operations for complete process visibility and control.
    • Intelligent Orchestration: Mapping and managing autonomous robots, IoT devices, and non-robotic equipment for synchronized and optimized facility operations.
    • Proactive Operations: Real-time traffic management, path planning and incident response to execute business workflows and improve utilization.
    • AI-Powered Optimization: Continuously learning and adapting, dynamically optimizing operations for peak efficiency and improved ROI.

    “We are excited to introduce Space Intelligence to enterprises,” continued Pestoni. “This transformative technology enables businesses to truly unlock the full potential of a facility with smart operations.”

    About InOrbit.AI

    InOrbit, headquartered in Mountain View, CA, specializes in cloud-based robot operations (RobOps) management. Founded in 2017, the company offers an AI-powered platform that helps enterprises and robot developers optimize autonomous robot fleets at scale. Serving industries like logistics, manufacturing, and smart cities, InOrbit streamlines deployment, monitoring, and optimization of robotic systems. Its platform enables seamless integration across multi-vendor fleets, improving efficiency and reliability.

    Source: InOrbit

  • ABB Robotics, Agilent Partner for Automated Lab Solutions

    ZURICH, Switzerland, Jan 27, 2025 – ABB Robotics and Agilent Technologies have signed a collaboration agreement to deliver automated laboratory solutions. Working together, ABB and Agilent will combine the benefits of their technologies to enable companies across multiple sectors, including pharma, biotechnology, energy, and food and beverage, to transform their laboratory operations by making processes such as research and quality control faster and more efficient.

    “By combining Agilent’s state-of-the-art analytical instrumentation and laboratory software solutions with ABB’s advanced robotics, we will empower laboratories to operate with greater speed, precision, and flexibility,” said Marc Segura, president ABB Robotics Division. “By using our robots to automate key processes, this partnership will enable companies to introduce new approaches to carrying out key laboratory tasks. This will improve efficiency and free up laboratory teams to take on more rewarding work.”

    Laboratories are facing growing demands to speed up research and bring products to market faster while maintaining quality. ABB and Agilent will collaborate to develop solutions that will automate manual tasks such as sample handling, testing, and data processing. These tools aim to improve efficiency, reduce delays, and boost productivity.

    “This collaboration with ABB Robotics underscores Agilent’s dedication to fostering an open ecosystem within laboratories, representing a significant advancement in our mission to revolutionize lab operations across diverse markets such as pharmaceuticals, biotechnology, environmental testing, and food safety,” said Tom Lillig, vice president and general manager of the Software Informatics Division at Agilent. “By integrating Agilent’s cutting-edge analytical technologies with ABB’s state-of-the-art robotics, we empower our customers to achieve faster, more reliable results, driving innovation and enhancing scientific outcomes. Seamless interoperability of all instruments, robots, and software is crucial to significantly boosting productivity,” Lillig added.

    About ABB Robotics

    ABB Robotics, a division of ABB Ltd., is headquartered in Zurich, Switzerland. The division is a front-runner in robotics, machine automation, and digital services, providing innovative solutions for a diverse range of industries, from automotive to electronics to logistics. As one of the world’s leading robotics and machine automation suppliers, ABB Robotics has shipped more than 500,000 robot solutions. ABB Ltd. employs approximately 105,000 people worldwide and reported revenues of $32.2 billion in 2023.

    About Agilent Technologies

    Agilent Technologies, established in 1999, is a global leader in life sciences, diagnostics, and applied chemical markets. The company provides laboratories worldwide with instruments, software, services, and consumables, enabling customers to gain the insights they seek. Agilent focuses its expertise on six key markets: food, environmental and forensics, pharmaceutical, diagnostics, research, and chemical and energy. Headquartered in Santa Clara, California, Agilent employs approximately 18,000 people and reported revenues of $6.51 billion in fiscal year 2024.

    Source: ABB

  • XoMotion Recognized as ‘Top Robot’ at CES by USA Today

    VANCOUVER, Canada, Jan 22, 2025 – Human in Motion Robotics announced that XoMotion, the world’s most advanced medical exoskeleton, was named the Top Robot at CES 2025 by USA Today.

    USA TODAY’s 50 Top Picks for CES 2025
    (CNW Group/Human in Motion Robotics Inc.)

    At CES 2025, XoMotion received recognition from attendees, media, and industry leaders for its transformative potential to assist individuals with mobility impairments caused by spinal cord injuries, strokes, and other neurological conditions.

    Other Highlights from CES 2025

    As CES 2025 concluded, Human in Motion Robotics celebrated a productive and well-received exhibition, highlighted by:

    • Recognition from visitors, media, and industry leaders: XoMotion earned recognition, including a CES 2025 Innovations Award, highlighting its role as a leader in user-focused technology.
    • Product demonstrations: Attendees, including people with mobility impairments and doctors, tested XoMotion’s self-balancing, hands-free, and natural movement features. They saw how the exoskeleton increases user independence and improves care.
    • Investor and partner engagement: The platform technology and multidisciplinary expertise of Human in Motion Robotics impressed investors and partners, positioning XoMotion as a useful solution.
    • Inspiration for the future: Meeting exhibitors and discovering new technologies at CES created valuable opportunities to collaborate and improve XoMotion.

    The event was another key step for Human in Motion Robotics, as the exposure and connections at CES will support future growth and innovation.

  • Innatera Unveils Spiking Neural Processor for Battery-Powered Sensing at CES

    LAS VEGAS, NV, Jan 14, 2025 – At CES 2025, Innatera showcased a Spiking Neural Processor (SNP) that transforms the way battery-powered devices make sense of the physical world.

    The Spiking Neural Processor, from Delft University of Technology spin-off Innatera, uses a unique architecture for brain-like cognition within an ultra-low power envelope.

    “At this pivotal moment in computing, Innatera’s breakthrough Spiking Neural Processor delivers unmatched energy-efficient, brain-inspired cognition for sensors, unlocking the promise of ambient intelligence,” said Sumeet Kumar, CEO of Innatera. “This revolutionary processor provides an all-in-one solution that simplifies and optimizes sensor data processing at the edge.”

    Innatera’s SNP combines a Spiking Neural Network (SNN) engine with a RISC-V processor core and other accelerators to deliver a complete solution in energy-constrained environments. The single-chip solution brings intelligence closer to sensors, enabling next-generation AI and signal processing for applications in consumer electronics, smart homes, and industrial IoT, such as audio interfaces, touch-free interfaces, presence detection, activity recognition, and ECG recognition.

    The SNP achieves high-performance pattern recognition at the sensor edge and enables real-time analysis of sensor data to detect and identify embedded patterns, with sub-milliwatt power dissipation and sub-millisecond latency.

    Ambient Intelligence marks a major departure from computing technology as we know it, paving the way for a future where digital interactions are as natural as breathing.

    At CES 2025, Innatera demonstrated how the SNP can transform computing in several real-world applications:

    • Audio Scene Classification: Audio scene classification allows devices to be aware of the environment they operate in and use this information to adapt their operation. For example, noise-canceling headphones adapting to ambient noise like airplanes or city buses.
    • Robust Human Presence Sensing: The detection of human presence is important in a wide range of indoor and outdoor applications, such as security cameras, smart lighting, video doorbells and smart TVs. Using a radar sensor, this demo showcases always-on, privacy-preserving human presence detection with accuracy and power efficiency.
    • Robust People Counting Using Far Infrared Sensors: Innatera showcased how its SNP enables advanced people counting and human presence detection with passive infrared sensors. Infrared technology is a non-intrusive, low-light, and privacy-preserving method for people counting and human presence detection.

    Innatera’s presence at CES 2025 follows a remarkable year of growth and development for the innovation-driven Delft University of Technology spin-off. The company previously announced the oversubscription of a $21-million Series A funding round that is accelerating the development of neuromorphic processors.

    Source: Innatera

  • NVIDIA Unveils Personal Super Computer, Predicts Robot-filled Future at CES

    NVIDIA Unveils Personal Super Computer, Predicts Robot-filled Future at CES

    It is brain surgery. In the world imagined by NVIDIA, AI is everywhere and helping everyone. Image: NVIDIA

    NVIDIA CEO Jensen Huang delivered the first keynote at CES 2025 and set the stage for the future of AI. With continuing advances in GPU hardware, the chips that most AI runs on these days, NVIDIA reaffirmed its position as the hardware leader in the tech industry.

    Huang’s first announcement was the GeForce RTX 5090, the “most powerful graphics card NVIDIA has ever developed,” perhaps paying homage to NVIDIA’s beginnings as a maker of graphics cards for gamers. Built on the Blackwell architecture, the RTX 5090 introduces significant advancements in performance and efficiency and may indeed raise the bar for gaming and content creation.

    Is that alligator skin, Jensen? Huang highlights the mechanical design of the RTX. “This is just a big fan,” he says, showing the circuit board with the GPU chips inside.

    “The GeForce RTX 5090 is not just an upgrade; it’s a revolution for gamers and creators alike,” Huang declared. “It blurs the line between the virtual and the real, making experiences more immersive than ever.”

    Graphics Imagined Mostly, Some Computed

    Rendering by AI. Of 33 million pixels generated in an image, only two million are computed, the rest “inferred” by AI, making real-time rendering of complex 3D scenes possible.

    Huang emphasized the rapid advancement of artificial intelligence, stating that AI is progressing at an “incredible pace.” He outlined the evolution of AI from perception AI—understanding images, words and sounds—to generative AI, which creates text, images, video and sound.

    Huang introduced the concept of “physical AI,” describing it as AI that can perceive, reason, plan and act but is deeply rooted in the physical world. NVIDIA’s GPUs and platforms are central to enabling breakthroughs across industries, including gaming, robots and autonomous vehicles, said Huang (more on that later).

    Huang underscored the exponential rate of data creation.

    “In the next couple of years, humanity will produce more data than it has produced since the beginning,” he said.

    Huang announced that the company’s latest generation AI processor series, Blackwell, is now in full production. He claimed that every major cloud service provider has systems up and running with Blackwell, and showcased systems from 15 computer manufacturers at the event. Huang emphasized that Blackwell is the engine of AI, bringing significant advancements to PC gamers, developers, and creatives.

    Gladiator Huang? You had to be there. The shield is a representation of the Grace Blackwell NVLink72, a 1.5-ton supercomputer that is assembled onsite.

    A Token Investment

    Huang described how the new GPUs, which deliver better performance while using less energy, will allow data centers to make more money. Not only will they cut cooling costs, a major expense for data centers, but they will also enable data centers to generate more AI tokens—a critical metric for monetizing AI services.

    NVIDIA’s latest Blackwell-based GPUs, such as the RTX 5090 and the Grace Blackwell NVLink 72 systems, deliver significantly better energy efficiency so data centers can achieve the same computational output with far less electricity.

    Or a data center could run at full tilt and generate more AI tokens—the key unit of output in the AI-driven economy. Tokens, the blocks of AI-generated text or other outputs, can be produced faster and at lower cost.

    For example, suppose a data center previously generated 1 billion tokens daily at a specific energy cost. An improved efficiency might now allow them to generate 1.5 billion tokens using the same energy, directly increasing revenue potential.
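
    The arithmetic above can be made concrete with a small sketch; the token price and volumes are assumed figures for illustration, not NVIDIA’s numbers.

```python
# Hypothetical illustration of the efficiency-to-revenue arithmetic.
# All figures are assumptions for the example, not NVIDIA data.

def daily_revenue(tokens_per_day: float, price_per_million: float) -> float:
    """Revenue from selling generated tokens at a flat per-million-token rate."""
    return tokens_per_day / 1_000_000 * price_per_million

baseline_tokens = 1_000_000_000   # 1 billion tokens/day at a fixed energy budget
efficiency_gain = 1.5             # 50% more tokens for the same energy
price = 2.00                      # assumed price in dollars per million tokens

print(daily_revenue(baseline_tokens, price))                    # 2000.0
print(daily_revenue(baseline_tokens * efficiency_gain, price))  # 3000.0
```

    At an assumed $2 per million tokens, the same energy budget goes from $2,000 to $3,000 of output per day, which is the revenue lever Huang described.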

    Agentic AI

    An agent to assist with every job. “The IT department of the future will be like the HR of AI agents,” said Huang.

    “The age of AI agentics is here,” said Huang, signaling a transformative shift in artificial intelligence. He described agentic AI as a “multitrillion-dollar opportunity” that will revolutionize work across industries. Huang emphasized that AI agents are becoming the new digital workforce, capable of reasoning, planning, and acting autonomously. Among the announcements supporting this shift:

    • NVIDIA Cosmos: A platform designed to advance physical AI by providing new models and video data processing pipelines for robots, autonomous vehicles, and vision AI.
    • AI Foundation Models for RTX PCs: These models feature NVIDIA NIM microservices and AI Blueprints for crafting digital humans, podcasts, images, and videos, enabling the development of specialized AI agents to automate tasks.

    Companies will develop these agents into assistants for many of their roles, predicts Huang. Perhaps the best example is developers, for whom AI code generation is already widely used.

    There are 30 million developers who could use an AI agent, says Huang.

    Foundation for the World

    Digital twin of a warehouse.

    Huang thinks the next big AI (after LLMs, which work only with text) is physical AI, which works with physics. In the keynote, Huang introduced NVIDIA’s Cosmos, the world’s first World Foundation Model, which uses physical AI.

    This foundational knowledge allows Cosmos to create simulations that reflect real-world behavior, making it highly suitable for industries like robotics, autonomous vehicles, and industrial AI.

    Cosmos will then be able to model the physical world. It has learned its physics not from sitting in class or reading books but from “20 million hours of video” of objects in motion, interactions between objects, and physical environments. This should allow it to predict interactions and motion for industries where real-world data is limited or costly to capture, such as robotics, autonomous vehicles, factory automation, and warehouse operations and optimization.

    Huang announced that Cosmos will be open-licensed and freely available on GitHub.

    The Cosmos World Foundation Model is a sophisticated AI system that ingests and understands multimodal data, including text, images, and video, to generate realistic simulations and predictions about the physical world. Unlike LLMs that process text-based tokens, the World Foundation Model generates “action tokens,” enabling it to predict and simulate real-world behavior based on actual Newtonian physics.

    With Cosmos physical AI, we will be able to generate real worlds from sketches and models and depict their variations, fully rendered, using Omniverse (below).

    Huang discussed autonomous vehicles (AVs) as one of the most significant applications of physical AI and NVIDIA’s advanced computing platforms. He outlined the current state of AV technology, NVIDIA’s contributions, and how physical AI plays a critical role in testing, training, and advancing the capabilities of autonomous systems (more on that later).

    DIGITS – Your Own Personal Supercomputer

    Huang introduced DIGITS, a personal supercomputer that will be available later this year (May). The diminutive computer (see image above) is the productization of a project called “Deep Learning GPU Intelligence Training System” (DIGITS), a name NVIDIA shortened to DGX when it introduced the AI computing platform of the same name in 2016.

    DIGITS is meant to be used for the development of AI applications. Users can access the whole of the NVIDIA AI software library for experimentation and prototyping, including software development kits, orchestration tools, frameworks, and models available in the NVIDIA NGC catalog and on the NVIDIA Developer portal. Developers can fine-tune models with the NVIDIA NeMo framework, accelerate data science with NVIDIA RAPIDS libraries, and work with common frameworks and tools such as PyTorch, Python, and Jupyter notebooks, according to NVIDIA.

    That NVIDIA is making DIGITS, a desktop supercomputer that can also run Windows applications (via WSL 2, the Windows Subsystem for Linux), should be a wake-up call, if not a fire alarm, for every PC manufacturer. Here is the second most valuable computer hardware company, no longer content to make chips and boards for PC manufacturers but stepping out to make its own PCs. It’s as if makers of jet engines decided to make airplanes. No specifics, like benchmark comparisons, were given at CES, but since DIGITS will operate at the scale and speed of DGX systems, thanks to the advanced architecture of the GB10 chip and its integration with NVIDIA’s AI software stack, we expect it to blow the doors off any traditional Intel-based PC running AI-assisted software… and what software will not be within the year?

    An AI-Based Windows?

    Huang outlined a vision for a future version of Windows that would be deeply integrated with AI capabilities. He referenced the revolutionary impact of Windows 95, which introduced multimedia APIs that transformed the software development landscape. He likened this transformative potential to what he envisions for the future of AI on Windows PCs.

    Huang introduced the concept of “generative APIs,” which would allow developers to integrate AI directly into applications on Windows PCs. These APIs would enable:

    • Generative AI for Language: Advanced natural language processing for creating text and responding to queries.
    • Generative AI for Graphics: Tools for producing 3D models, animations, and video content.
    • Generative AI for Sound: Capabilities for audio synthesis and manipulation.

    These APIs would extend the functionality of traditional computing by bringing AI-assisted tools directly into everyday applications.

    Currently, NVIDIA is using Windows Subsystem for Linux (WSL) 2, which provides a dual-operating-system environment optimized for AI development.

    Autonomous Vehicles

    NVIDIA’s automotive vertical business is currently at $4 billion and is expected to grow to approximately $5 billion in fiscal year 2026, according to NVIDIA.

    Huang highlighted that autonomous vehicles, after years of development, are now becoming mainstream, citing the success of companies like Waymo, Tesla, and Aurora (AV trucks). He characterized the AV industry as likely to become the first multitrillion-dollar robotics market, driven by the massive demand for self-driving cars and trucks.

    Each year, 100 million cars are built, said Huang. A billion cars are on the road globally, collectively driving a trillion miles annually. Autonomous capabilities will revolutionize how these vehicles operate.

    NVIDIA is working with major automakers like Mercedes, BYD, Toyota and startups like Zoox and Waymo to develop next-generation autonomous systems.

    Instead of a 3 Body Problem, we have a 3 Computer Solution, said Jensen Huang at CES, invoking the sci-fi series. Image: Netflix.

    Autonomous vehicles require a “three-computer solution” tailored for different stages of development and deployment:

    1. Training AI Models (DGX Systems) for training AV models using vast datasets and simulations.
    2. Simulation and Synthetic Data Generation (Omniverse & Cosmos) to create environments and situations to enable extensive tests before real-world deployment.
    3. In-Car AI Systems (Drive AGX Thor): NVIDIA’s latest AI supercomputer for cars processes massive amounts of sensory data, from cameras to LiDAR and radar, to make real-time driving decisions.

    Thor offers 20 times the processing power of its predecessor, making it suitable not only for AVs but also for robotics and other high-performance applications.

    Huang emphasized the importance of testing AV systems with synthetic data and simulations powered by physical AI. These technologies allow NVIDIA to simulate real-world driving conditions at unprecedented scale and fidelity. NVIDIA’s Cosmos platform generates lifelike driving scenarios based on real-world data, including weather, lighting, and road conditions. This allows AV models to train on edge cases that are rare or dangerous to capture in real life.

    Using NVIDIA Omniverse, AV systems can simulate billions of miles of driving by replaying and altering existing driving logs. For example, rain and snow can be added to recorded real-world footage of a drive, the time of day changed, traffic increased, and more. This would allow the AV industry to turn thousands of real-world drives into billions of simulated miles, amplifying the training data exponentially.
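
    The scale of that amplification claim can be sanity-checked with back-of-the-envelope arithmetic. Every number in the sketch below is a hypothetical assumption chosen for illustration, not an NVIDIA figure.

```python
# Back-of-the-envelope sketch of the data-amplification claim; every
# number below is a hypothetical assumption, not an NVIDIA figure.

def simulated_miles(real_drives: int, miles_per_drive: float,
                    variations_per_drive: int) -> float:
    """Each recorded drive is replayed under many altered conditions
    (weather, time of day, traffic), multiplying the training mileage."""
    return real_drives * miles_per_drive * variations_per_drive

# e.g., 10,000 recorded drives of 20 miles, each re-rendered 5,000 ways
total = simulated_miles(10_000, 20, 5_000)
print(f"{total:,.0f} simulated miles")  # 1,000,000,000 simulated miles
```

    Even at these modest assumed numbers, ten thousand real drives become a billion simulated miles, which is the kind of multiplication the keynote described.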

    Huang stressed that the ability to simulate edge cases—such as unpredictable pedestrian behavior or hazardous weather—is critical for reducing risk with autonomous vehicles.

    Huang concluded his keynote with a forward-looking statement about the transformative role of AI and computing in society. “We are entering an era where AI will become as ubiquitous as electricity,” he said. “From healthcare to entertainment, from scientific research to autonomous systems, NVIDIA’s technologies will drive the next wave of innovation.”

  • PxE Next-Gen 3D Camera to Debut at CES 2025

    PxE Next-Gen 3D Camera to Debut at CES 2025

    REHOVOT, ISRAEL, Jan 1, 2025 – PxE Holographic Imaging has announced it will debut its Holographic RGB-IR-Depth Camera at CES 2025. PxE’s technology enables 3D imaging on standard 2D digital cameras (in smartphones, laptops, cars, and other camera-based systems), turning them into 3D cameras that seamlessly merge the physical and digital worlds. A CES 2025 Innovation Awards honoree, PxE will showcase its Holographic RGB-IR-Depth Camera technology in Las Vegas from Jan 5 – 10, 2025.

    View a short video about PxE here.

    Whether using film photography or digital cameras, the fundamental physics of photography has remained virtually unchanged since its advent in the 1880s. Scientists and engineers have tried for decades to achieve holographic imaging with digital cameras, but until now no one had realized it without degrading image quality or relying on cost-prohibitive technology such as lasers. PxE’s Holographic RGB-IR-Depth Camera ushers in one of the most meaningful transformations in imaging since the invention of film photography, enabling 2D color, infrared, and depth images per frame on a single sensor. PxE’s technology not only has the potential to replace all existing 2D cameras on the market; it can also revolutionize how consumers and machines experience and capture the world through their everyday devices.

     “Our RGB-IR-Depth Camera reduces the size and number of sensors needed along with the cost and complexity associated with today’s perception solutions,” added PxE Holographic Imaging CEO and co-founder Yoav Berlatzky. “We imagine a world where our 3D camera technology is ubiquitous: embedded in all cars with autonomous driving and obstacle avoidance features, used by consumers as facial ID to unlock their smartphone or laptop, to make payments via banking or shopping apps, or used for robotic navigation through advanced machine vision systems.”

     How PxE’s Holographic RGB-IR-Depth Camera Works


    PxE’s 3D imaging technology leverages the wave-like nature of light to simultaneously capture color, infrared, and depth while improving the light sensitivity of cameras by 4x. The technology captures light’s wavefront (its wavelength and curvature) to generate a “white light hologram,” then decodes the hologram to output color, infrared, and depth images without degrading image resolution. The result is an incredibly clear, radically upgraded three-dimensional image alongside high-quality 2D color and infrared images, all from a single frame and sensor. Utilizing its hardware and software, PxE upgrades standard 2D cameras into a multi-functional 3D system while maintaining the size, cost structure, and image quality of standard cameras.

    The PxE Holographic RGB-IR-Depth camera is not only multi-functional; it also addresses long-standing limitations of perception solutions. For example, 2D cameras have inherent limits on low-light performance. The Holographic RGB-IR-Depth camera dramatically improves on the status quo by transmitting 100 percent of the available light to the sensor (compared to 25 percent in 2D cameras), providing much better low-light performance than today’s cameras. Additionally, it can provide natively fused color and depth images, mitigating synchronization issues and enabling a single line of sight.
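
    The release’s 4x sensitivity claim follows directly from its two transmission figures; a one-line check makes the relationship explicit:

```python
# Quick consistency check of the release's light-transmission figures.
conventional = 0.25   # fraction of light reaching a conventional 2D sensor
pxe = 1.00            # claimed fraction for the holographic camera
print(f"{pxe / conventional:.0f}x light sensitivity")  # 4x light sensitivity
```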

     “Today’s ever-connected, AI-driven world demands more and more from the underlying perception-based hardware systems they utilize, which are essentially the ‘eyes’ of the system — whether they’re used to drive our cars or keep people safe on factory floors. By enabling high-resolution images and accurate depth maps from a single sensor, PxE facilitates a safer world and is available for all machine vision systems and applications and at all price points,” said Yanir Hainick, chief technology officer, PxE Holographic Imaging.

     PxE’s RGB-IR-Depth Camera technology will be sold to OEMs for use inside their devices — smartphones, laptops, cars, drones, robots, security, and precision imaging equipment.

    Source: PxE