IRVINE, CA, Jan 14, 2025 – Roland DGA Corp. has announced the addition of the next-generation XP-640 – a Pro-level 64-inch eco-solvent inkjet printer that combines unmatched image quality and outstanding productivity – to Roland DG’s award-winning TrueVIS series.
“Our fastest TrueVIS printer to date, the XP-640 is uniquely engineered to maximize production capability while maintaining the amazing color and print quality that has become the hallmark of the TrueVIS product family,” said Roland DGA’s product manager of Digital Print, Daniel Valade. “This next-generation inkjet incorporates several exciting innovations, including new printheads, upgraded control technology, and new inks, to provide users with the best of both worlds – outstanding speed and stunning output.”
The XP-640’s newly designed dual staggered printheads eject finer ink droplets at higher densities to produce vibrant graphics with fewer passes. New high-speed data control functionality further enhances the printheads’ performance, enabling the XP-640 to print at speeds of up to 818 square feet per hour while achieving high-definition image quality.
New TH eco-solvent inks formulated for the XP-640 deliver an expanded color scope that opens up creative opportunities for print professionals. In addition to CMYK, Orange, and Green, the TH ink set includes a new Red ink option, allowing users to achieve the highest accuracy with red and orange expression – frequently used colors in sign graphics. The wider color gamut also allows for incredibly precise color matching, making it simple to reproduce brand and corporate colors. TH inks are also extremely cost-effective, with an ink price per milliliter that’s lower than that of conventional ink. Improved profiling reduces ink usage as well, allowing for maximum high-volume production profitability.
True Rich Color, a print setting that optimizes the device’s ability to create realistic graphics that combine vivid colors with neutral grays, smooth gradations, and natural skin tones, has evolved further with the XP-640. Based on the True Rich Color Standard setting, the user can select the ideal color variation depending on the client’s requirements.
The XP-640 features a user-friendly 7-inch color LCD touch panel, a media setup function that automatically performs accurate media gap and compensation adjustment, a media take-up that reduces media skewing for stable production, and a perforated sheet cutter for quick and efficient post-processing work.
Like all Roland DG devices, the XP-640 is built to provide superior performance over the long haul, with minimal maintenance required. To minimize downtime, standard maintenance such as replacing caps, wipers, wiper cleaners, and other parts around the printheads has been made easier than ever – users can perform these operations independently without needing a service engineer. XP-640 users also have access to the cloud-based Roland DG Connect app, which helps optimize output operations with operational visibility, simple profit calculation, profile downloads, and more.
The XP-640 also comes bundled with powerful, intuitive VersaWorks 6 RIP software, which offers advanced features for managing print output effortlessly. VersaWorks 6 boasts HARLEQUIN RIP dual-core operation for faster handling of complex files, simple-to-use drag-and-drop functionality, new tools for easier and more precise color matching, built-in Roland DG and PANTONE libraries, nesting for up to 86 jobs, a predictive ink calculator, and much more.
“While sign and graphic production has diversified with the introduction of UV and latex inks, customer demand for eco-solvent ink and the exceptional vibrancy and outdoor durability it provides remains incredibly strong,” noted Valade. “The new XP-640 takes eco-solvent printing to its highest level yet, offering print quality, productivity, cost efficiency, and ease of use that the competition just can’t match. We’re confident that the impressive capabilities of this advanced machine will enable users to increase efficiency, exceed the expectations of their customers, and significantly grow their businesses.”
LAS VEGAS, NV, Jan 14, 2025 – At CES 2025, Innatera showcased its Spiking Neural Processor (SNP), which transforms the way battery-powered devices make sense of the physical world.
The Delft University of Technology spin-off’s Spiking Neural Processor uses a unique architecture for brain-like cognition within an ultra-low power envelope.
“At this pivotal moment in computing, Innatera’s breakthrough Spiking Neural Processor delivers unmatched energy-efficient, brain-inspired cognition for sensors, unlocking the promise of ambient intelligence,” said Sumeet Kumar, CEO of Innatera. “This revolutionary processor provides an all-in-one solution that simplifies and optimizes sensor data processing at the edge.”
Innatera’s SNP combines a Spiking Neural Network (SNN) engine with a RISC-V processor core and other accelerators to deliver a complete solution in energy-constrained environments. The single-chip solution brings intelligence closer to sensors, enabling next-generation AI and signal processing for applications in consumer electronics, smart homes, and industrial IoT, such as audio interfaces, touch-free interfaces, presence detection, activity recognition, and ECG recognition.
The SNP achieves high-performance pattern recognition at the sensor edge and enables real-time analysis of sensor data to detect and identify embedded patterns, with sub-milliwatt power dissipation and sub-millisecond latency.
Ambient Intelligence marks a major departure from computing technology as we know it, paving the way for a future where digital interactions are as natural as breathing.
At CES 2025, Innatera demonstrated how the SNP can transform computing in several real-world applications:
Audio Scene Classification: Audio scene classification allows devices to be aware of the environment they operate in and use this information to adapt their operation. For example, noise-canceling headphones adapting to ambient noise like airplanes or city buses.
Robust Human Presence Sensing: The detection of human presence is important in a wide range of indoor and outdoor applications, such as security cameras, smart lighting, video doorbells and smart TVs. Using a radar sensor, this demo showcases always-on, privacy-preserving human presence detection with accuracy and power efficiency.
Robust People Counting Using Far Infrared Sensors: Innatera showcased how its SNP enables advanced people counting and human presence detection with passive infrared sensors. Infrared technology is a non-intrusive, low-light, and privacy-preserving method for people counting and human presence detection.
Innatera’s presence at CES 2025 follows a remarkable year of growth and development for the innovation-driven Delft University of Technology spin-off. The company recently announced the oversubscription of a $21-million Series A funding round that is accelerating the development of its neuromorphic processors.
It is brain surgery. In the world imagined by NVIDIA, AI is everywhere and helping everyone. Image: NVIDIA
NVIDIA CEO Jensen Huang delivered the first keynote at CES 2025 and set the stage for the future of AI. With continuing advances in GPU hardware, the chips that most AI runs on these days, NVIDIA reaffirmed its position as the hardware leader in the tech industry.
Huang’s first announcement was the GeForce RTX 5090, the “most powerful graphics card NVIDIA has ever developed,” perhaps paying homage to NVIDIA’s beginnings as a maker of graphics cards for gamers. Built on the Blackwell architecture, the RTX 5090 introduces significant advancements in performance and efficiency and may indeed raise the bar for gaming and content creation.
Is that alligator skin, Jensen? Huang highlights the mechanical design of the RTX. “This is just a big fan,” he says, showing the circuit board with the GPU chips inside.
“The GeForce RTX 5090 is not just an upgrade; it’s a revolution for gamers and creators alike,” Huang declared. “It blurs the line between the virtual and the real, making experiences more immersive than ever.”
Graphics Imagined Mostly, Some Computed
Rendering by AI. Of 33 million pixels generated in an image, only two million are computed, the rest “inferred” by AI, making real-time rendering of complex 3D scenes possible.
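The caption’s figures imply just how much of each frame is AI-inferred. A quick back-of-the-envelope check, using only the numbers quoted above (33 million output pixels, 2 million computed):

```python
# Fraction of each frame that is traditionally rendered vs. AI-inferred,
# using the pixel counts quoted in the keynote.
computed = 2_000_000
total = 33_000_000
inferred = total - computed

print(f"Inferred pixels: {inferred:,} ({inferred / total:.0%} of the frame)")
print(f"Computed fraction: {computed / total:.1%}")
```

Roughly 94% of the frame is inferred rather than rendered, which is what makes real-time rendering of such complex scenes feasible.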
Huang emphasized the rapid advancement of artificial intelligence, stating that AI is progressing at an “incredible pace.” He outlined the evolution of AI from perception AI—understanding images, words and sounds—to generative AI, which creates text, images, video and sound.
Huang introduced the concept of “physical AI,” describing it as AI that can perceive, reason, plan and act but is deeply rooted in the physical world. NVIDIA’s GPUs and platforms are central to enabling breakthroughs across industries, including gaming, robotics and autonomous vehicles, said Huang (more on that later).
Huang underscored the exponential rate of data creation.
“In the next couple of years, humanity will produce more data than it has produced since the beginning,” he said.
Huang announced that the company’s latest-generation AI processor series, Blackwell, is now in full production, claiming that every major cloud service provider has systems up and running with Blackwell, and showcased systems from 15 computer manufacturers at the event. Huang emphasized that Blackwell is the engine of AI, bringing significant advancements to PC gamers, developers, and creatives.
Gladiator Huang? You had to be there. The shield is a representation of the Grace Blackwell NVLink72, a 1.5 ton supercomputer that is assembled onsite.
A Token Investment
Huang explained how his new GPUs, which deliver better performance while using less energy, will allow data centers to make more money. Not only will they cut cooling costs, a major expense for data centers, but they will enable data centers to generate more AI tokens—a critical metric for monetizing AI services.
NVIDIA’s latest Blackwell-based GPUs, such as the RTX 5090 and the Grace Blackwell NVLink 72 systems, deliver significantly better energy efficiency so data centers can achieve the same computational output with far less electricity.
Or a data center could run at full tilt and generate more AI tokens—the key unit of output in the AI-driven economy. Tokens, the blocks of AI-generated text or other outputs, can be produced faster and at lower cost.
For example, suppose a data center previously generated 1 billion tokens daily at a specific energy cost. An improved efficiency might now allow them to generate 1.5 billion tokens using the same energy, directly increasing revenue potential.
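The revenue arithmetic in that example can be sketched directly. The token counts are the article’s; the price per million tokens is a hypothetical placeholder, not an NVIDIA figure:

```python
# Illustration of the efficiency argument: same energy budget,
# more tokens, proportionally more revenue.
tokens_before = 1_000_000_000   # tokens/day at a given energy cost
tokens_after = 1_500_000_000    # tokens/day after the efficiency gain
price_per_million = 2.00        # assumed $ per million tokens (hypothetical)

revenue_before = tokens_before / 1e6 * price_per_million
revenue_after = tokens_after / 1e6 * price_per_million

print(f"Daily revenue: ${revenue_before:,.0f} -> ${revenue_after:,.0f}")
print(f"Uplift: {revenue_after / revenue_before - 1:.0%}")
```

Since energy spend is held constant, the 50% increase in tokens flows straight through to revenue potential.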
Agentic AI
An agent to assist with every job. “The IT department of the future will be like the HR of AI agents,” said Huang.
“The age of AI agentics is here,” said Huang, signaling a transformative shift in artificial intelligence. He described agentic AI as a “multitrillion-dollar opportunity” that will revolutionize work across industries. Huang emphasized that AI agents are becoming the new digital workforce, capable of reasoning, planning, and acting autonomously. He announced several offerings to support this shift:
NVIDIA Cosmos: A platform designed to advance physical AI by providing new models and video data processing pipelines for robots, autonomous vehicles, and vision AI.
AI Foundation Models for RTX PCs: These models feature NVIDIA NIM microservices and AI Blueprints for crafting digital humans, podcasts, images, and videos, enabling the development of specialized AI agents to automate tasks.
Companies will develop these agents into assistants for many of their roles, predicts Huang. Perhaps the best example is developers, for whom AI code generation is already widely used.
There are 30 million developers who could use an AI agent, says Huang.
Foundation for the World
Digital twin of a warehouse.
Huang thinks the next big AI (after LLMs, which work only with text) is physical AI, which works with physics. In the keynote, Huang introduced NVIDIA’s Cosmos, the world’s first World Foundation Model, which uses physical AI.
This foundational knowledge allows Cosmos to create simulations that reflect real-world behavior, making it highly suitable for industries like robotics, autonomous vehicles, and industrial AI.
Cosmos will then be able to model the physical world. It has learned all its physics not from sitting in class or reading books but from “20 million hours of video” of objects in motion, interactions between objects, and physical environments. This ought to allow it to predict interactions and motion for applications where real-world data is limited or costly to capture, such as robotics, autonomous vehicles, factory automation, and warehouse operations and their optimization.
Huang announced that Cosmos will be freely available, openly licensed, and hosted on GitHub.
The Cosmos World Foundation Model is a sophisticated AI system that ingests and understands multimodal data, including text, images, and video, to generate realistic simulations and predictions about the physical world. Unlike LLMs that process text-based tokens, the World Foundation Model generates “action tokens,” enabling it to predict and simulate real-world behavior based on actual Newtonian physics.
With Cosmos physical AI, we will be able to generate real worlds from sketches and models and depict their variations fully rendered using Omniverse (below).
Huang discussed autonomous vehicles (AVs) as one of the most significant applications of physical AI and NVIDIA’s advanced computing platforms. He outlined the current state of AV technology, NVIDIA’s contributions, and how physical AI plays a critical role in testing, training, and advancing the capabilities of autonomous systems (more on that later).
DIGITS – Your Own Personal Supercomputer
Huang introduced DIGITS, a personal supercomputer that will be available later this year (May). The diminutive computer (see image above) is the productization of the project called “Deep Learning GPU Intelligence Training System” (DIGITS), which NVIDIA shortened to DGX and introduced as an AI computing platform of the same name in 2016.
DIGITS is meant to be used for the development of AI applications. Users can access the whole of the NVIDIA AI software library for experimentation and prototyping, including software development kits, orchestration tools, frameworks and models available in the NVIDIA NGC catalog and on the NVIDIA Developer portal. Developers can fine-tune models with the NVIDIA NeMo framework, accelerate data science with NVIDIA RAPIDS libraries and run common frameworks such as PyTorch, Python and Jupyter notebooks, according to NVIDIA.
That NVIDIA is making a desktop supercomputer that can work with Windows applications (via WSL 2, the Windows Subsystem for Linux) should be a wake-up call, if not a fire alarm, for every PC manufacturer. Here is the second most valuable computer hardware company, no longer content to make chips and boards for PC manufacturers, stepping out to make its own PCs. It’s as if makers of jet engines decided to make airplanes. No specifics, like benchmark comparisons, were given at CES, but since DIGITS will operate at the scale and speed of DGX systems, thanks to the advanced architecture of the GB10 chip and its integration with NVIDIA’s AI software stack, we expect it to blow the doors off any traditional Intel-based PC running AI-assisted software… and what software will not be within the year?
An AI-Based Windows?
Huang outlined a vision for a future version of Windows that would be deeply integrated with AI capabilities. He referenced the revolutionary impact of Windows 95, which introduced multimedia APIs that transformed the software development landscape. He likened this transformative potential to what he envisions for the future of AI on Windows PCs.
Huang introduced the concept of “generative APIs,” which would allow developers to integrate AI directly into applications on Windows PCs. These APIs would enable:
Generative AI for Language: Advanced natural language processing for creating text and responding to queries.
Generative AI for Graphics: Tools for producing 3D models, animations, and video content.
Generative AI for Sound: Capabilities for audio synthesis and manipulation.
These APIs would extend the functionality of traditional computing by bringing AI-assisted tools directly into everyday applications.
Currently, NVIDIA is using Windows Subsystem for Linux (WSL) 2, which provides a dual-operating-system environment optimized for AI development.
Autonomous Vehicles
NVIDIA’s automotive vertical business is currently at $4 billion and is expected to grow to approximately $5 billion in fiscal year 2026, according to NVIDIA.
Huang highlighted that autonomous vehicles, after years of development, are now becoming mainstream, citing the success of companies like Waymo, Tesla, and Aurora (AV trucks). He characterized the AV industry as likely to become the first multitrillion-dollar robotics market, driven by massive demand for self-driving cars and trucks.
Each year, 100 million cars are built, said Huang. A billion cars are on the road globally, collectively driving a trillion miles annually. Autonomous capabilities will revolutionize how these vehicles operate.
NVIDIA is working with major automakers like Mercedes, BYD, Toyota and startups like Zoox and Waymo to develop next-generation autonomous systems.
Instead of a 3 Body Problem, we have a 3 Computer Solution, said Jensen Huang at CES, invoking the sci-fi series. Image: Netflix.
Autonomous vehicles require a “three-computer solution” tailored for different stages of development and deployment:
Training AI Models (DGX Systems) for training AV models using vast datasets and simulations.
Simulation and Synthetic Data Generation (Omniverse & Cosmos) to create environments and situations to enable extensive tests before real-world deployment.
In-Car AI Systems (Drive AGX Thor), NVIDIA’s latest AI supercomputer for cars, processes massive amounts of sensory data, from cameras to LiDAR and radar, to make real-time driving decisions.
Thor offers 20 times the processing power of its predecessor, making it suitable not only for AVs but also for robotics and other high-performance applications.
Huang emphasized the importance of testing AV systems with synthetic data and simulations powered by physical AI. These technologies allow NVIDIA to simulate real-world driving conditions at an unprecedented scale and fidelity. NVIDIA’s Cosmos platform generates lifelike driving scenarios based on real-world data, including weather, lighting, and road conditions. This allows AV models to train on edge cases that are rare or dangerous to capture in real life. Using NVIDIA Omniverse, AV systems can simulate billions of miles of driving by replaying and altering existing driving logs. For example, rain and snow can be added to recorded real-world footage of a drive, the time of day changed, traffic increased, and more. This allows the AV industry to turn thousands of real-world drives into billions of simulated miles, amplifying the training data exponentially.
Huang stressed that the ability to simulate edge cases—such as unpredictable pedestrian behavior or hazardous weather—is critical for reducing risk with autonomous vehicles.
Huang concluded his keynote with a forward-looking statement about the transformative role of AI and computing in society. “We are entering an era where AI will become as ubiquitous as electricity,” he said. “From healthcare to entertainment, from scientific research to autonomous systems, NVIDIA’s technologies will drive the next wave of innovation.”
LAS VEGAS, NV, Jan 13, 2025 – NVIDIA unveiled NVIDIA Project DIGITS, a personal AI supercomputer that provides AI researchers, data scientists and students worldwide with access to the power of the NVIDIA Grace Blackwell platform.
Project DIGITS features the new NVIDIA GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.
With Project DIGITS, users can develop and run inference on models using their own desktop system, then seamlessly deploy the models on accelerated cloud or data center infrastructure.
“AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers,” said Jensen Huang, founder and CEO of NVIDIA. “Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI.”
GB10 Superchip Provides a Petaflop of Power-Efficient AI Performance
The GB10 Superchip is a system-on-a-chip (SoC) based on the NVIDIA Grace Blackwell architecture and delivers up to 1 petaflop of AI performance at FP4 precision.
GB10 features an NVIDIA Blackwell GPU with latest-generation CUDA cores and fifth-generation Tensor Cores, connected via NVLink-C2C chip-to-chip interconnect to a high-performance NVIDIA Grace CPU, which includes 20 power-efficient cores built on the Arm architecture. MediaTek, known for its Arm-based SoC designs, collaborated on the design of GB10, contributing to its power efficiency, performance and connectivity.
The GB10 Superchip enables Project DIGITS to deliver powerful performance using only a standard electrical outlet. Each Project DIGITS features 128GB of unified, coherent memory and up to 4TB of NVMe storage. With the supercomputer, developers can run up to 200-billion-parameter large language models to supercharge AI innovation. In addition, using NVIDIA ConnectX networking, two Project DIGITS AI supercomputers can be linked to run up to 405-billion-parameter models.
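The model-size claims line up with the stated memory if we assume weights are stored at 4-bit (FP4) precision—the precision NVIDIA quotes for the chip’s peak performance—i.e. roughly 0.5 bytes per parameter. A rough sanity check (the 0.5-bytes-per-parameter figure is our assumption; real deployments also need headroom for the KV cache and activations):

```python
# Approximate weight footprint of large models at assumed FP4 storage
# (0.5 bytes per parameter), compared against Project DIGITS memory.
BYTES_PER_PARAM_FP4 = 0.5

def weight_gb(params_billions: float) -> float:
    """Approximate weight footprint in GB at 4-bit precision."""
    return params_billions * 1e9 * BYTES_PER_PARAM_FP4 / 1e9

print(f"200B model: ~{weight_gb(200):.0f} GB vs 128 GB on one unit")
print(f"405B model: ~{weight_gb(405):.1f} GB vs 256 GB on two linked units")
```

A 200-billion-parameter model at 4 bits needs about 100 GB for weights, fitting in one unit’s 128 GB; a 405-billion-parameter model needs about 202.5 GB, fitting in the 256 GB of two linked units.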
Grace Blackwell AI Supercomputing Within Reach
With the Grace Blackwell architecture, enterprises and researchers can prototype, fine-tune and test models on local Project DIGITS systems running Linux-based NVIDIA DGX OS, and then deploy them seamlessly on NVIDIA DGX Cloud, accelerated cloud instances or data center infrastructure.
This allows developers to prototype AI on Project DIGITS and then scale on cloud or data center infrastructure, using the same Grace Blackwell architecture and the NVIDIA AI Enterprise software platform.
Project DIGITS users can access an extensive library of NVIDIA AI software for experimentation and prototyping, including software development kits, orchestration tools, frameworks and models available in the NVIDIA NGC catalog and on the NVIDIA Developer portal. Developers can fine-tune models with the NVIDIA NeMo framework, accelerate data science with NVIDIA RAPIDS libraries and run common frameworks such as PyTorch, Python and Jupyter notebooks.
To build agentic AI applications, users can also harness NVIDIA Blueprints and NVIDIA NIM microservices, which are available for research, development and testing via the NVIDIA Developer Program. When AI applications are ready to move from experimentation to production environments, the NVIDIA AI Enterprise license provides enterprise-grade security, support and product releases of NVIDIA AI software.
Project DIGITS will be available in May 2025 from NVIDIA and top partners, starting at $3,000.
LAS VEGAS, NV, Jan 13, 2025 – Lenovo unveiled a lineup of AI-powered solutions at CES 2025, showcasing innovation across its commercial, gaming, and consumer segments. At the heart of Lenovo’s innovation is its Smarter AI for All vision, which integrates cutting-edge AI capabilities into its Copilot+ PCs and solutions. Leading this effort is Lenovo AI Now, a powerful on-device personalized AI assistant built on Meta’s Llama 3 model that offers natural language processing for tasks like document summarization, knowledge base retrieval, and workflow assistance. Additionally, Legion Space, Lenovo’s unified gaming platform, integrates AI to provide features like personalized gameplay analytics, content creation tools, and seamless device synchronization.
Redefining Business Productivity with AI-Powered ThinkPad and ThinkBook Solutions
Lenovo introduced its latest business-focused Copilot+ PC devices, tailored for modern workplaces and professionals:
ThinkPad X9 Aura Editions: The ThinkPad X9 14” and 15” Aura Edition laptops offer top performance with Intel Core Ultra processors, Lenovo AI Now for automation, and premium OLED displays, ideal for hybrid professionals.
ThinkPad X9 14 Aura Edition
ThinkBook Plus Gen 6 Rollable: AI-enabled laptop with rollable display that expands vertically from 14” to 16.7”. It redefines productivity with split-screen functionality, virtual display options, and powerful Intel Core Ultra processors.
ThinkBook Plus Gen 6 Rollable
Lenovo also introduced new ThinkCentre desktop solutions:
ThinkCentre M90a Pro Gen 6: An AI-enabled desktop with QHD display, AI-powered noise suppression, and directional audio for virtual meetings.
ThinkCentre neo 50q QC: The world’s first Snapdragon X Series powered commercial desktop, offering AI-driven workflows in a tiny 1-liter form factor.
ThinkCentre neo Ultra Gen 2: The upgraded powerhouse has Intel Core Ultra processors and Thunderbolt 4 ports.
Additional Commercial Announcements
Lenovo further expanded its ecosystem with advanced monitors, accessories, and software designed to enhance productivity and connectivity:
ThinkVision P Series Monitors: Monitors featuring high dynamic color accuracy, AI-powered energy efficiency, and modern designs with 95% post-consumer recycled plastics.
Self-Charging Bluetooth Keyboard: A battery-free keyboard with ambient light power, customizable tilt legs, media controls, and real-time power tracking.
ThinkPad X9 Accessories: Designed for professionals on the go, including:
Lenovo X9 Charging GaN Dock, a compact hub offering versatile connectivity with HDMI 2.1, USB-C, and SD card support.
Multi-Device Wireless Mouse (X9 Edition), connecting three devices with customizable AI Now trigger button.
Lenovo TWS Earbuds (X9 Edition) with audio, auto-switching, and noise cancellation.
Origami X9 Sleeves, protective covers that double as workstation stands, made from recycled materials.
QHD and 4K Pro Webcams: Microsoft Teams-certified webcams with AI-enhanced features like participant tracking, lighting adjustments, and real-time image optimization.
This expanded portfolio showcases Lenovo’s leadership in delivering AI-powered, sustainable, and highly adaptable solutions that address the evolving needs of today’s businesses.
Expanding the Lenovo Legion Gaming Portfolio: Game without Limits
Lenovo Legion unveiled a comprehensive lineup of next-generation gaming devices, accessories, and software designed to deliver cutting-edge performance, innovation, and adaptability for all levels of gamers:
Legion Go S (8”, 1) and Legion Go S (8”, 1) – Powered by SteamOS: Gaming handhelds powered by AMD processors, available in Windows and SteamOS versions.
Legion Go S (8”, 1)
Legion Go (8.8”, 2) Prototype: A gaming device featuring OLED PureSight display, AMD Ryzen processors, and enhanced ergonomics.
Legion Pro Series Laptops: The redesigned Legion Pro 7i, Pro 5i, and Pro 5 feature Intel Core Ultra or AMD Ryzen processors, NVIDIA GeForce RTX GPUs, and Coldfront Hyper cooling systems for competitive performance, dynamic visuals, and smooth framerates.
Legion 7i and Legion 5 Series: Portable gaming laptops with PureSight displays, sleek designs, and superior cooling, ideal for gamers handling STEM and academic tasks.
Legion Pro 7i (16”, 10)
Lenovo also introduced advanced gaming desktops, monitors, LOQ systems, and the latest Legion Tab:
Legion Tower Series: The Legion Tower 7i, Tower 5i, and Tower 5 offer upgradable components, Intel Core Ultra or AMD Ryzen processors, and NVIDIA GeForce RTX GPUs.
Legion Pro 34WD-10 Gaming Monitor: A curved 800R PureSight OLED gaming monitor featuring AI-driven burn-in protection and high-performance visuals with a 240Hz refresh rate and one-cable USB Type-C solution.
Legion R34w-30 Monitor: A versatile ultrawide 34” display with a 1500R curve, 180Hz refresh rate, and vibrant color accuracy.
Lenovo LOQ portfolio: Updated Lenovo LOQ devices are available in either Intel or AMD configurations with NVIDIA GeForce RTX GPUs.
Lenovo Legion Tab (8.8”, 3): A portable gaming device with an 8.8” 2.5K 165Hz PureSight touch display, a Snapdragon 8 Gen 3 processor supporting ray tracing and 165 FPS, 12GB RAM, and 256GB storage. It features a Legion Coldfront Vapor Chamber for efficient cooling and a 6550mAh battery for extended sessions.
Legion Space software was fully redesigned, providing gamers with enhanced ecosystem control and AI-powered tools:
Game Coach: Tracks user inputs and delivers personalized recommendations to improve reaction time, accuracy, and spatial awareness.
Game Clip Master: Uses generative AI to edit hours of gameplay into shareable, social-ready highlights.
Game Companion: A customizable AI avatar that reacts to gameplay, offering encouragement and tips in real time.
Lenovo Legion Accessories round out the ecosystem, delivering performance and convenience:
Legion Go S Screen Protector: Durable 9H glass that protects the handheld device without compromising touch sensitivity.
Legion Sling Bag: Water-resistant and adjustable, designed to carry Legion Go devices and accessories on the go.
Legion Glasses 2: Updated lightweight wearable virtual screen glasses with brighter visuals, wider color gamut, and up to 120Hz refresh rate for private, immersive gaming experiences.
Legion H410 Wireless Gaming Headset: High-fidelity sound, dual connectivity (2.4GHz and Bluetooth 5.2), and a flip-to-mute microphone for seamless, high-performance audio.
With this expanded lineup of devices, monitors, accessories, and software innovations, Lenovo Legion continues to redefine what’s possible in gaming, empowering players of all levels to game their way—whether they’re at home, in the classroom, or on the move.
Empowering Creativity and Productivity with AI-Enabled Lenovo Yoga, IdeaPad, and IdeaCentre Devices.
Lenovo unveiled AI-driven Yoga, IdeaPad, and IdeaCentre devices designed to transform creativity, productivity, and entertainment experiences:
Yoga Slim 9i (14”, 10): The world’s first laptop with camera-under-display (CUD) technology, delivering a stunning 98% screen-to-body ratio and a 4K PureSight Pro OLED display. This Copilot+ PC, powered by Intel Core Ultra processors, delivers up to 17 hours of battery life.
Yoga Slim 9i (14”, 10)
Yoga Book 9i (14”, 10): A dual-screen OLED laptop that serves as a creative studio, with AI tools for note-taking and book synopses and Air Gestures for multitasking.
Yoga 9i 2-in-1 Aura Edition (14”, 10): The convertible Copilot+ PC has Intel Core Ultra processors, Lenovo Aura Edition features, OLED visuals, and a flexible 360-degree design for creators.
Yoga Tab Plus: Lenovo’s first on-device AI tablet, featuring Snapdragon 8 Gen 3 processor, and AI tools like AI Note and AI Transcript.
Lenovo also introduced more new Yoga and IdeaPad laptops for versatile users:
Yoga Slim 7i Aura Edition: AI-powered canvas with a 2.8K OLED display, Lenovo Aura Edition features, including Smart Modes for personalized productivity.
Yoga 7i 2-in-1 Series: Convertible laptops featuring OLED PureSight displays and versatile form factors.
IdeaPad Pro 5i (16”, 10): A laptop featuring Intel Core Ultra processors, NVIDIA GeForce RTX graphics with up to 135W TDP, and an OLED display.
IdeaCentre Desktops deliver compact performance and AI-driven features for creative and home users:
IdeaCentre Mini x (1L, 10): The world’s first consumer desktop powered by Snapdragon X Series processors.
IdeaCentre Tower (17L, 10): A customizable desktop with Intel Core Ultra processors and NVIDIA GeForce RTX graphics.
Lenovo further expanded its tablet lineup to empower students, creatives, and families:
Idea Tab Pro: A tablet with Circle to Search and Google Gemini, 3K display, and boosted performance by MediaTek Dimensity 8300.
Lenovo Tab: A 10.1” tablet with customizable cases.
A Vision for the Future: AI-Driven Innovation and Concept Devices
Lenovo showcased visionary AI concepts that highlight the future of smart, adaptive technology:
AI Display: AI improves user interaction through screen rotation, elevation, tilt, posture alerts, and voice control.
AI Travel Set: AI-enabled wearables that track activity, offer productivity insights, and provide real-time language translation for travelers.
Lenovo Action Assistant: An AI assistant that uses an Action Model to automate tasks through language processing.
AdaptX Mouse: A multi-purpose mouse that becomes a tool, hub, or power bank, offering practicality and portability.
AI Headphones: High-tech headphones for transparent, effective communication.
Lenovo is committed to blending AI technology and user-centric design to create solutions that redefine productivity, collaboration, and adaptability.
LAS VEGAS, NV, Jan 10, 2025 – Pimax has launched its Crystal Super headset at CES 2025, giving global media the opportunity to experience the 29 million-pixel VR device for the first time.
The World Meets the Crystal Super
The Crystal Super is Pimax’s latest flagship headset, featuring a resolution of 3840×3840 per eye, making it the first VR headset to reach retina-level clarity. Glass aspheric lenses provide 57 PPD across a horizontal field of view of over 120°, with a brightness of 280 nits.
Pimax brought the Crystal Super DVT2 model (Design Validation Test #2), with all features working except eye-tracking, and final optimizations still to be made. International media outlets such as Linus Tech Tips, CNN, TechRadar, PC Gamer, PC Mag, Heise and Gaming Nexus have been among the first to test these innovations, making remarks on the new clarity, brightness and wide field of view of the new headset.
The Crystal Super comes with interchangeable optical engines, meaning the entire panel-and-lens assembly can be swapped out within seconds. The 57 PPD QLED engine (demoed at CES) is shipping soon, with the 50 PPD QLED and micro-OLED engines also shipping in Q1 2025.
Partners & Other Headsets
Joining the Crystal Super’s debut was the streamlined Pimax Crystal Light, a more accessible high-end PCVR headset that is especially popular among flight and racing simmers. It was demoed in such scenarios by partners including Next Level Racing, MOZA Racing, Podium1, and Apevie, as well as in other use cases by Moxi (sports training), CAD FORGE, and NEED Immersive Reality (professional design).
Completing the Pimax lineup at CES 2025 is the 60G Airlink, which enables ultra-low-latency wireless PCVR with the original Pimax Crystal. The 60G Airlink runs at a 90 Hz refresh rate, letting users enjoy complete freedom without compromising performance or visual fidelity.
“It’s always exciting to be at CES to meet users and see them use our headsets, and this year especially, being the first time we publicly demoed the Crystal Super,” said Robin Weng, founder of Pimax. “The positive reactions from attendees affirm our dedication to pushing VR technology to new heights.”
LAS VEGAS, NV (CES), Jan 7, 2025 – NVIDIA announced generative AI models and blueprints that expand NVIDIA Omniverse integration further into physical AI applications such as robotics, autonomous vehicles and vision AI. Global leaders in software development and professional services are using Omniverse to develop new products and services that will accelerate the next era of industrial AI.
Accenture, Altair, Ansys, Cadence, Foretellix, Microsoft and Neural Concept are among the first to integrate Omniverse into their next-generation software products and professional services. Siemens, a leader in industrial automation, announced today at the CES trade show the availability of Teamcenter Digital Reality Viewer — the first Siemens Xcelerator application powered by NVIDIA Omniverse libraries.
“Physical AI will revolutionize the $50 trillion manufacturing and logistics industries. Everything that moves — from cars and trucks to factories and warehouses — will be robotic and embodied by AI,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s Omniverse digital twin operating system and Cosmos physical AI serve as the foundational libraries for digitalizing the world’s physical industries.”
New Models and Frameworks Accelerate World Building for Physical AI
Creating 3D worlds for physical AI simulation requires three steps: world building, labeling the world with physical attributes and making it photoreal.
NVIDIA offers generative AI models that accelerate each step. The USD Code and USD Search NVIDIA NIM microservices are now available, letting developers use text prompts to generate or search for OpenUSD assets. A new NVIDIA Edify SimReady generative AI model can automatically label existing 3D assets with attributes like physics or materials, enabling developers to process 1,000 3D objects in minutes rather than the 40-plus hours it would take manually.
NVIDIA Omniverse, paired with new NVIDIA Cosmos world foundation models, creates a synthetic data multiplication engine — letting developers easily generate massive amounts of controllable, photoreal synthetic data. Developers can compose 3D scenarios in Omniverse and render images or videos as outputs. These can then be used with text prompts to condition Cosmos models to generate countless synthetic virtual environments for physical AI training.
NVIDIA Omniverse Blueprints Speed Up Industrial, Robotic Workflows
During the CES keynote, NVIDIA also announced four new blueprints that make it easier for developers to build Universal Scene Description (OpenUSD)-based Omniverse digital twins for physical AI. The blueprints include:
Mega, powered by Omniverse Sensor RTX APIs, for developing and testing robot fleets at scale in an industrial factory or warehouse digital twin before deployment in real-world facilities.
Autonomous Vehicle (AV) Simulation, also powered by Omniverse Sensor RTX APIs, that lets AV developers replay driving data, generate new ground-truth data and perform closed-loop testing to accelerate their development pipelines.
Omniverse Spatial Streaming to Apple Vision Pro that helps developers create applications for immersive streaming of large-scale industrial digital twins to Apple Vision Pro.
Real-Time Digital Twins for CAE, a reference workflow built on NVIDIA CUDA-X acceleration, physics AI and Omniverse libraries that enables real-time physics visualization.
Market Leaders Supercharge Industrial AI Using NVIDIA Omniverse
Building on its adoption of Omniverse libraries in its Reality Digital Twin data center digital twin platform, Cadence, a leader in electronic systems design, announced further integration of Omniverse into Allegro, its leading electronic computer-aided design application used by the world’s largest semiconductor companies.
Altair, a leader in computational intelligence, is adopting the Omniverse blueprint for real-time CAE digital twins for interactive computational fluid dynamics (CFD). Ansys is adopting Omniverse into Ansys Fluent, a leading CAE application. Neural Concept is integrating Omniverse libraries into its next-generation software products, enabling real-time CFD and enhancing engineering workflows.
Accenture, a leading global professional services company, is using Mega to help German supply chain solutions leader KION build next-generation autonomous warehouses and robotic fleets for its network of global warehousing and distribution customers.
AV toolchain provider Foretellix, a leader in data-driven autonomy development, is using the AV simulation blueprint to enable full 3D sensor simulation for optimized AV testing and validation. Research organization MITRE is also deploying the blueprint, in collaboration with the University of Michigan’s Mcity testing facility, to create an industry-wide AV validation platform.
Katana Studio is using the Omniverse spatial streaming workflow to create custom car configurators for Nissan and Volkswagen, allowing them to design and review car models in an immersive experience while improving the customer decision-making process.
Innoactive, an XR streaming platform for enterprises, used the workflow to add platform support for spatial streaming to Apple Vision Pro. The solution enables Volkswagen Group to conduct design and engineering project reviews at human-eye resolution. Innoactive also collaborated with Syntegon, a provider of processing and packaging technology solutions for pharmaceutical production, to enable Syntegon’s customers to walk through and review digital twins of custom installations before they are built.
How long before a million monkeys at typewriters could produce the works of Shakespeare? There’s not enough time in the world, concluded researchers in “A Numerical Evaluation of the Finite Monkey Theorem,” published earlier this month in Franklin Open, a peer-reviewed journal of Philadelphia’s Franklin Institute. The researchers calculated the odds of the monkeys typing the entire works of Shakespeare (approximately 885,000 words) before the heat death of the universe (10^100 years off, don’t worry) as about 6.4 × 10^-7,448,254.
That is one chance in a number with over 7 million zeros.
This has, of course, some really big IFs. IF the monkeys survive. IF their population remains constant. IF their bananas are plentiful until their demise. IF they can keep pressing keys at the rate of one every second. IF they never need rest.
Clearly, monkeys would not be a good fit for this job. A fast human typist could increase the probability by a couple of orders of magnitude. Still not likely, though.
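For a sense of where that astronomical exponent comes from, here is a back-of-the-envelope sketch in Python. The word count comes from the article; the characters-per-word and keyboard-size figures are assumptions chosen for illustration, not the paper's exact parameters, so the result only lands in the same ballpark as the published 6.4 × 10^-7,448,254 figure.

```python
import math

# Rough back-of-the-envelope version of the Finite Monkey calculation.
# WORDS is from the article; the other two values are assumptions.
WORDS = 885_000          # approximate word count of Shakespeare's works
CHARS_PER_WORD = 6       # assumed average, including the trailing space
KEYS = 32                # assumed number of typewriter keys, each equally likely

n_chars = WORDS * CHARS_PER_WORD
# Probability of typing every character correctly in one attempt is
# (1/KEYS)^n_chars. Work in log10 space, since the number itself is far
# too small for any float to represent.
log10_p = -n_chars * math.log10(KEYS)
print(f"log10(probability) is roughly {log10_p:,.0f}")
```

Working in log space is the only practical option here: the probability itself underflows to zero in ordinary floating point, but its base-10 exponent (a negative number in the millions) is easy to compute exactly.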
Enter the Quantum Computer
But what if the flesh and blood monkey was replaced by its digital twin and was powered by a quantum computer? How would that change things? (Skip ahead for the answer).
In what may be a relief to playwrights everywhere, Google’s quantum computer is not yet ready to create the next big play. In fact, according to the New York Times, it may be useful for only a few esoteric tasks, none of them of practical use such as drug discovery or code-cracking.
A fanciful depiction of the Google chip. Image: Google.
On Monday, December 9, Google unveiled Willow, a “state of the art” quantum chip, and claimed the lead in the race to create a quantum computer. The chip has 105 qubits, or quantum bits.
On hand was Julian Kelly, Director of Quantum Hardware at Google Quantum AI. Kelly has a PhD in physics from UC Santa Barbara and has worked at Google for the last nine years.
Surface code logical qubits of increasing sizes, each able to correct more errors than the last. The encoded quantum state is stored on the array of data qubits (gold). Measure qubits (red, cyan, blue) check for errors on the neighboring data qubits. Image and caption: Google
Kelly explains that the advances made are less in the hardware and more in the error correction. Quantum computing, because of the temperamental nature of qubits, is quite error-prone, with bits flipping from 0 to 1 or vice versa. But Google, in a paper published in the journal Nature, says it has found a way to drive errors down as more qubits are added to the Willow chip.
The error correction appears to be Google’s real breakthrough. With a sophisticated error-checking scheme, each qubit does a parity check on its neighbors.
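A full surface code is well beyond a short snippet, but the parity-check idea can be illustrated with its classical ancestor, the 3-bit repetition code: redundant copies of a bit play the role of data qubits, and pairwise parity checks on neighbors locate an error without reading the stored value directly. This is only a conceptual sketch, not how Willow's decoder actually works.

```python
# Classical 3-bit repetition code: a conceptual sketch of the parity-check
# idea behind quantum error correction. Real surface codes are far more
# involved; this only illustrates the principle.

def encode(bit):
    """Store one logical bit redundantly across three 'data' bits."""
    return [bit, bit, bit]

def syndrome(codeword):
    """Two parity checks on neighboring bits, analogous to measure
    qubits checking their neighboring data qubits."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    """Use the syndrome to locate and undo a single bit-flip error."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

word = encode(1)
word[1] ^= 1                 # simulate a single bit-flip error
print(correct(word))         # recovers [1, 1, 1]
```

The key property, shared with quantum codes, is that the parity checks reveal where an error occurred while saying nothing about the encoded value itself.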
Google claims it can run 10 billion cycles of error correction without seeing an error. However, it admits to having a long way to go before quantum computing can achieve the trillions of error-free cycles needed to solve “tomorrow’s problems.” Though the team expresses confidence they can get past the error threshold, as it did with Willow’s predecessor (Sycamore), they are aware they may encounter physics that is not yet understood.
The Answer
Even a 10^10 speed improvement, as quantum computing promises, is insufficient to solve the Finite Monkey Problem. As pointed out above, the likelihood of monkeys generating the works of Shakespeare is on the order of one in 10^7,000,000. However, there are certainly more practical problems that quantum computing could help with than duplicating Shakespeare’s plays. May we suggest creating the play Shakespeare would have written next, or a play written for our time?