Tag: AIInfrastructure

  • Lenovo Unveils ThinkSystem V4 Servers with Intel Xeon 6 Processor

    Image: Lenovo

    RESEARCH TRIANGLE PARK, NC, Feb 25, 2025 – Lenovo announced three new infrastructure solutions powered by Intel Xeon 6 processors, designed to modernize data centers of any size into AI-enabled powerhouses. The solutions include next-generation Lenovo ThinkSystem V4 servers that deliver advanced performance and versatility for any workload while enabling AI capabilities in compact, high-density designs. Whether an organization is computing at the edge, co-locating, or adopting a hybrid cloud, Lenovo provides solutions that efficiently bring AI wherever it is needed.

    ThinkSystem SR630 V4 Servers. Image: Lenovo

    The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive – from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs [1] with Intel Xeon 6 with P-cores, and up to 2x the memory bandwidth [2] when using new MRDIMM technology to scale and accelerate AI everywhere.

    ThinkSystem SR650 V4 Servers. Image: Lenovo

    The systems are designed to address critical power limitations while delivering the performance needed for demanding AI tasks. With ongoing improvements in Lenovo Neptune liquid cooling, Lenovo is creating an approach to data center design that reshapes how power is utilized in IT. Lenovo Neptune water cooling boosts thermal efficiency by 3.5x, reducing the power consumed to cool IT equipment while increasing its processing power [3]. With increased density and efficiency, the new Lenovo ThinkSystem V4 servers with Intel Xeon 6 with E-cores enable up to 3:1 rack consolidation of 5-year-old infrastructure, freeing up space and power for new AI projects [4].
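
    As a rough arithmetic sketch of the 3:1 consolidation claim above (the rack count and per-rack power figure here are hypothetical assumptions for illustration, not Lenovo data), consolidating frees two-thirds of the racks along with their power budget:

```python
# Rough illustration of the 3:1 consolidation claim.
# Hypothetical assumptions (not Lenovo data): 30 legacy racks,
# 12 kW average draw per rack.
legacy_racks = 30
consolidation_ratio = 3            # 3:1, per the press release
power_per_rack_kw = 12.0           # assumed average per-rack draw

new_racks = legacy_racks // consolidation_ratio
freed_racks = legacy_racks - new_racks
freed_power_kw = freed_racks * power_per_rack_kw

print(new_racks, freed_racks, freed_power_kw)  # 10 20 240.0
```

    Under these assumed figures, 20 racks and 240 kW of capacity become available for new AI projects.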

    “Lenovo is reimagining what’s possible in the data center by delivering intelligent and versatile infrastructure solutions that simplify and accelerate IT modernization,” said Scott Tease, vice president of Lenovo Infrastructure Solutions Group, products. “The new Lenovo ThinkSystem V4 servers represent the next generation of performance and innovation, achieving higher compute with less energy consumption and delivering AI-powered management that empowers businesses with fast and protected AI deployment across any environment. With Intel, we’re enabling our customers to scale smarter and evolve faster to achieve AI-powered transformation.”

    Flexibility and Performance without Compromise

    Lenovo ThinkSystem V4 servers equipped with Intel Xeon 6 processors and P-cores offer superior performance and productivity. They are designed to handle demanding AI challenges across various compute-intensive tasks. The servers are built for reliability and can adapt to different settings, including colocation services. This is essential for organizations needing high-performance private AI without the necessary data center space or liquid cooling infrastructure. Enhanced security features safeguard data effectively, regardless of its location. A new locking bezel option secures physical assets in remote environments.

    The new Lenovo ThinkSystem V4 servers include:

    • SR630 V4: a high-density, space-efficient data center powerhouse designed to optimize performance in compact environments while delivering exceptional computing power and scalability for demanding workloads. The super-dense 1U system is ideal for cloud service providers (CSPs), telcos, and fintech operations, enabling them to manage real-time transactions that require low latency and high throughput within limited floor space.
    • SR650 V4: delivers versatility for any workload, with up to 25% more single-wide (SW) GPU capacity for up to 2x computation performance in a compact 2U form factor [5]. It is ideal for engineering, modeling and simulation, and AI, offering rapid time to value with up to 50% cost savings for GPU-intensive workloads.
    • SR650a V4: purpose-built to deliver AI power in a dense package that enables GPU-intensive workloads like machine learning, virtual desktop infrastructure (VDI), and media analytics. The 2U2S platform supports up to four double-wide GPUs with front GPU access, ensuring unmatched performance and ease of maintenance without compromising memory capacity. This server is ideal for organizations looking to drive AI innovation in a dense, efficient form factor.

    ThinkSystem SR650a V4 Servers. Image: Lenovo

    The Lenovo ThinkSystem V4 solutions also extend Lenovo Neptune liquid cooling from the CPU to the memory with the new Neptune Core Compute Complex Module, supporting faster workloads with reduced fan speeds, quieter operation, and lower power consumption. Fans can consume as much as 18% of the power used by servers. The new Neptune module is precisely engineered to reduce airflow requirements, yielding lower fan speeds and power consumption while keeping components cooler for improved system health and lifespan. The module also expands cooling to four SW GPUs in the new ThinkSystem SR650a V4.
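
    To see why lower fan speeds matter so much against the 18% figure above, note that fan power scales roughly with the cube of fan speed (the fan affinity laws). A minimal sketch, where the 70% speed ratio is an assumed value for illustration, not a Lenovo figure:

```python
# Fan affinity sketch: fan power scales roughly with the cube of fan speed.
# The 18% fan share comes from the release; the 70% speed ratio is an
# assumption for illustration, not a Lenovo figure.
fan_share = 0.18                   # fans' share of total server power
speed_ratio = 0.70                 # assumed reduced fan speed (70% of max)

new_fan_share = fan_share * speed_ratio ** 3
savings = fan_share - new_fan_share

print(round(new_fan_share, 4), round(savings, 4))  # 0.0617 0.1183
```

    Because of the cubic relationship, even a modest speed reduction cuts the fans' power share from 18% to roughly 6% in this example.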

    Neptune Core Compute Complex Module. Image: Lenovo

    AI-Powered Management for Smarter AI Everywhere

    Businesses can deploy complex AI applications with integrated Lenovo systems management that saves time and resources across the ThinkSystem V4 portfolio. As a central command, XClarity One provides an insightful user interface that ensures fast, efficient, and protected deployment from one system to a thousand. New integrated enterprise remote control enables remote access and management of enterprise infrastructure no matter where it exists. Additionally, new AI-powered analytics in XClarity One offer server SSD predictive failure analysis (PFA), helping to eliminate downtime by identifying potential drive problems before they cause failures. Finally, XClarity One now offers a Federated Directory that centrally manages system access across multiple applications through a unified registry and account.

    Lenovo continues to redefine innovation with adaptable and responsible AI solutions that make AI accessible to everyone. Their technology, from edge computing to data centers, helps organizations discover new opportunities, enhance operations, and keep pace in an AI-driven world.

    [1] Up to 6.1x higher performance for compute-intensive workloads such as HPC, AI, and database vs. 2nd Gen Intel Xeon CPUs. See [9G10, 9H10, 9A210] at intel.com/processorclaims: Intel Xeon 6. Results may vary.

    [2] Comparing the SR650a V4 to the SR650 V4: the SR650a V4 with 4x H100 GPUs vs. the SR650 V4 with 2x H100 GPUs provides 2x AI computation performance.

    [3] Lenovo internal data (ESG deck).

    [4] See [7T1] at intel.com/processorclaims: Intel Xeon 6. Results may vary.

    [5] Lenovo internal data, SR650 V4, based on internal benchmark testing.

    Source: Lenovo

    About Lenovo

    Lenovo Group Limited, founded in 1984, is a multinational tech company specializing in designing, manufacturing, and marketing consumer electronics, personal computers, software, servers, and related services. Serving industries such as education, healthcare, retail, and manufacturing, Lenovo offers a diverse product portfolio that includes laptops, desktops, tablets, smartphones, workstations, servers, storage devices, and accessories. The company markets its products under renowned brands like ThinkPad, ThinkBook, IdeaPad, Yoga, Motorola, and Legion. Headquartered in Beijing, China, and Morrisville, NC, Lenovo operates in over 60 countries and sells its products in approximately 160 countries worldwide. In the fiscal year ending March 2023, Lenovo reported revenues of $61.9 billion, maintaining its position as a global leader in the personal computer market with a 24.4% market share.

  • KULR, EDOM Tech Announce AI Supply Chain Collaboration

    HOUSTON, TX, Feb 4, 2025 – KULR Technology Group, Inc. announced its partnership with EDOM Technology, a long-standing NVIDIA Channel Partner and an integration and distribution company. This collaboration positions KULR to deliver its KULR Xero Vibe (KXV) and KULR ONE product lines to Taiwan, a global epicenter of AI supply chain development, leveraging its suite of energy management products and solutions to address the need for large-scale systems cooling within the AI ecosystem.

    Image: KULR Technology Group, Inc.

    The partnership will enable KULR to service both server and edge computing devices within the AI supply chain while deploying its suite of energy management products and solutions to meet the needs of the entire AI ecosystem. By aligning with EDOM, KULR is positioning itself to address the global surge in demand for AI infrastructure, fueled by initiatives like the Stargate Project, a recent $500 billion push to accelerate AI infrastructure expansion in the United States.

    “Our partnership with EDOM underscores our commitment to scaling our AI solutions to meet the growing demands of the industry,” said Michael Mo, CEO of KULR Technology Group. “EDOM’s deep-rooted relationship with NVIDIA and extensive expertise in the AI supply chain make them an ideal partner to integrate and distribute our technologies, such as the KXV and KULR ONE, across the region.”

    Taiwan plays a pivotal role in the global AI supply chain, driving advancements that shape the future of AI infrastructure. Highlighting this prominence, Bloomberg featured Taiwan’s importance in the AI ecosystem. With EDOM as a key partner, KULR plans to grow its AI business across Taiwan and the broader Asian market by tapping into EDOM’s market knowledge.

    In recent months, the company has made progress advancing its infrastructure buildout to support the AI ecosystem, including:

    KXV Licensing Partnership for Data Center Cooling: KULR secured a licensing agreement with a Japanese company specializing in systems integration and semiconductor solutions. Their KXV technology will help optimize large-scale fan systems for data center cooling, HVAC, and other industrial applications.

    KXV with NVIDIA Jetson: KULR introduced KXV, integrated with NVIDIA Jetson, to improve vibration control for edge AI systems. This integration combines strong vibration management with AI capabilities, ensuring high performance and reliable operation in edge AI environments.

    Carbon Fiber Cathode Licensing Agreement in Nuclear Reactor Systems: KULR has signed a licensing agreement with a technology partner in Japan for advanced carbon fiber cathode use in nuclear reactor systems. The license focuses on supporting laser-based nuclear fusion systems and small modular reactors (SMRs), offering a cost-effective and reliable method for producing fusion energy with high-powered lasers. According to Goldman Sachs Research, nuclear power will be a key part of a suite of new energy infrastructure built to meet data-center power demand driven by AI.

    Mo concluded, “With our shared focus on innovation and a commitment to driving progress, this collaboration with EDOM empowers us to deliver cutting-edge technologies, from thermal management solutions to AI-optimized products like the Jetson AI platform, to the rapidly expanding AI supply chain.”

    KULR and EDOM are focused on working together to advance AI and energy management, creating a supply chain ecosystem that supports future AI technologies.

    About KULR Technology Group Inc.

    KULR Technology Group Inc., founded in 2013 and headquartered in San Diego, CA, specializes in developing and commercializing high-performance thermal management technologies for electronics, batteries, and other components. The company’s products, which include lithium-ion battery thermal runaway shields, automated battery cell screening systems, and fiber thermal interface materials, serve industries including space, aerospace, defense, electric vehicles, energy storage, battery recycling, transportation, cloud computing, and 5G communication devices. As of 2023, KULR employs approximately 60 individuals. In 2023, the company reported revenues of $10 million.

    About EDOM Technology

    EDOM Technology Co., Ltd., established in 1996 and headquartered in Taipei, Taiwan, is a distributor of integrated circuits (ICs) and electronic components. The company offers a diverse range of products, including analog ICs, batteries, connectors, embedded modules, memory, microcomponents, sensors, and optoelectronics. The products serve various industries such as automotive, computing, consumer electronics, Internet of Things (IoT), medical, mobile, networking and data centers. With over 800 employees worldwide, EDOM operates 32 offices across Greater China, Southeast Asia, Japan, Korea, and India. In 2023, the company reported revenues of approximately US$3.43 billion.

    Source: KULR Technology Group, Inc.

  • ionstream.ai Expands Cloud GPU Platform with NVIDIA L40S

    HOUSTON, TX, Jan 16, 2025 – ionstream.ai has announced the immediate availability of NVIDIA L40S GPUs on its GPU as a Service (GaaS) platform. This strategic expansion provides organizations with a cost-effective solution optimized for AI inference and fine-tuning tasks, offering an alternative to larger, more expensive GPU options.

    Source: ionstream

    “Organizations are looking for right-sized GPU solutions that match their specific AI workloads,” said Jeff Hinkle, chief executive officer at ionstream.ai. “The addition of the NVIDIA L40S to our cloud platform provides enterprises with the ideal infrastructure for inference and model refinement tasks, delivering the perfect balance of performance and cost-efficiency.”

    Enterprise-Grade AI Infrastructure, On Demand

    The NVIDIA L40S GPU, powered by the Ada Lovelace architecture, represents a breakthrough in AI infrastructure accessibility. ionstream.ai’s implementation delivers:

    • Advanced AI Capabilities:
      • Optimized for AI inference and fine-tuning workflows
      • Ideal for production-scale model deployment
      • Cost-effective alternative to H100 and H200 GPUs for inference tasks
      • Multi-user support for enterprise workloads
    • Revolutionary Cost Economics:
      • Right-sized infrastructure for inference workloads
      • Improved energy efficiency for sustainable operations
      • Zero upfront capital expenditure
      • Pay-as-you-go pricing with per-minute billing
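
    The per-minute billing model above can be sketched with simple arithmetic. The dollar rate and the round-up-to-the-minute behavior here are hypothetical assumptions; ionstream.ai’s actual pricing is not stated in the announcement:

```python
# Sketch of per-minute, pay-as-you-go GPU billing. The $/minute rate and
# the round-up assumption are hypothetical; ionstream.ai's actual pricing
# is not stated in the release.
import math

rate_per_minute = 0.02             # assumed $ per minute for one instance
job_seconds = 3750                 # a 62.5-minute fine-tuning job

billed_minutes = math.ceil(job_seconds / 60)   # assume rounding up
cost = billed_minutes * rate_per_minute

print(billed_minutes, round(cost, 2))  # 63 1.26
```

    The appeal of per-minute granularity is that short inference or fine-tuning runs pay only for the minutes consumed, with zero upfront capital expenditure.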

    Transforming Enterprise AI Capabilities

    The L40S platform enables efficient AI model deployment across industrial domains:

    • Oil & Gas Exploration: Process complex seismic data through high-performance computing capabilities, enabling rapid subsurface imaging and reservoir characterization. The L40S accelerates traditional seismic processing workflows while supporting emerging AI-enhanced interpretation methods, reducing time-to-insight for critical exploration decisions.
    • Healthcare & Life Sciences: Deploy medical imaging models and fine-tune diagnostic systems
    • Financial Services: Run real-time inference for fraud detection and risk analysis
    • Automotive & Manufacturing: Power production-ready computer vision applications

    Flexible Deployment Options Meet Enterprise Needs

    ionstream.ai’s platform offers deployment flexibility:

    • Instant Provisioning: Deploy L40S instances in under 60 seconds
    • Flexible Acquisition Options: Available for purchase or lease to meet varying business needs
    • Enterprise-Grade Infrastructure: Hosted in a data center in Spring, Texas designed to Tier 4 standards for optimal uptime
    • 24/7 Expert Support: Direct access to GPU infrastructure specialists

    Availability and Special Launch Offer

    The NVIDIA L40S is available on the ionstream.ai platform. For a limited time, new customers can receive:

    • Complimentary one-month proof of concept for qualified enterprises
    • Complimentary AI infrastructure optimization consultation