Tag: CloudComputing

  • Lenovo Unveils ThinkSystem V4 Servers with Intel Xeon 6 Processor


    Image: Lenovo

RESEARCH TRIANGLE PARK, NC, Feb 25, 2025 – Lenovo announced three new infrastructure solutions powered by Intel Xeon 6 processors, designed to modernize data centers of any size into AI-enabled powerhouses. The solutions include next-generation Lenovo ThinkSystem V4 servers that deliver advanced performance and versatility to handle any workload while enabling AI capabilities in compact, high-density designs. Whether an organization relies on edge computing, colocation, or a hybrid cloud, Lenovo provides solutions that efficiently bring AI and intelligence wherever they are needed.

    ThinkSystem SR630 V4 Servers. Image: Lenovo

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive – from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs1 using Intel Xeon 6 processors with P-cores, and up to 2x the memory bandwidth2 when using new MRDIMM technology, to scale and accelerate AI everywhere.

    ThinkSystem SR650 V4 Servers. Image: Lenovo

The systems are designed to address critical power limitations while enhancing the performance needed for demanding AI tasks. With ongoing improvements in Lenovo Neptune liquid cooling innovation, Lenovo is creating an effective approach to data center design that reshapes how power is utilized in IT. Lenovo Neptune water cooling boosts thermal efficiency by 3.5x, reducing power consumption used for cooling IT equipment while increasing its processing power3. With increased density and efficiency, the new Lenovo ThinkSystem V4 servers with Intel Xeon 6 with E-cores enable up to 3:1 rack consolidation of 5-year-old infrastructure, freeing up space and power for new AI projects4.

    “Lenovo is reimagining what’s possible in the data center by delivering intelligent and versatile infrastructure solutions that simplify and accelerate IT modernization,” said Scott Tease, vice president of Lenovo Infrastructure Solutions Group, products. “The new Lenovo ThinkSystem V4 servers represent the next generation of performance and innovation, achieving higher compute with less energy consumption and delivering AI-powered management that empowers businesses with fast and protected AI deployment across any environment. With Intel, we’re enabling our customers to scale smarter and evolve faster to achieve AI-powered transformation.”

    Flexibility and Performance without Compromise

    Lenovo ThinkSystem V4 servers equipped with Intel Xeon 6 processors and P-cores offer superior performance and productivity. They are designed to handle demanding AI challenges across various compute-intensive tasks. The servers are built for reliability and can adapt to different settings, including colocation services – essential for organizations that need high-performance private AI but lack the data center space or liquid cooling infrastructure. Enhanced security features safeguard data effectively, regardless of its location, and a new locking bezel option secures physical assets in remote environments.

    The new Lenovo ThinkSystem V4 servers include:

    • SR630 V4: a data center powerhouse that is high-density, space-efficient, and designed to optimize performance in compact environments while delivering exceptional computing power and scalability for demanding workloads. The super-dense 1U system is ideal for cloud service providers (CSPs), telcos, and fintech operations, enabling them to manage real-time transactions that require low latency and high-throughput performance with limited floor space.
    • SR650 V4: delivers versatility for any workload with up to 25% more single-wide (SW) GPU capacity for up to 2x computation performance in a compact 2U form factor5. It is ideal for engineering, modeling & simulation, and AI, with rapid time to value at up to 50% cost savings for GPU-intensive workloads.
    • SR650a V4: purpose-built to deliver AI power in a dense package that enables GPU-intensive workloads like machine learning, virtual desktop infrastructure (VDI), and media analytics. The 2U2S platform supports up to four double-wide GPUs with front GPU access, ensuring unmatched performance and ease of maintenance without compromising memory capacity. This server is ideal for organizations looking to drive AI innovation in a dense, efficient form factor.

    ThinkSystem SR650a V4 Servers. Image: Lenovo

    The Lenovo ThinkSystem V4 solutions also extend Lenovo Neptune liquid cooling from the CPU to the memory with the new Neptune Core Compute Complex Module, supporting faster workloads with reduced fan speeds, quieter operation, and lower power consumption. Fans can consume as much as 18% of the power used by servers. The new Neptune module is precisely engineered to reduce airflow requirements, yielding lower fan speeds and power consumption while keeping components cooler for improved system health and lifespan. The module also expands cooling to four single-wide (SW) GPUs in the new ThinkSystem SR650a.

    Neptune Core Compute Complex Module. Image: Lenovo

    AI-Powered Management for Smarter AI Everywhere

    Businesses can deploy complex AI applications with integrated Lenovo systems management that saves time and resources across the ThinkSystem V4 portfolio. As a central command, XClarity One provides an insightful user interface that ensures fast, efficient, and protected deployment from one system to a thousand. New integrated enterprise remote control enables remote access and management of enterprise infrastructure no matter where it exists. Additionally, new AI-powered analytics offer server SSD predictive failure analysis (PFA) on XClarity One, helping to eliminate downtime by identifying potential problems with drives before they fail. Finally, XClarity One now offers a Federated Directory that centrally manages system access across multiple applications through a unified registry and account.

    Lenovo continues to redefine innovation with adaptable and responsible AI solutions that make AI accessible to everyone. Their technology, from edge computing to data centers, helps organizations discover new opportunities, enhance operations, and keep pace in an AI-driven world.

    1 Up to 6.1x higher performance for compute-intensive workloads such as HPC, AI and database vs. 2nd Gen Intel Xeon CPUs. See 9G10, 9H10, 9A210 at intel.com/processorclaims: Intel Xeon 6. Results may vary.

    2 Comparing the SR650a V4 to the SR650 V4: the SR650a V4 with 4x H100 GPUs vs. the SR650 V4 with 2x H100 GPUs provides 2x AI computation performance.

    3 Lenovo internal data (ESG deck).

    4 See [7T1] at intel.com/processorclaims: Intel Xeon 6. Results may vary.

    5 Lenovo internal data, SR650 V4, based on internal benchmark testing.

    Source: Lenovo

    About Lenovo

    Lenovo Group Limited, founded in 1984, is a multinational tech company specializing in designing, manufacturing, and marketing consumer electronics, personal computers, software, servers, and related services. Serving industries such as education, healthcare, retail, and manufacturing, Lenovo offers a diverse product portfolio that includes laptops, desktops, tablets, smartphones, workstations, servers, storage devices, and accessories. The company markets its products under renowned brands like ThinkPad, ThinkBook, IdeaPad, Yoga, Motorola, and Legion. Headquartered in Beijing, China, and Morrisville, NC, Lenovo operates in over 60 countries and sells its products in approximately 160 countries worldwide. In the fiscal year ending March 2023, Lenovo reported revenues of $61.9 billion, maintaining its position as a global leader in the personal computer market with a 24.4% market share.

  • HPE Unveils ProLiant Servers with AI Enhancements


    HPE ProLiant Compute Gen12. Image: hpe.com

    HOUSTON, TX, Feb 13, 2025 – Hewlett Packard Enterprise has announced eight new HPE ProLiant Compute Gen12 servers, the latest additions to a new generation of enterprise servers that introduce industry-first security capabilities, optimize performance for complex workloads and boost productivity with management features enhanced by AI. The new servers will feature upcoming Intel Xeon 6 processors for data center and edge environments.

    “Our customers are tackling workloads that are overwhelmingly data-intensive and growing ever-more demanding,” said Krista Satterthwaite, senior vice president and general manager, Compute at HPE. “The new HPE ProLiant Compute Gen12 servers give organizations – spanning public sector, enterprise and vertical industries like finance, healthcare and more – the horsepower and management insights they need to thrive while balancing their sustainability goals and managing costs. This is a modern enterprise platform engineered for the hybrid world, designed with innovative security and control capabilities to help companies prevail over the evolving threat landscape and performance challenges that their legacy hardware cannot address.”

    Chip-to-Cloud and Full Lifecycle Security

    The HPE ProLiant Compute Gen12 portfolio sets a new standard for enterprise security with built-in safeguards at every layer – from the chip to the cloud – and every phase of the server lifecycle. HPE Integrated Lights Out (iLO) 7 introduces an enhanced, dedicated security processor called secure enclave that is engineered from the ground up as HPE intellectual property. HPE ProLiant Compute servers with HPE iLO 7 will help organizations safeguard against future threats as the first servers with quantum computing-resistant readiness, and the first designed to meet the requirements of a high-level cryptographic security standard, FIPS 140-3 Level 3 certification1.

    The chip-enhanced security features of HPE iLO 7 distinguish HPE ProLiant servers from other vendors. Embedded into the server hardware, secure enclave establishes an unbreakable chain of trust to protect against firmware attacks and creates full line-of-sight from the factory and throughout HPE’s trusted supply chain. This extends to the end of the product lifecycle with HPE Onsite Decommission Services, which collect equipment and transport it to an authorized sorting and recycling facility.

    AI-Driven Insights Improve Operations Management, Automation and Power Efficiency

    HPE Compute Ops Management is a cloud-based software platform that helps customers secure and automate server environments. Automating energy management with AI insights helps businesses improve energy efficiency. By predicting power usage, organizations can set limits and manage costs and carbon emissions globally. A new global map view makes it easier to manage systems, allowing users to spot server health problems across diverse IT setups. Integrating tools from different vendors reduces downtime by up to 4.8 hours per server each year2. Automated onboarding simplifies server setup and ongoing management, particularly in remote or branch-office deployments where local IT resources are not available.

    All new HPE Compute Ops Management features, including AI-informed insights, new map-based visibility and third-party tool integration, will be available on HPE ProLiant Compute Gen10 servers and newer.

    To aid customers evaluating future purchases, a standalone tool called HPE Power Advisor estimates environment performance metrics such as energy costs and greenhouse gas emissions.

    Servers Optimized for Performance, Energy Efficiency and Available with Direct Liquid Cooling

    New additions to the HPE ProLiant Compute Gen12 portfolio are right-sized to address demanding workloads that include AI, data analytics, edge computing, hybrid cloud and virtual desktop infrastructure (VDI) solutions. Addressing the exponential growth in power demands placed on data centers, the HPE ProLiant Compute Gen12 portfolio is engineered to optimize performance, energy efficiency and cost, with up to 41% better performance per watt compared to legacy enterprise systems3. HPE ProLiant Compute Gen12 servers deliver up to 65% power savings per year4 and enable organizations to free up data center capacity, with one Gen12 server providing the same compute performance as seven Gen10 servers5.

    “Partnering with reliable, innovative hardware vendors like HPE helps us meet the evolving needs of our clients and empower them with comprehensive, workload-optimized IT infrastructure solutions,” said William Bell, executive vice president, products at phoenixNAP. “We were the first customer in the world to order HPE ProLiant Compute Gen12 servers and the benefits of the upgrade were immediate. By delivering these advanced technologies as a service, phoenixNAP enables organizations of all sizes to tackle challenges related to performance, energy efficiency, data security, and infrastructure management at scale.”

    To meet customer demand for more energy efficient data centers, HPE is offering optional direct liquid cooling (DLC) on Intel-based HPE ProLiant Compute Gen12 one-socket and two-socket rack servers. Liquid removes heat more efficiently than air, carrying more than 3,000 times more heat by volume6. HPE has built the world’s fastest direct liquid-cooled supercomputers7, and with more than 300 DLC patents and over 50 years of experience, HPE is a leader in deploying direct liquid-cooled servers and data centers.

    Availability

    Six of the eight new HPE ProLiant Compute Gen12 servers featuring upcoming Intel Xeon 6 processors will be available Q1 2025. This includes HPE ProLiant Compute DL320, DL340, DL360, DL380, DL380a and ML350 Gen12 servers. HPE Synergy 480 and HPE ProLiant Compute DL580 Gen12 servers are expected Summer 2025.

    The HPE ProLiant Compute Gen12 portfolio will be available standalone or via HPE GreenLake, offering scalability, cost efficiency and service agility. These solutions can be purchased through an authorized channel partner. HPE Services helps customers make the most of the HPE ProLiant Compute Gen12 portfolio by providing advisory, professional, operational, managed, financial and asset management assistance to accelerate business operations.

    1 FIPS 140-3 Level 3 certification is a standard adopted by National Institute of Standards and Technology (NIST) and Commercial National Security Algorithm (CNSA) 2.0 to verify cryptographic modules.
    2 Forrester Consulting, New Technology: The Projected Total Economic Impact of HPE Compute Ops Management, commissioned by HPE (June 2024) 
    https://www.hpe.com/psnow/doc/a00141308enw
    3 Based on internal power and performance measurements comparing an 86-core HPE ProLiant Compute Gen12 server to a similarly configured Gen10 server.
    4 Reflects results posted on spec.org SPECrate2017_int_base: #20893, published as of 01-01-2025. The performance-per-watt advantage is based on internal power and performance measurements on similarly configured high-energy-efficiency servers, compared against an estimated 86-core Gen12 system. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation (SPEC).
    5 Reflects results posted on spec.org SPECrate2017_int_base: #20893, published as of 01-01-2025, comparing the estimated thermal design power of a 48-core HPE ProLiant Compute Gen12 server. SPEC and SPECrate are registered trademarks of the Standard Performance Evaluation Corporation (SPEC).
    6 A propylene glycol-based liquid coolant cools 3.94X the heat as compared to an equal mass of air and is 869.9X denser than air, making this liquid coolant capable of handling 3427.5X the heat of the same volume of air.
    7 Per the November 2024 TOP500 list of the world’s fastest supercomputers.
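    The 3427.5X volumetric figure in footnote 6 is simply the product of the two ratios the footnote cites; a quick sketch (the small gap to 3427.5 comes from rounding in the quoted inputs):

```python
# Back-of-envelope check of footnote 6: the volumetric heat ratio is the
# per-mass heat ratio times the density ratio (values as quoted above).
heat_per_mass_ratio = 3.94   # coolant vs. an equal mass of air
density_ratio = 869.9        # coolant density vs. air
heat_per_volume_ratio = heat_per_mass_ratio * density_ratio
# ~3427.4 with these rounded inputs, matching the quoted 3427.5X
```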

    Source: HPE

    About HPE

    Hewlett Packard Enterprise (HPE), established in 2015 following its split from Hewlett-Packard, is a global information technology company headquartered in Spring, Texas. HPE offers a comprehensive portfolio that includes servers, storage solutions, networking products, and cloud-based services, all designed to help organizations connect, protect, analyze, and act upon their data from edge to cloud. The company serves a diverse range of industries, including financial services, healthcare, manufacturing, and telecommunications. In the fiscal year 2024, HPE reported revenues of approximately $30.1 billion. As of 2024, Hewlett Packard Enterprise (HPE) employs approximately 61,000 individuals worldwide.

  • atNorth Wins Iceland’s Top ICT Award at UTmessan


    (From L to R): Erling Guðmundsson, COO, atNorth with Iceland’s President, Halla Tómasdóttir. Image: atNorth

    REYKJAVÍK, Iceland, Feb 11, 2025 – atNorth has announced its recognition at one of Iceland’s prestigious information technology events, UTmessan, along with other leading Icelandic data center operators for their role in facilitating the development of Iceland’s infrastructure.

    The Information Technology Award of the Icelandic Computer Society (Ský) is an honorary prize awarded to organizations for outstanding information technology contributions in Iceland. The accolade highlights the positive impact of data centers on Iceland’s digital, power and economic infrastructure. The award was presented by Iceland’s President, Halla Tómasdóttir, at a ceremony on Feb 7, Friday in Reykjavik.

    The data center industry is thriving in Iceland, as in its Nordic neighbors, thanks to a cool climate and an abundance of renewable energy that enable energy-efficient infrastructure cooling techniques.

    The demand for data center capacity in Iceland has driven the country’s investment in digital connectivity and national power infrastructure to ensure the long-term sustainability of supply. Iceland boasts multiple undersea fiber optic cables connecting the country to the UK, North America and mainland Scandinavia, and has a robust domestic fiber optic network with multiple providers offering high-speed internet connectivity throughout the country – factors that have accelerated the growth of other businesses in Iceland.

    “We believe that data centers can be pivotal to a thriving economy and are committed to supporting the countries in which we operate”, says Erling Guðmundsson, COO, atNorth. “We are proud to be recognized alongside our peers as having contributed to putting Iceland on the map as a perfect location for data centers. By collaborating with local governments and likeminded organizations we hope to create data center ecosystems that operate with environmental responsibility, energy efficiency, and community integration.”

    atNorth has won many awards for its services, including the ‘Top Energy Efficient HPC Achievements’ award at the HPCwire Readers’ Choice Awards, the ‘Digital Infrastructure Project of the Year’ prize at the Tech Capital Awards, and the ‘Colocation Provider of the Year’ award at the Electrical Review & Data Centre Review Excellence Awards. The business also won the ‘Location Award’ for Iceland at the Tech Capital Awards in 2023 and has been included in TechRound’s Sustainability60 list, which celebrates sustainability-focused companies across the UK and Europe.

    About atNorth

    atNorth is a leading Nordic data center services company that offers sustainable, cost-effective, scalable colocation and high-performance computing services trusted by industry-leading organizations. The business acquired leading high-performance computing (HPC) provider Gompute in 2023, enabling a compelling full-stack offering tailored to AI and other critical high-performance workloads. With sustainability at its core, atNorth’s data centers run on renewable energy resources and support circular economy principles. All atNorth sites leverage innovative design, power efficiency, and intelligent operations to provide long-term infrastructure and flexible colocation deployments. The tailor-made solutions enable businesses to calculate, simulate, train and visualize data workloads in an efficient, cost-optimized way. atNorth is headquartered in Reykjavik, Iceland and operates seven data centers in strategic locations across the Nordics, with additional sites to open in Helsinki, Finland in Q1 2025 and Ballerup, Denmark in Q2 2025, as well as its tenth site under construction in Kouvola, Finland and its eleventh in Ølgod, Denmark. The business has also secured land for a future mega site in the Sollefteå Municipality in Sweden.

    Source: atNorth

  • 7EDGE Selects AWS Lambda for Serverless Innovation


    7EDGE achieves AWS Lambda Service Delivery. Photo: 7EDGE

    BENGALURU, India, Feb 11, 2025 – 7EDGE has announced that it received the Amazon Web Services (AWS) Service Delivery designation for AWS Lambda. This designation shows that 7EDGE adheres to best practices and has successfully delivered AWS services to clients.

    Achieving the AWS Service Delivery designation differentiates 7EDGE as a member of the AWS Partner Network (APN), recognizing that 7EDGE follows best practices and has proven successful in delivering AWS serverless computing services to end customers.

    Cedan Christopher Misquith, manager of engineering management at 7EDGE, says, “AWS Lambda propels our clients into the future, offering scalable, efficient, and event-driven solutions. Its serverless architecture eliminates infrastructure burdens, empowering 7EDGE to deliver applications rapidly. This accelerates client innovation, reduces time-to-market, optimizes costs, and positions them for long-term success.”

    7EDGE leverages the power of AWS Lambda to drive digital transformation and modernize applications. AWS Lambda is a serverless computing service that eliminates the need for traditional infrastructure management, allowing 7EDGE and the customer to focus on innovation and development. By harnessing features like automatic scaling, pay-as-you-go pricing, and built-in fault tolerance, 7EDGE provides scalable, affordable, and dependable solutions. The options fulfill the needs of today’s technology-driven world.
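    To illustrate the event-driven model described above, here is a minimal sketch of an AWS Lambda handler in Python. The function name, event fields, and response shape are hypothetical, chosen for illustration – not 7EDGE's actual code:

```python
# Minimal sketch of an event-driven AWS Lambda handler. The "order_id"
# payload and response shape are illustrative assumptions.
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes per event; there are no servers to
    manage, and the platform scales instances with request volume."""
    order_id = event.get("order_id")
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    # Billing is per invocation and per ms of execution (pay-as-you-go),
    # so the handler does one bounded unit of work and returns.
    return {"statusCode": 200,
            "body": json.dumps({"processed": order_id})}

# Local smoke test; in production this would be wired to a trigger
# such as API Gateway, S3, or SQS rather than called directly.
print(lambda_handler({"order_id": "A-123"}, None)["statusCode"])  # prints 200
```

    In a real deployment the platform supplies the `event` and `context` arguments and handles retries and scaling, which is what removes the infrastructure-management burden the announcement describes.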

    Building on past successes, including the AWS Well-Architected Partner Program and the AWS SaaS Consulting Competency, 7EDGE is committed to delivering innovative, cloud-native solutions.

    About 7EDGE

    7EDGE, founded in 2010 and headquartered in Bengaluru, India, is an Internet-first company specializing in digital transformation for brands and businesses. Their services cover technology consulting, UI/UX design, application development, DevOps, managed cloud services, big data analytics, and AI. They cater to industries including healthcare, manufacturing, media, retail, and consumer products. As of March 31, 2022, 7EDGE reported an annual revenue of approximately $450,000. Over the past decade, they have completed over 500 web and mobile application projects for clients across the USA, UK, UAE, Germany, Seychelles, India, and Singapore.

    Source: 7EDGE

  • Supermicro Unveils Full Production of AI Data Center Solutions with NVIDIA Blackwell


    Supermicro Ramps Full Production of NVIDIA Blackwell Rack-Scale Solutions with NVIDIA HGX B200.

    SAN JOSE, CA, Feb 6, 2025 – Supermicro, Inc. has announced the production availability of its end-to-end AI data center building block solutions accelerated by the NVIDIA Blackwell platform. The Supermicro building block portfolio provides the core infrastructure elements necessary to scale Blackwell solutions with exceptional time to deployment. The portfolio includes a broad range of air-cooled and liquid-cooled systems with multiple CPU options, featuring thermal designs that support traditional air cooling, liquid-to-liquid (L2L) cooling, and liquid-to-air (L2A) cooling. In addition, a full data center management software suite, rack-level integration (including full network switching and cabling), and cluster-level L12 solution validation can be delivered as a turn-key offering with global delivery, professional support, and service.

    “In this transformative moment of AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power,” said Charles Liang, president and CEO of Supermicro. “Supermicro’s NVIDIA Blackwell GPU offerings in plug-and-play scalable units with advanced liquid cooling and air cooling are empowering customers to deploy an infrastructure that supports increasingly complex AI workloads while maintaining exceptional efficiency. This reinforces our commitment to providing sustainable, cutting-edge solutions that accelerate AI innovation.”

    Supermicro’s NVIDIA HGX B200 8-GPU systems utilize next-gen liquid-cooling and air-cooling technology. The newly developed cold plates and the new 250kW coolant distribution unit (CDU) more than double the cooling capacity of the previous generation in the same 4U form factor. Available in 42U, 48U, or 52U configurations, the rack-scale design with the new vertical coolant distribution manifolds (CDM) no longer occupies valuable rack units. This enables 8 systems, comprising 64 NVIDIA Blackwell GPUs, in a 42U rack, and up to 12 systems with 96 NVIDIA Blackwell GPUs in a 52U rack.

    The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1000W TDP Blackwell GPUs. Up to 4 of the new 10U air-cooled systems can be installed and fully integrated in a rack, the same density as the previous generation, while providing up to 15x inference and 3x training performance.

    The new SuperCluster designs incorporate NVIDIA Quantum-2 InfiniBand or NVIDIA Spectrum-X Ethernet networking in a centralized rack, enabling a non-blocking, 256-GPU scalable unit in five racks or an extended 768-GPU scalable unit in nine racks. The architecture — purpose-built for NVIDIA HGX B200 systems with native support for the NVIDIA AI Enterprise software platform for developing and deploying production-grade, end-to-end agentic AI pipelines — combined with Supermicro’s expertise in deploying the world’s largest liquid-cooled data centers delivers exceptional efficiency and time-to-online for today’s most ambitious AI data center projects.

    Liquid-Cooled or Air-Cooled: Supermicro NVIDIA HGX B200 Systems

    Liquid-cooled NVIDIA HGX B200 Systems and Racks. Image: Supermicro

    The new liquid-cooled 4U NVIDIA HGX B200 8-GPU system features newly developed cold plates and an improved tubing design that further enhance the efficiency and serviceability of its predecessor, which was used for the NVIDIA HGX H100/H200 8-GPU system. Complemented by a new 250kW cooling distribution unit that more than doubles the cooling capacity of the previous generation while maintaining the same 4U form factor, the new rack-scale design with the new vertical coolant distribution manifolds (CDM) enables a denser architecture with flexible configuration scenarios for various data center environments. Supermicro offers 42U, 48U, or 52U rack configurations for liquid-cooled data centers. The 42U or 48U configuration provides 8 systems and 64 GPUs in a rack, and a 256-GPU scalable unit in five racks. The 52U rack configuration allows 96 GPUs in a rack and enables a 768-GPU scalable unit in nine racks for the most advanced AI data center deployments. Supermicro also offers an in-row CDU option for large deployments, as well as a liquid-to-air cooling rack solution that doesn’t require facility water.

    Supermicro’s NVIDIA HGX B200 systems natively support NVIDIA AI Enterprise software to accelerate time to production AI. NVIDIA NIM microservices allow organizations to access the latest AI models for fast, secure, and reliable deployment on NVIDIA accelerated infrastructure anywhere – whether in data centers, the cloud or workstations.

    For traditional data centers, the new 10U air-cooled NVIDIA B200 8-GPU system is also available, with a redesigned modular GPU tray to house the NVIDIA Blackwell GPUs in an air-cooled environment. The air-cooled rack design follows the proven, industry-leading architecture of the previous generation, four systems and 32 GPUs in a 48U rack, while providing NVIDIA Blackwell performance. All Supermicro NVIDIA HGX B200 systems are equipped with a 1:1 GPU-to-NIC ratio supporting NVIDIA BlueField-3 SuperNICs or NVIDIA ConnectX-7 NICs for scaling across a high-performance compute fabric.

    Supermicro provides support for systems included in the NVIDIA-Certified Systems program. This program incorporates NVIDIA GPUs, CPUs, and high-speed, secure networking technologies into systems from leading NVIDIA partners, ensuring configurations that are validated for optimal performance, reliability, and scalability. By choosing an NVIDIA-Certified System, enterprises can confidently select hardware solutions to power their accelerated computing workloads. NVIDIA has certified Supermicro systems with NVIDIA H100 and H200 GPUs.

    End-to-end Liquid-Cooling Solution for NVIDIA GB200 NVL72

    Supermicro NVIDIA GB200 NVL72 SuperCluster features the
    new advanced in-rack coolant distribution unit. Image: Supermicro

    Supermicro’s SuperCluster solution, based on the NVIDIA GB200 NVL72 system, represents a breakthrough in AI computing infrastructure, combining Supermicro’s end-to-end liquid-cooling technology. The system integrates 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs in a single rack, delivering exascale computing capabilities through NVIDIA’s most extensive NVLink network to date, achieving 130 TB/s of GPU communications.
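    The 130 TB/s aggregate is consistent with per-GPU NVLink bandwidth summed across the rack; a minimal sketch, assuming NVIDIA's published ~1.8 TB/s per-GPU NVLink bandwidth for Blackwell (a figure from NVIDIA's specifications, not stated in the announcement itself):

```python
# Rough sanity check of the quoted 130 TB/s aggregate NVLink bandwidth.
# The 1.8 TB/s per-GPU figure is an assumption from NVIDIA's Blackwell specs.
gpus = 72
nvlink_per_gpu_tb_s = 1.8
aggregate_tb_s = gpus * nvlink_per_gpu_tb_s  # ~129.6, i.e. the ~130 TB/s quoted
```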

    The 48U solution’s versatility supports both liquid-to-air and liquid-to-liquid cooling configurations, accommodating various data center environments. Additionally, Supermicro’s SuperCloud Composer software provides management tools for monitoring and optimizing liquid-cooled infrastructure, delivering a complete solution from proof of concept to full-scale deployment.

    End-to-end Data Center Solution and Deployment Services for NVIDIA Blackwell

    From proof-of-concept (PoC) to full-scale deployment, Supermicro serves as a comprehensive one-stop solution provider with global manufacturing scale, delivering all necessary components, data center-level solution design, liquid-cooling technologies, networking solutions, cabling, management software, testing and validation, and onsite installation services. Its in-house liquid-cooling ecosystem offers a complete, custom-designed thermal management solution, featuring optimized cold plates for GPUs, CPUs, and memory modules, along with versatile coolant distribution unit form factors and capacities, manifolds, hoses, connectors, cooling towers, and sophisticated monitoring and management software. With production facilities across San Jose, Europe, and Asia, Supermicro offers unmatched manufacturing capacity for liquid-cooled rack systems, ensuring timely delivery, reduced total cost of ownership (TCO) and environmental impact, and consistent quality.

    About Super Micro Computer, Inc.

    Super Micro Computer Inc., or Supermicro, is a leading provider of high-performance server technology and green computing solutions. Founded in 1993 by Charles Liang and Sara Liu, the company is headquartered in San Jose, California. Supermicro offers a complete range of products, including servers, storage systems, networking devices, and server management software, serving industries such as enterprise data centers, cloud computing, artificial intelligence, 5G, and edge computing. As of June 2023, the company employs approximately 5,126 individuals globally. In the fiscal year 2024, Supermicro reported revenues of approximately $15 billion, reflecting significant growth driven by its innovative solutions and expanding market presence.

    Source: Super Micro Computer, Inc.

  • Ansys 2025 R1 Enhances Product Design with Digital Engineering Tools

    Ansys 2025 R1 Enhances Product Design with Digital Engineering Tools

    PITTSBURGH, PA, Feb 5, 2025 – Ansys 2025 R1 features refined digital engineering-enabling technologies that integrate with existing infrastructure, minimizing disruption and empowering teams to collaborate on innovative products. Powered by AI, cloud computing, GPUs, and HPC, the Ansys 2025 R1 enhancements support faster, more collaborative decision-making, wider design exploration, and shorter product design times.

    “Ansys 2025 R1 offers more integration capabilities than ever, helping teams carve a digital path through the entire lifecycle of a product, with tools and solutions to help expertly manage data pre- and post-development,” said Shane Emswiler, senior vice president of products at Ansys. “This release highlights that our solutions can serve as guideposts, helping disconnected teams stay the course and work collaboratively from a single, accessible source of truth. This not only significantly cuts costs, but it also accelerates time-to-market, which helps our customers stay competitive.”

    Advanced Physics Solvers

    Ensuring product performance begins with understanding the multiphysics involved, from the components to the system. The latest release from Ansys highlights new products and capabilities that deliver fast, high-fidelity, physics-based results, helping teams make informed decisions earlier in the design cycle:

    Accurate thermal management simulation. Image: Ansys
    • Ansys Discovery 3D simulation software significantly expands thermal modeling with the addition of electrothermal analysis, orthotropic conductivity, and internal fans while maintaining speed and ease of use
    • The structural analysis suite features an integrated solution for noise, vibration, and harshness (NVH), delivering a 10x faster frequency response function (FRF) calculator, vibro-acoustics mapping, optimized meshing, and mode contribution analysis
    • Ansys Electronics connects to other Ansys software products, enabling improved meshing that is crucial for 3D integrated circuits, automated workflow capabilities, and boosted simulation performance
    • A new Polymer FEM product utilizes high-fidelity models to capture real-world materials behavior, addressing customers’ evolving materials simulation requirements

    “The Ansys platform offers key advantages for Firefly as we rapidly innovate to support responsive space services,” said Brigette Oakes, vice president of engineering at Firefly Aerospace. “CFD is one area where Ansys shines – Fluent accurately models combustion dynamics and complex thermal interactions in our engine designs. Its integration of thermal and structural analysis simplifies workflows, and its user-friendly interface and responsive support team make it a critical tool for a fast-paced company like ours.”

    Cloud, HPC and GPUs

    Cloud computing, HPC, and GPUs are speeding up how modern products are developed. Key features like accessibility, interoperability, and scalability drive this progress, enabling users to move beyond traditional desktop tools and collaborate on more innovative products. Ansys R1 introduces improvements to its GPU solvers and adds web-based, on-demand features for multiple applications:

    New use cases for the Fluent GPU solver. Image: Ansys
    • The Ansys Fluent multi-GPU fluid simulation solver supports applications with large mesh cell counts, such as automotive external aerodynamics. This enables designers to include more variables to improve accuracy without compromising simulation speed
    • Ansys CFD HPC Ultimate is a new product that enables enterprise-level CFD capabilities for one job on multiple CPU cores or GPUs without the need for additional HPC licenses
    • New GPU-accelerated simulations in Ansys Lumerical FDTD advanced 3D electromagnetic simulation software use 50% less GPU memory and reduce meshing time by 20% compared to CPU-based runs
    • The Ansys Mechanical GPU-accelerated direct structural finite element analysis solver is up to 6x faster than alternative solutions and the iterative solver is 6x faster than CPU-only versions
    • Ansys Cloud Burst Compute with Discovery empowers designers to solve 1,000 design variations in 10 minutes. Parametric studies in Discovery are accelerated by 100x or more by leveraging NVIDIA GPUs
    • The Ansys Cloud Burst Compute capability provides elastic, flexible, on-demand HPC capacity for Ansys Mechanical, Fluent, and Ansys HFSS high-frequency electromagnetic simulation software

    Artificial Intelligence

    Ansys is expanding its tools with AI-powered technologies to improve speed, innovation, and usability across the CAE industry. Its AI technology helps teams process new or existing data to evaluate designs in minutes, train custom AI models within minutes, accelerate product development, and reduce costs:

    AC magnetics (Eddy current) A-Phi solver. Image: Ansys
    • Ansys has developed an intuitive, interactive tool to streamline data preparation for SimAI modeling
    • SimAI allows users to expand the training data to gain insight during post-processing, such as honing analysis around a specific component within a larger design
    • Ansys Electronics AI+ uses AI-driven techniques to predict resources and runtime for electronics simulations in Ansys Maxwell advanced electromagnetic field solver, Ansys Icepak electronics cooling simulation software, and HFSS
    • Advanced synthetic radar simulation within Ansys RF Channel Modeler high-fidelity wireless channel modeling software empowers the digital mission engineering community with a comprehensive training and validation dataset for ground-based AI target identification

    “Ansys’ industry-leading simulation solutions will help drive Vertiv’s business model as we design solutions for the future,” said Steve Blackwell, vice president of engineering at Vertiv. “Our mission is to revolutionize the way the world conceptualizes and develops data centers — from cooling and power technologies through implementing AI in the design of the data center itself. With Ansys, we will more quickly meet critical milestones that will help us deliver the most optimal infrastructure to support our customers’ AI-based projects with energy-efficient and reliable future-forward designs.”

    Connected Ecosystem

    Cutting-edge R&D involves design methods like model-based systems engineering (MBSE) and automation to maintain efficient workflows. Ansys tools are flexible and scalable, allowing easy integration of new technologies into current infrastructure without disrupting product design. The Ansys 2025 R1 release includes updates that improve MBSE features and data management, simplifying the shift to digital processes:

    • Ansys ModelCenter MBSE software and SAM support SysML v2, enabling enhanced product designs. This update improves team collaboration and makes product requirements simpler to access and use across the engineering organization, saving time and boosting productivity.
    • ModelCenter now has improved MBSE connectivity for better compatibility, including an enhanced Capella connector and deeper integration with Ansys SAM for intuitive search, save, and modification
    • Ansys Minerva simulation process and data management software generic connector improvements help reduce the time and cost of implementation by standardizing how external data is brought into Minerva, allowing users to verify and resolve any conflicts before uploading. The connector also helps improve engineer productivity with new asynchronous job launch capabilities

    Additional R1 Announcements Include:

    Unified CAD, CAE, and PLM experience. Image: Ansys
    • Ansys optiSLang process integration and design optimization software include enhancements across interfaces, distributed computing, and more advanced algorithms, adding flexibility and performance to the design workflow
    • Ansys Granta Materials Intelligence (MI) product collection’s integrations with CAE, computer-aided design, and product lifecycle management software now feature a unified user experience between the Granta end-user interface and the integration interfaces
    • Task-based performance improvements made to the fault tolerant meshing and watertight meshing workflows in Fluent improve meshing speeds
    • Ansys PowerX is a new tool for power field-effect transistor (FET) and power management integrated circuit (PMIC) analysis, simulation, and optimization

    About Ansys

    Ansys, founded in 1970, specializes in engineering simulation software. It offers a comprehensive suite of tools for structural analysis, fluid dynamics, electromagnetic field simulation, and more, enabling industries to design and test products virtually. Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. Serving sectors such as aerospace and defense, automotive, energy, industrial equipment, materials and chemicals, consumer products, healthcare, and construction, Ansys supports innovation across diverse fields. As of 2023, the company reported annual revenues exceeding $2.3 billion and employed over 6,200 people worldwide. Headquartered in Canonsburg, Pennsylvania, Ansys continues to advance engineering simulation technologies, empowering organizations to enhance product development processes.

    Source: ANSYS, Inc.

  • Altair Names Sistemi HS as Channel Partner for Italy

    Altair Names Sistemi HS as Channel Partner for Italy

    TROY, MI, Jan 31, 2025 – Altair has named Sistemi HS as a channel partner for Italy. Sistemi HS will offer Altair’s comprehensive portfolio of electronics, data analytics, and simulation solutions to customers throughout Italy.

    “Altair’s mission is to help customers transform their businesses by leveraging world-leading computational intelligence,” said Kimon Afsaridis, managing director of Eastern Europe and vice president of indirect EMEA sales, Altair. “By partnering with Sistemi HS, we further expand our reach in Italy and provide even more organizations with advanced technologies that accelerate innovation and drive meaningful outcomes.”

    “We are thrilled to announce our partnership with Altair, strengthening our commitment to delivering cutting-edge technology solutions,” said Domenico Condelli, general manager, Sistemi HS. “This collaboration helps us combine our expertise with Altair’s best-in-class technologies, creating exceptional value for our clients and driving digital transformation across key industries.”

    About Altair

    Altair Engineering Inc., founded in 1985 and headquartered in Troy, Michigan, is a global leader in computational science and AI. The company offers software and cloud solutions across various domains, including product development, high-performance computing (HPC), simulation, AI, and data analytics. Altair’s comprehensive, open-architecture platforms empower organizations to design more efficient and sustainable products and processes. Serving industries such as automotive, aerospace, and manufacturing, Altair has established itself as a key player in engineering and enterprise analytics. In 2024, Siemens announced its agreement to acquire Altair for $10.6 billion, aiming to strengthen its position in industrial software.

    About Sistemi HS

    Sistemi HS is an Italian system integrator headquartered in Collegno, Italy. The company specializes in providing IT infrastructure and managed services to a diverse clientele, including large corporations, small and medium-sized enterprises, professional firms, and public administration. Its offerings encompass servers, storage solutions, software, networking, security, cloud services, connectivity, and telecommunications. Sistemi HS has formed strategic partnerships with leading global IT vendors and holds significant certifications, including Platinum Partner status with HP and Hewlett Packard Enterprise for over two decades. As of recent data, the company employs approximately 200 people.

    Source: Altair

  • ionstream.ai Expands Cloud GPU Platform with NVIDIA L40S

    ionstream.ai Expands Cloud GPU Platform with NVIDIA L40S

    HOUSTON, TX, Jan 16, 2025 – ionstream.ai has announced the immediate availability of NVIDIA L40S GPUs on its GPU as a Service (GaaS) platform. This strategic expansion provides organizations with a cost-effective solution optimized for AI inference and fine-tuning tasks, offering an alternative to larger, more expensive GPU options.

    Source: ionstream

    “Organizations are looking for right-sized GPU solutions that match their specific AI workloads,” said Jeff Hinkle, chief executive officer at ionstream.ai. “The addition of the NVIDIA L40S to our cloud platform provides enterprises with the ideal infrastructure for inference and model refinement tasks, delivering the perfect balance of performance and cost-efficiency.”

    Enterprise-Grade AI Infrastructure, On Demand

    The NVIDIA L40S GPU, powered by the Ada Lovelace architecture, represents a breakthrough in AI infrastructure accessibility. ionstream.ai’s implementation delivers:

    • Advanced AI Capabilities:
      • Optimized for AI inference and fine-tuning workflows
      • Ideal for production-scale model deployment
      • Cost-effective alternative to H100 and H200 GPUs for inference tasks
      • Multi-user support for enterprise workloads
    • Revolutionary Cost Economics:
      • Right-sized infrastructure for inference workloads
      • Improved energy efficiency for sustainable operations
      • Zero upfront capital expenditure
      • Pay-as-you-go pricing with per-minute billing
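    The per-minute billing model above makes job costs easy to estimate: a job's cost is simply its runtime in minutes times one-sixtieth of the hourly rate. A minimal sketch of that arithmetic (the hourly rate below is a hypothetical placeholder, not a published ionstream.ai price):

    ```python
    # Estimate pay-as-you-go GPU cost under per-minute billing.
    # NOTE: HOURLY_RATE_USD is a hypothetical placeholder, not a quoted price.
    HOURLY_RATE_USD = 1.00

    def job_cost(minutes: int, hourly_rate: float = HOURLY_RATE_USD) -> float:
        """Cost of a job billed per minute at the given hourly rate."""
        per_minute = hourly_rate / 60  # per-minute billing granularity
        return round(minutes * per_minute, 2)

    # A 90-minute fine-tuning run at the placeholder rate:
    print(job_cost(90))  # 1.5, i.e. $1.50 at $1.00/hour
    ```

    Because billing is per minute rather than per hour, a short inference job pays only for the minutes it actually runs.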

    Transforming Enterprise AI Capabilities

    The L40S platform enables efficient AI model deployment across industrial domains:

    • Oil & Gas Exploration: Process complex seismic data through high-performance computing capabilities, enabling rapid subsurface imaging and reservoir characterization. The L40S accelerates traditional seismic processing workflows while supporting emerging AI-enhanced interpretation methods, reducing time-to-insight for critical exploration decisions.
    • Healthcare & Life Sciences: Deploy medical imaging models and fine-tune diagnostic systems
    • Financial Services: Run real-time inference for fraud detection and risk analysis
    • Automotive & Manufacturing: Power production-ready computer vision applications

    Flexible Deployment Options Meet Enterprise Needs

    ionstream.ai’s platform offers deployment flexibility:

    • Instant Provisioning: Deploy L40S instances in under 60 seconds
    • Flexible Acquisition Options: Available for purchase or lease to meet varying business needs
    • Enterprise-Grade Infrastructure: Hosted in a Tier 4-designed data center in Spring, Texas, to provide optimal uptime
    • 24/7 Expert Support: Direct access to GPU infrastructure specialists

    Availability and Special Launch Offer

    The NVIDIA L40S is available on the ionstream.ai platform. For a limited time, new customers can receive:

    • Complimentary one-month proof of concept available for qualified enterprises
    • Complimentary AI infrastructure optimization consultation