Category: Computing

  • KORE Appoints Jared Deith as Chief Revenue Officer

ATLANTA, GA, Feb 4, 2025 – KORE Group Holdings, Inc. has named Jared Deith as executive vice president and chief revenue officer. Deith brings a formidable track record of building high-performing teams and driving transformative growth in the IoT market.

    Jared Deith

    “I’m honored and excited to step into the role of CRO and drive KORE to expected new heights of success and growth,” said Deith. “In my recent role leading the Global Connected Health business, I’ve seen firsthand the transformative power of IoT solutions for our customers. Many are fueling their growth through connected devices and require highly available, secure, and scalable solutions—perfectly aligned with KORE’s strengths.” 

    “As a proven leader in the IoT market, Jared brings a growth mindset and an unwavering customer focus, making him the ideal choice,” said Ron Totton, chief executive officer of KORE. “His entrepreneurial spirit and execution-focused approach will help us redefine what’s possible in the IoT space.”

    Tim Donahue, KORE’s chairman of the Board of Directors, added, “Jared has a keen understanding of customer needs and a relentless drive and energy – exactly what’s needed to build on KORE’s strong foundation and take the Company to new heights.”

    In his new role, Deith will oversee global sales, partnerships, marketing and revenue operations, accelerating KORE’s momentum as the go-to provider for IoT solutions. 

    About KORE

    KORE Group Holdings, Inc., founded in 2002 and headquartered in Atlanta, GA, is a provider of Internet of Things (IoT) services and solutions worldwide. The company offers a comprehensive suite of IoT offerings, including connectivity and location-based services, device solutions, and managed and professional services, catering to industries such as healthcare, fleet and vehicle management, asset management, communication services, and industrial manufacturing. As of 2023, KORE employs approximately 600 individuals. In the trailing twelve months ending June 30, 2024, the company reported revenues of approximately $285 million.

    Source: KORE

  • KULR, EDOM Tech Announce AI Supply Chain Collaboration

HOUSTON, TX, Feb 4, 2025 – KULR Technology Group, Inc. announced its partnership with EDOM Technology, a long-standing NVIDIA Channel Partner and an integration and distribution company. This collaboration positions KULR to deliver KULR Xero Vibe (KXV) and KULR ONE product lines to Taiwan, a global epicenter of AI supply chain development, by leveraging its suite of energy management products and solutions to address the need for large-scale systems cooling within the AI ecosystem.

    Image: KULR Technology Group, Inc.

The partnership will enable KULR to service both server and edge computing devices within the AI supply chain while deploying its suite of energy management products and solutions to meet the needs of the entire AI ecosystem. By aligning with EDOM, KULR is positioning itself to address the global surge in demand for AI infrastructure, fueled by initiatives like The Stargate Project, a recent $500 billion push to accelerate AI infrastructure expansion in the United States.

    “Our partnership with EDOM underscores our commitment to scaling our AI solutions to meet the growing demands of the industry,” said Michael Mo, CEO of KULR Technology Group. “EDOM’s deep-rooted relationship with NVIDIA and extensive expertise in the AI supply chain make them an ideal partner to integrate and distribute our technologies, such as the KXV and KULR ONE, across the region.”

    Taiwan plays a pivotal role in the global AI supply chain, driving advancements that shape the future of AI infrastructure. Highlighting this prominence, Bloomberg featured Taiwan’s importance in the AI ecosystem. With EDOM as a key partner, KULR plans to grow its AI business across Taiwan and the broader Asian market by tapping into EDOM’s market knowledge.

    In recent months, the company has made progress advancing its infrastructure buildout to support the AI ecosystem, including:

    KXV Licensing Partnership for Data Center Cooling: KULR secured a licensing agreement with a Japanese company specializing in systems integration and semiconductor solutions. Their KXV technology will help optimize large-scale fan systems for data center cooling, HVAC, and other industrial applications.

    KXV with NVIDIA Jetson: KULR introduced KXV, integrated with NVIDIA Jetson, to improve vibration control for edge AI systems. This integration combines strong vibration management with AI capabilities, ensuring high performance and reliable operation in edge AI environments.

    Carbon Fiber Cathode Licensing Agreement in Nuclear Reactor Systems: KULR has signed a licensing agreement with a technology partner in Japan for advanced carbon fiber cathode use in nuclear reactor systems. The license focuses on supporting laser-based nuclear fusion systems and small modular reactors (SMRs), offering a cost-effective and reliable method for producing fusion energy with high-powered lasers.  According to Goldman Sachs Research, nuclear power will be a key part of a suite of new energy infrastructure built to meet data-center power demand driven by AI.

    Mo concluded, “With our shared focus on innovation and a commitment to driving progress, this collaboration with EDOM empowers us to deliver cutting-edge technologies, from thermal management solutions to AI-optimized products like the Jetson AI platform, to the rapidly expanding AI supply chain.”

KULR and EDOM are focused on working together to advance AI and energy management, creating a supply chain ecosystem that supports future AI technologies.

    About KULR Technology Group Inc.

KULR Technology Group Inc., founded in 2013 and headquartered in San Diego, CA, specializes in developing and commercializing high-performance thermal management technologies for electronics, batteries, and other components. The company’s products, including lithium-ion battery thermal runaway shields, automated battery cell screening systems, and fiber thermal interface materials, serve industries including space, aerospace, defense, electric vehicles, energy storage, battery recycling, transportation, cloud computing, and 5G communication devices. As of 2023, KULR employs approximately 60 individuals. In 2023, the company reported revenues of $10 million.

    About EDOM Technology

    EDOM Technology Co., Ltd., established in 1996 and headquartered in Taipei, Taiwan, is a distributor of integrated circuits (ICs) and electronic components. The company offers a diverse range of products, including analog ICs, batteries, connectors, embedded modules, memory, microcomponents, sensors, and optoelectronics. The products serve various industries such as automotive, computing, consumer electronics, Internet of Things (IoT), medical, mobile, networking and data centers. With over 800 employees worldwide, EDOM operates 32 offices across Greater China, Southeast Asia, Japan, Korea, and India. In 2023, the company reported revenues of approximately US$3.43 billion.

    Source: KULR Technology Group, Inc.

  • Baya Systems Welcomes Manish Muthal and Siva Yerramilli to Board of Directors

    SANTA CLARA, CA, Jan 29, 2025 – Baya Systems has announced the additions of Manish Muthal and Siva Yerramilli to its Board of Directors. Both bring extensive experience in semiconductor technology and software and scaling disruptive startups, further strengthening Baya Systems’ leadership team.

    They join chairman Jim Keller, Baya Systems CEO Dr. Sailesh Kumar and Matrix Capital General Partner Stan Reiss on the board.

    Manish Muthal. Image: LinkedIn

    Manish Muthal, senior managing director at Maverick Silicon, is a seasoned semiconductor industry veteran with leadership roles at Intel, LSI, Xilinx and Broadcom, as well as a track record of founding successful venture-backed startups.

    “At Maverick Silicon, we believe Baya Systems’ technology will be the key enabler of next-generation data movement, communication and multi-die systems that will power AI acceleration, scalable infrastructure and mobility,” said Muthal. “I’m excited to join a team of this caliber that is enabling the paradigm shift for explosive growth of intelligent compute systems.”

Also joining the board is Siva Yerramilli, SVP of the Corporate Incubation Group at Synopsys, Inc., which delivers trusted and comprehensive silicon-to-systems design solutions, from electronic design automation to silicon IP and system verification and validation. Prior to this, he had a very successful career at Intel, leading technology enablement and product development.

    “Having followed the founding team building NetSpeed Systems and their contribution to the Xeon program during my time at Intel, I see the generational leap they have made with Baya Systems. I look forward to being part of this journey,” said Yerramilli. “Baya’s focus on software-driven architecture analysis and development and their expertise in highly accurate performance modeling are fundamental to the next generation of system design that translates target software workloads rapidly into customized and future-proofed designs.”

    “Manish and Siva bring great experience that complements our team,” said Keller. “We’re thrilled to welcome them to the board. Their guidance and insights will be invaluable to our vision of more capable, intelligent compute.”

    About Baya Systems

Baya Systems, headquartered in Santa Clara, CA, specializes in high-performance, software-driven system IP solutions. Founded in 2018, the company focuses on enabling the design, optimization, and deployment of complex, intelligent computing systems. Its flagship products include WeaverPro software and WeaveIP, which provide customizable and scalable transport fabrics for multi-chiplet and system-on-chip (SoC) designs. The solutions cater to industries such as AI acceleration, automotive, IoT, and data centers, delivering high throughput, low latency, and chiplet-readiness. Baya Systems is a key player in optimizing data movement and supporting advanced, data-intensive applications, including autonomous systems, AI processing, and large-scale infrastructure. Baya Systems is gaining recognition for its innovative solutions that reduce the risks and complexities of building next-generation high-performance systems.

    Source: Baya Systems

  • InOrbit Unveils Space Intelligence to Connect Robotics and Enterprise

    MOUNTAIN VIEW, CA, Jan 29, 2025 – Launched by InOrbit.AI, Space Intelligence is a product suite that acts as the central nervous system for smart operations. By integrating autonomous robots, line-of-business data and spatial computing, Space Intelligence leverages physical AI to unlock a new level of performance and flexibility in warehouses, manufacturing facilities, and other demanding environments to drive smarter operations.

“Traditional automation offers rigid and static operations,” said Florian Pestoni, CEO of InOrbit. “InOrbit Space Intelligence enables enterprises to leap forward with software-defined orchestration and leverage the latest developments in Physical AI and Reinforcement Learning to drive dynamic process optimization. Now enterprise customers can drive productivity and improve operational resilience.”

    Space Intelligence combines business, robotic, and spatial data to streamline real-world operations. It offers flexibility and clear insights, supporting ongoing improvements and achieving balance.

    InOrbit Space Intelligence offers enterprises:

    • Unified Command: Seamlessly connecting enterprise systems (WMS, ERP, WES) with robot operations for complete process visibility and control.
    • Intelligent Orchestration: Mapping and managing autonomous robots, IoT devices, and non-robotic equipment for synchronized and optimized facility operations.
    • Proactive Operations: Real-time traffic management, path planning and incident response to execute business workflows and improve utilization.
    • AI-Powered Optimization: Continuously learning and adapting, dynamically optimizing operations for peak efficiency and improved ROI.

    “We are excited to introduce Space Intelligence to enterprises,” continued Pestoni. “This transformative technology enables businesses to truly unlock the full potential of a facility with smart operations.”

    About InOrbit.AI

    InOrbit, headquartered in Mountain View, CA, specializes in cloud-based robot operations (RobOps) management. Founded in 2017, the company offers an AI-powered platform that helps enterprises and robot developers optimize autonomous robot fleets at scale. Serving industries like logistics, manufacturing, and smart cities, InOrbit streamlines deployment, monitoring, and optimization of robotic systems. Its platform enables seamless integration across multi-vendor fleets, improving efficiency and reliability.

    Source: InOrbit

  • Cincoze Launches Compact Rugged Computers for Factory, Military Use

TAIPEI, Taiwan, Jan 27, 2025 – Cincoze has launched the latest addition to its rugged DIAMOND computing product line: the entry-level, compact industrial computer DC-1300 series. It packs enhanced performance and expansion flexibility into a compact 185 x 131 x 56.5 mm chassis. The new-generation DC-1300 series can be equipped with an Intel Alder Lake-N processor, supports the essential I/O interfaces for industrial applications, and provides complete wireless transmission solutions (Wi-Fi, 5G, GNSS, etc.). The DC-1300 continues the legacy of the DC series for successful integration in space-limited industrial automation applications.

    4.5X Performance and Multiple Storage Options

    The new generation DC-1300 series supports the latest Alder Lake-N platform Intel Core i3-N305 processor, with computing performance of 4.5 times that of the previous generation. The DC-1300 supports up to 16GB DDR5 4800MHz in a single memory slot. It offers flexible storage options, such as 2.5-inch SATA HDD/SSD or Half-Slim SSD. Additionally, it supports expansion via the M.2 slot for NVMe SSD to meet the needs of different application scenarios.

    Innovative Stackable Expansion Box Design

    The DC-1300 has a set of native I/O interfaces, including LAN, USB, COM, and 4K DisplayPort, and can expand COM, DIO, display, and IGN functions through Cincoze’s exclusive CMI/CFM modules. Cincoze has launched a new Stackable Expansion Box (SEB) for the DC-1300 series. The SEB uses the built-in dual M.2 B Key slots to support I/O, CANbus, and Fieldbus modules. This design enhances its capabilities for smart manufacturing, transportation, energy management, and other applications.

    Rugged, Compact, and Flexible Deployment

    The DC-1300 series is compact (185 x 131 x 56.5 mm) and designed for space-limited factory automation applications. It supports various installation methods, including wall mounting, side mounting, and DIN rail mounting, for convenient and flexible deployment in machines, control cabinets, or automated guided vehicles. It supports wide temperature (-40 to 70°C) and wide voltage (9-48 VDC). It also complies with EMC standards (IEC 61000-6-2/4) and the US military shock standard (MIL-STD-810H) to ensure stable operation in harsh industrial environments.

    About Cincoze

    Cincoze was established in 2012 and is headquartered in New Taipei City, Taiwan. It specializes in the design and manufacture of rugged embedded computing systems tailored for edge computing and AIoT applications. Their product portfolio includes rugged embedded computers, convertible embedded systems, industrial panel PCs and monitors, and embedded GPU computers. These solutions serve industries such as manufacturing, intelligent transportation, railway computing, in-vehicle computing, warehouse and logistics, energy, kiosks, and security and surveillance. Cincoze has experienced rapid and steady growth, maintaining a compound annual growth rate (CAGR) of 40% for eight consecutive years. The company has expanded its global presence with facilities in the USA, China, and Germany, and a network spanning over 40 countries with more than 50 value-added partners.

    Source: Cincoze

  • Innatera T1 SNP: A Tiny Chip on a Big Mission

The hardware that makes AI possible is hidden away in data centers, huge nondescript buildings that, because of security concerns, may not reveal their purpose. But their thermal signature is a dead giveaway. So much energy is needed to cool the servers inside that future mega-sized data centers must be sited near big energy production facilities, such as hydroelectric dams. Even that may not be enough: operators are on the verge of bringing back nuclear reactors.

To counter the surge required by present and future data centers is a small but growing movement to reduce the need for computing done by data centers. By doing the computing on the edge, rather than in the (data) center, could we not stop the madness of building ever bigger data centers? OpenAI, the company that created ChatGPT, has plans to build several 5 GW data centers, 30 times the size of the biggest data centers currently in existence. Run continuously, 5 GW amounts to more than twice the electricity all of Iceland consumes in a year!
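To put that 5 GW figure in perspective, here is a rough back-of-the-envelope calculation. The Iceland figure used below (roughly 19 TWh of electricity per year) is an approximate, commonly cited number, used here only for scale:

```python
# Back-of-the-envelope: what does a 5 GW data center consume in a year?
# Assumption: Iceland's annual electricity consumption is roughly 19 TWh
# (an approximate public figure, used here for scale only).

DATA_CENTER_POWER_GW = 5.0
HOURS_PER_YEAR = 8760  # 24 * 365

annual_twh = DATA_CENTER_POWER_GW * HOURS_PER_YEAR / 1000  # GWh -> TWh
ICELAND_TWH_PER_YEAR = 19.0

print(f"5 GW running continuously: {annual_twh:.1f} TWh/year")
print(f"That is roughly {annual_twh / ICELAND_TWH_PER_YEAR:.1f}x Iceland's annual electricity use")
```

Note that gigawatts measure power while terawatt-hours measure energy; the comparison only works once the 5 GW is multiplied out over a year of continuous operation.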

    The argument for more edge processing is sound. Sensors, such as those in consumer devices and industrial equipment, emit a steady stream of data that is currently processed by hordes of GPUs on neural network chips in data center servers. These servers are always on, always demanding energy and generating heat. Yet, most of the data they are processing is just noise.

    Filtering out the noise at the edge makes sense. And if the useful signal remaining could be processed on the edge, we could actually downsize our need for data centers.

    That’s the theory behind a new crop of small spiking neural processors (SNPs) in a nutshell – literally. A nutshell could actually hold several SNPs.

    Hey, Buddy. Want to Buy a SNP?

    I was shown my first SNP in a hotel room.

I meet Innatera’s chief commercial officer, Shreyas Derashri, and CEO, Sumeet Kumar, in a suite at the Venetian Hotel in Las Vegas. As companies sometimes do, Innatera has booked a room instead of a booth on the show floor. Smart. They have my undivided attention. The show floors are a circus. A suite lets the company set up rooms for different applications of their chip.

A paperweight to thank me for visiting? Look closely. What appears to be a gold flake is the smallest of SoCs (systems on a chip), measuring just 6 x 6 mm.

The Innatera team has traveled to Las Vegas from headquarters in the Netherlands to show off the first-ever always-on spiking neural processor, the T1. Tiny beyond belief, at just 6 mm square, the T1 is indeed a marvel of miniaturization. Its diminutive size gives it a big advantage over NVIDIA’s processors on the big stage at CES. Yet, this little chip can do quite a bit of what its more famous counterpart can do with just 1/500 the power.

    What Can It Do?

    The T1’s lower-than-low power consumption opens up a flood of new product improvements. Think of portable devices, such as smartphones, wearables (like Fitbit and smartwatches), smart speakers, and smart appliances, hundreds of which are on display a few floors below, all able to last for weeks or months without being recharged or having to replace their batteries. Think also of commercial buildings and smart homes loaded with battery-operated smart and connected devices, such as doorbell cams, smoke alarms, thermostats, people counters, security systems, etc., that don’t have to be wired or have their batteries changed from the top of tall ladders.  

    In all these smart devices, sensors abound. A home smart speaker may have two sensors, a commercial building, hundreds. Innatera estimates that four billion smart devices will be added each year.

    I am reminded of my smart doorbell, which seems to run low on charge at the most inopportune times, such as during my vacation, and uses its precious battery to inform me of stray cats, blowing leaves, passing cars, and other things I care little about. Could it just let me know of package delivery or who is ringing the bell?

    How SNPs Work

    Neurons firing in a fish brain is an example of a biological spiking neural network.

    A spiking neural processor, such as the one developed by Innatera, is inspired by the functioning of biological neurons in the brain. Instead of continuously processing information like traditional neural processors, SNPs become active only when they encounter discrete, event-driven signals called “spikes,” e.g., a change in input signal or sensor data. This mimics how biological neurons fire in response to stimuli. The spikes, which carry information, are recorded temporally (via timing and sequence) rather than in continuous values. SNPs ignore the ever-present noise altogether, much as your brain ignores the sensations of the shirt on your back.
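The event-driven principle described above can be sketched in a few lines of code. The following is a minimal, illustrative leaky integrate-and-fire (LIF) neuron in Python, not Innatera's actual design and with made-up parameter values: the membrane potential leaks over time, incoming spikes accumulate, and the neuron fires only when a threshold is crossed. Silence, the noise floor, costs nothing:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- an illustrative sketch of
# the event-driven principle behind SNPs, not Innatera's implementation.
# The leak, weight, and threshold values here are arbitrary.

def lif_neuron(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    """Return the time steps at which the neuron fires.

    input_spikes: a sequence of 0/1 events, e.g. thresholded sensor changes.
    The membrane potential decays by `leak` each step and resets on firing.
    """
    potential = 0.0
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike  # integrate + leak
        if potential >= threshold:                     # fire only on threshold
            output_spikes.append(t)
            potential = 0.0                            # reset after spiking
    return output_spikes

# A burst of closely spaced events drives the neuron over threshold and
# produces an output spike; sparse, isolated events decay away as noise.
print(lif_neuron([1, 1, 1, 0, 0, 0, 1, 0, 0, 0]))  # -> [2]
print(lif_neuron([1, 0, 0, 0, 1, 0, 0, 0]))        # -> []
```

The key point is that between spikes the neuron does no work at all, which is where the energy savings of event-driven processing come from.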

SNPs excel at real-time processing of sensory data, such as audio, visual, or tactile inputs, with minimal latency. We see an example of a song being recognized by its audio signature right at the sensor. This makes SNPs well suited to recognizing complex data patterns, particularly in environments where data is sparse and episodic.

    Living on the Edge

Innatera’s SNPs are fundamentally different from, and more efficient for specific tasks than, the GPU-based architectures used for large language models (LLMs), which by comparison consume unnecessary energy since they are always on and always processing.

    Innatera’s chips are designed to work on the edge, in remote areas where power is scarce and what little power is needed can be supplied by batteries.

Edge processing not only saves the energy typically required to transmit data from the field to the data center, but it also eliminates latency, delivering output for AI systems without pause. Conventional neural processors’ energy requirements mean they must be plugged in and can therefore sit far from the sensors, which causes latency problems.

    The Innatera T1 SNP SoC requires less than 10 mW of power. Being such a little drain on a battery lets a remote sensor operate for months.
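As a rough illustration of how a milliwatt-scale draw translates into battery life, consider the sketch below. The battery size (1000 mAh at 3.7 V) and the 1 mW average draw (a sub-10 mW peak chip, heavily duty-cycled between events) are my assumptions for the example, not Innatera specifications:

```python
# Battery-life estimate for a low-power edge sensor -- illustrative only.
# Assumptions (not Innatera specs): a 1000 mAh / 3.7 V Li-ion cell, and an
# average draw of 1 mW for a sub-10 mW chip duty-cycled between events.

BATTERY_CAPACITY_WH = 1.0 * 3.7  # 1000 mAh (= 1.0 Ah) at 3.7 V = 3.7 Wh
AVERAGE_DRAW_W = 0.001           # 1 mW average

hours = BATTERY_CAPACITY_WH / AVERAGE_DRAW_W
print(f"{hours:.0f} hours, or about {hours / 24 / 30:.1f} months")
```

Under these assumptions a single small cell lasts roughly five months, which is the order of magnitude behind the "months on a battery" claim; a mains-powered GPU drawing hundreds of watts would drain the same cell in well under a minute.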

The T1 SNP is able to filter the waving leaves of a houseplant from a room cam.

Innatera showed several applications. One was a people counter in which an FIR (far-infrared) sensor backed by an SNP was able to process the thermal shapes of people in the room. In another room, a radar system was able to filter out the leaves of a houseplant being blown around by a fan. Both applications ensure privacy and minimize power use, as they process entirely on the device and, unlike vision-based systems, work from low-resolution images.

Innatera vs NVIDIA: An Unfair Comparison

NVIDIA neural processors differ significantly from Innatera spiking neural processors. NVIDIA’s A100 and H100, for example, are general-purpose AI processors optimized for matrix computations. NVIDIA has explored neuromorphic computing but focuses on simulating SNPs on GPUs rather than building dedicated neuromorphic chips. NVIDIA GPUs are also much larger than Innatera chips and consume far more power (100–700 W).

    And then there is the cost. NVIDIA’s H100 costs roughly $25,000 each. (Innatera has not yet released pricing for the T1 SNP).

    The Competition

NVIDIA offers low-power chips meant for edge use, such as its Jetson Nano and Jetson Xavier. Still, with power consumption of 10 W and 30 W, respectively, there is no comparison to Innatera’s sub-milliwatt chip.

    Innatera is not alone in making spiking neural network chips for edge devices, but a quick survey of the competition showed Innatera to be among the smallest and most energy efficient.

Intel has the Loihi, which operates at relatively low power for its capabilities (~50 mW per chip under typical workloads). IBM’s TrueNorth, with its simulation of a million neurons, a large form factor, and 70 mW power consumption, is better deployed in a data center or research applications than on the edge.

GrAI Matter Labs’ GrAI One chips are larger than Innatera’s chips, as they prioritize computational capabilities for edge-cloud hybrid systems, and are more power-hungry, requiring a slightly higher power level (10–50 mW).

SynSense chips are compact and tailored for real-time sensory processing, making them similar in size to Innatera’s chips. Their Dynap-SE consumes around 1 mW for sensory processing tasks, comparable to Innatera’s chips. SynSense emphasizes vision-based tasks, while Innatera is versatile across sensory modalities like sound, motion, and biological signals.

BrainChip’s Akida AKD1000 “reference chip,” which uses Akida’s “neuron fabric,” may be Innatera’s closest competition.

    About Innatera Nanosystems

    Sumeet Kumar, CEO of Innatera, at CES 2025.

    I wrap up the meeting with a brief chat with Dr Sumeet Kumar. Dr Kumar earned both his master’s and PhD in microelectronics from Delft University of Technology in the Netherlands. He went on to manage research programs at Delft aimed at developing next-generation compute hardware for highly automated vehicles. Dr Kumar worked at Intel, focusing on developing domain-specific tools for complex media processor architectures. He founded Innatera in 2018.

Dr Kumar is a gentleman, pleasant, well dressed, and exudes an air of quiet confidence. We expect someone who thinks on the scale of microns, but Dr Kumar thinks of the whole world. He explains how SNPs like Innatera’s, small as they are, could affect climate change. If edge devices simply ignored the torrent of noise from sensors and processed only spikes, current data centers would be relieved and produce less heat. Data centers working at less than full capacity would obviate the need for more data centers.

    The Dutch-based Innatera Nanosystems was spun out of Delft University of Technology in 2018 to develop ultra-low power neuromorphic processors that mimic the brain’s mechanisms for processing sensory data. The company is headquartered in Rijswijk, Netherlands and maintains a design center in Bangalore, India.

    Innatera has over 80 employees. It is a private company and, as such, is not required to disclose revenue.

    Innatera raised a total of €25 million in funding over two rounds. In November 2020, the company secured €5 million in seed funding. This round was led by Munich-based deep-tech investors MIG Verwaltungs AG and the Industrial Technologies Fund of btov.

    In 2024, Innatera raised an additional €20 million in an oversubscribed Series A funding round, including investments from Invest-NL Deep Tech Fund, the EIC Fund, MIG Capital, and Matterwave Ventures.

  • BOXX Announces Compatibility with NVIDIA GeForce RTX 50 GPUs

AUSTIN, TX, Jan 24, 2025 – BOXX Technologies has announced that, as a supplier of NVIDIA-Certified Systems, select BOXX products will support the new NVIDIA GeForce RTX 50 Series GPUs as they become available. The new NVIDIA Blackwell architecture GPUs combine the latest-generation RT Cores and Tensor Cores with GDDR7 memory, increased clock speed, and VRAM to deliver improved AI, graphics, rendering, and ray-tracing performance.

    “Our support for the latest NVIDIA GeForce RTX 50 Series technology is essential because these GPUs accelerate application performance and creative workflows,” said BOXX CEO Kirk Schell. “Now video editors, VFX artists, animators, architects, and other content creators can take advantage of all AI has to offer and design, render, collaborate, and meet project deadlines faster than ever before.”

The NVIDIA GeForce RTX 50 Series GPUs supported by BOXX systems feature the latest Blackwell technology for accelerated AI and ray tracing, as well as up to 1.8 TB/s of GDDR7 memory bandwidth, to power:

    • Faster content creation
    • Multi-application workflows
    • Improved AI and machine learning support

The new GPU series also offers 33% more VRAM than the previous generation, enabling users of Adobe Creative Cloud, DaVinci Resolve, Cinema 4D, Revit, Rhino, and other applications supported by NVIDIA Studio Drivers to optimize creative tasks like:

    • Next-gen raytracing & AI-powered graphics
    • AI-assisted video editing and rendering
    • Real-time 8K video editing

To accelerate V-Ray, Autodesk Arnold, Lumion, and other 3D renderers supported by NVIDIA Studio Drivers, the new NVIDIA GeForce RTX 50 Series GPUs inside BOXX systems feature DLSS 4 with Multi Frame Generation, which multiplies frame rates by up to eight times over traditional rendering while delivering superior image quality.

In addition to the new NVIDIA GeForce RTX GPUs, these select BOXX products include multi-core Intel Core Ultra, AMD Ryzen 9000, or AMD Ryzen Threadripper and Intel Xeon W processors, liquid cooling, ample memory, and plenty of hard drives.

    “Demanding graphics and workflows require powerful, purpose-built solutions,” added Schell, “and NVIDIA GeForce RTX 50 Series GPUs supported by innovative BOXX solutions give creators the performance they need to run the latest 3D and AI-accelerated applications.”

    About BOXX

    Founded in 1996 and headquartered in Austin, Texas, BOXX Technologies specializes in high-performance computer workstations tailored for professionals in architecture, engineering, product design, visual effects, animation, and data science. Their product lineup includes deskside and rack-mounted workstations, rendering systems, and servers, all designed to accelerate workflows in industries such as media and entertainment, manufacturing, and government. BOXX’s APEXX series, for instance, offers performance-tuned, liquid-cooled systems featuring the latest Intel Core Ultra and AMD Ryzen processors, delivering unparalleled speed and reliability. The company maintains its design, manufacturing, and support operations at its Austin headquarters, ensuring quality control and customer service excellence. With a global presence through 40 international resellers, BOXX provides purpose-built solutions that enhance productivity and meet the demanding requirements of creative professionals worldwide. As of 2024, BOXX Technologies employs approximately 67 individuals and has an estimated annual revenue of $24.1 million.

    Source: BOXX

  • MediaTek, Cadence Partner to Accelerate 2nm Design Process

    SAN JOSE, CA, Jan 23, 2025 – Cadence has announced that MediaTek has adopted the AI-driven Cadence Virtuoso Studio and Spectre X Simulator on the NVIDIA accelerated computing platform for its 2nm development. As design size and complexity continue to escalate, advanced-node technology development has become increasingly challenging for SoC providers. To meet the aggressive performance and turnaround time (TAT) requirements for its 2nm high-speed analog IP, MediaTek is leveraging Cadence’s proven custom/analog design solutions, enhanced by AI, to achieve a 30% productivity gain.

    “As MediaTek continues to push technology boundaries for 2nm development, we need a trusted design solution with strong AI-powered tools to achieve our goals,” said Ching San Wu, corporate vice president at MediaTek. “Closely collaborating with Cadence, we have adopted the Cadence Virtuoso Studio and Spectre X Simulator, which deliver the performance and accuracy necessary to achieve our tight design turnaround time requirements. Cadence’s comprehensive automation features enhance our throughput and efficiency, enabling our designers to be 30% more productive.”

    MediaTek has used the Virtuoso ADE Suite to add its AI-based optimization algorithm to streamline future product development. This has helped its designers work more efficiently on circuit designs. Cadence’s Spectre X running on NVIDIA H100 GPUs delivers the same accuracy as Spectre X running on CPUs while delivering up to a 6X performance improvement for post-layout simulations of large, advanced-node designs.

    “Improved performance and efficiency are key to advancing today’s complex chip design processes,” said Dion Harris, director of accelerated computing at NVIDIA. “With Cadence’s Spectre X running on NVIDIA Hopper GPUs, companies like MediaTek can accelerate the verification of their complex post-layout designs, maximize analog circuit simulation performance and reduce time to market.”

    MediaTek’s analog layout team now uses the Virtuoso Layout Suite device-level router for custom digital blocks in 2nm technology, improving layout efficiency. Additionally, MediaTek is leveraging AI and Virtuoso’s open platform to create a prototyping placement and low-power prediction process. This approach improves design productivity by 30%.

    “MediaTek’s validation of our latest Virtuoso Studio release and Spectre X Simulator on NVIDIA’s accelerated computing platform demonstrates that Cadence’s continued investment in enhancing our industry-leading custom design solutions and AI tools is a game changer for our customers’ most challenging 2nm designs,” said Vinod Kariat, corporate vice president and general manager of the Custom Products Group at Cadence. “Bringing the power of AI and GPUs to Spectre X enables MediaTek to solve its large-scale verification simulation challenges even more quickly, without sacrificing accuracy.”

    Source: Cadence

  • Keysight Launches Chiplet PHY Designer 2025 for AI, Data Center Apps

    Keysight Launches Chiplet PHY Designer 2025 for AI, Data Center Apps

    SANTA ROSA, CA, Jan 23, 2025 – Keysight Technologies has announced the launch of Chiplet PHY Designer 2025, its latest solution for high-speed digital chiplet design tailored to AI and data center applications. The enhanced software introduces simulation capabilities for the Universal Chiplet Interconnect Express (UCIe) 2.0 standard and adds support for the Open Compute Project Bunch of Wires (BoW) standard. As an advanced, system-level chiplet design and die-to-die (D2D) design solution, Chiplet PHY Designer enables pre-silicon validation, streamlining the path to tapeout.

    As AI and data center chips grow more complex, reliable communication between chiplets becomes crucial for performance. The industry is addressing this challenge through open, emerging standards like UCIe and BoW that define the interconnects between chiplets within an advanced 2.5D/3D package. By adopting these standards and verifying chiplet compliance, designers expand chiplet interoperability while lowering semiconductor development costs and risks.

    Key Benefits of the Chiplet PHY Designer 2025:

    • Ensures Interoperability: Verifies designs meet UCIe 2.0 and BoW standards, enabling seamless integration across advanced packaging ecosystems.
    • Accelerates Time-to-Market: Automates simulation and compliance testing setup, such as Voltage Transfer Function (VTF), simplifying chiplet design workflows.
    • Improves Design Accuracy: Provides insight into signal integrity, bit error rate (BER), and crosstalk analysis, reducing risks of costly silicon re-spins.
    • Optimizes Clocking Designs: Supports advanced clocking scheme analysis, such as quarter data rate (QDR), for precise synchronization in high-speed interconnects.

    Hee-Soo Lee, high-speed digital segment lead, Keysight EDA, said: “Keysight EDA launched Chiplet PHY Designer one year ago as the industry’s first pre-silicon validation tool to provide in-depth modeling and simulation capabilities; this enabled chiplet designers to rapidly and accurately verify that their designs meet specifications before tapeout. The latest release keeps pace with evolving standards like UCIe 2.0 and BoW while delivering new features, such as the QDR clocking scheme and systematic crosstalk analysis for single-ended buses. Engineers using Chiplet PHY Designer save time and avoid costly rework, ensuring their designs meet performance requirements before manufacturing. Early adopters, like Alphawave Semi, attest that Chiplet PHY Designer ensures seamless operation and interoperability for 2.5D/3D solutions available to their chiplet customers.”

    Source: Keysight

  • ionstream.ai Expands Cloud GPU Platform with NVIDIA L40S

    ionstream.ai Expands Cloud GPU Platform with NVIDIA L40S

    HOUSTON, TX, Jan 16, 2025 – ionstream.ai has announced the immediate availability of NVIDIA L40S GPUs on its GPU as a Service (GaaS) platform. This strategic expansion provides organizations with a cost-effective solution optimized for AI inference and fine-tuning tasks, offering an alternative to larger, more expensive GPU options.

    “Organizations are looking for right-sized GPU solutions that match their specific AI workloads,” said Jeff Hinkle, chief executive officer at ionstream.ai. “The addition of the NVIDIA L40S to our cloud platform provides enterprises with the ideal infrastructure for inference and model refinement tasks, delivering the perfect balance of performance and cost-efficiency.”

    Enterprise-Grade AI Infrastructure, On Demand

    The NVIDIA L40S GPU, powered by the Ada Lovelace architecture, represents a breakthrough in AI infrastructure accessibility. ionstream.ai’s implementation delivers:

    • Advanced AI Capabilities:
      • Optimized for AI inference and fine-tuning workflows
      • Ideal for production-scale model deployment
      • Cost-effective alternative to H100 and H200 GPUs for inference tasks
      • Multi-user support for enterprise workloads
    • Revolutionary Cost Economics:
      • Right-sized infrastructure for inference workloads
      • Improved energy efficiency for sustainable operations
      • Zero upfront capital expenditure
      • Pay-as-you-go pricing with per-minute billing

    Transforming Enterprise AI Capabilities

    The L40S platform enables efficient AI model deployment across industrial domains:

    • Oil & Gas Exploration: Process complex seismic data through high-performance computing capabilities, enabling rapid subsurface imaging and reservoir characterization. The L40S accelerates traditional seismic processing workflows while supporting emerging AI-enhanced interpretation methods, reducing time-to-insight for critical exploration decisions.
    • Healthcare & Life Sciences: Deploy medical imaging models and fine-tune diagnostic systems
    • Financial Services: Run real-time inference for fraud detection and risk analysis
    • Automotive & Manufacturing: Power production-ready computer vision applications

    Flexible Deployment Options Meet Enterprise Needs

    ionstream.ai’s platform offers deployment flexibility:

    • Instant Provisioning: Deploy L40S instances in under 60 seconds
    • Flexible Acquisition Options: Available for purchase or lease to meet varying business needs
    • Enterprise-Grade Infrastructure: Hosted in a Tier 4-designed data center in Spring, Texas, to provide optimal uptime
    • 24/7 Expert Support: Direct access to GPU infrastructure specialists

    Availability and Special Launch Offer

    The NVIDIA L40S is available on the ionstream.ai platform. For a limited time, new customers can receive:

    • Complimentary one-month proof of concept for qualified enterprises
    • Complimentary AI infrastructure optimization consultation

    Source: ionstream