Inspur Launches New AI Servers to Support Latest NVIDIA A100 PCIe Gen 4 at ISC20
22.6.2020 10:07:00 EEST | Business Wire | Press release
Inspur, a leading data center and AI full-stack solutions provider, has released the NF5468M6 and NF5468A5 AI servers, which support the latest NVIDIA A100 PCIe Gen 4 GPU, at ISC High Performance 2020. The new servers will provide AI users around the world with an AI computing platform that combines superior performance and flexibility.
Thanks to its agile and strong product design and development capabilities, Inspur is among the first in the industry to support the NVIDIA A100 Tensor Core GPU and to build a comprehensive, competitive next-generation AI computing platform. The A100 GPU brings unprecedented versatility by accelerating a full range of precisions, from FP32 to FP16 to INT8 and all the way down to INT4. This includes the new TF32 precision, which works like FP32 while providing 20X higher FLOPS for AI without requiring any code change. In addition, the NVIDIA A100 offers Multi-Instance GPU technology, which enables a single GPU to be partitioned into up to seven hardware-isolated instances that work on multiple networks simultaneously. Inspur’s two new products with the NVIDIA A100, the NF5488M5-D and NF5488A5, have already entered mass production.
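The reason TF32 needs no code change is its bit layout: it keeps FP32’s 8-bit exponent (so the numeric range matches FP32) but carries only 10 mantissa bits, similar to FP16 precision. The following Python sketch is our own illustration, not NVIDIA code; it emulates TF32 rounding by dropping the 13 low mantissa bits of an FP32 value (ignoring NaN/infinity edge cases for brevity):

```python
import struct

def round_to_tf32(x: float) -> float:
    """Round an FP32 value to TF32 precision (illustrative sketch).

    TF32 keeps FP32's 8-bit exponent but only the top 10 of the
    23 FP32 mantissa bits, so the value range matches FP32 while
    the precision is close to FP16.
    """
    # Reinterpret the float as its 32-bit IEEE-754 pattern.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits += 1 << 12           # round to nearest (ties away from zero)
    bits &= ~((1 << 13) - 1)  # clear the 13 mantissa bits TF32 drops
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Values representable in 10 mantissa bits pass through unchanged;
# finer detail is rounded away.
print(round_to_tf32(1.0 + 2**-10))  # kept: 1.0009765625
print(round_to_tf32(1.0 + 2**-12))  # rounded back down to 1.0
```

Because the exponent field is untouched, existing FP32 code keeps its dynamic range, which is why frameworks can switch matmuls to TF32 transparently.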
The newly released NF5468M6 and NF5468A5 feature many innovative designs and strike a balance between superior performance and flexibility, meeting increasingly complex and diverse AI computing needs. Both servers offer superb computing performance for high-performance computing and cloud application scenarios.
The NF5468M6 and NF5468A5 accommodate eight double-width A100 PCIe cards in a 4U chassis. Both support the latest PCIe Gen4 with 64GB/s of bi-directional bandwidth, a 100% increase over PCIe Gen3 at the same power consumption. This performance meets the requirements of the most complex challenges in data science, high-performance computing, and artificial intelligence. In addition, 40GB of HBM2 memory increases memory bandwidth by 70% to 1.6TB/s, allowing users to train larger deep learning models. A unique NVIDIA NVLink bridge design provides peer-to-peer (P2P) performance of up to 600GB/s between two GPUs, significantly increasing training efficiency.
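As a back-of-envelope check (ours, not from the release), the quoted 64GB/s figure follows from PCIe Gen4’s 16 GT/s per-lane signaling rate and 128b/130b line coding across a x16 slot:

```python
# Rough PCIe Gen4 x16 bandwidth estimate, assuming the standard
# 16 GT/s per-lane rate and 128b/130b encoding efficiency.
GT_PER_S = 16           # Gen4 raw transfer rate per lane (Gb/s)
ENCODING = 128 / 130    # 128b/130b line-coding efficiency
LANES = 16

per_direction_gbs = GT_PER_S * ENCODING * LANES / 8  # GB/s, one way
bidirectional_gbs = 2 * per_direction_gbs

print(round(per_direction_gbs, 1))   # ~31.5 GB/s each direction
print(round(bidirectional_gbs))      # ~63 GB/s, marketed as 64GB/s
```

Gen3 runs at 8 GT/s per lane, so halving GT_PER_S reproduces the 100% bandwidth increase the release cites.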
Furthermore, two other leading Inspur AI servers, the NF5468M5 and NF5280M5, also support the NVIDIA A100 PCIe Gen 4 GPU.
As the world’s leading AI server manufacturer, Inspur offers an extensive range of AI products and works closely with AI customers to improve AI application performance in different scenarios such as voice, semantics, image, video, and search.
About Inspur
Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20200622005207/en/
Contact information
Fiona Liu
Liuxuan01@inspur.com
About Business Wire
For more than 50 years, Business Wire has been the global leader in press release distribution and regulatory disclosure.