Gooxi Unveils Latest AI Server: Accelerating Deployments of Generative AI to Unleash AI Potential
Gooxi has launched its newest all-in-one AI server, designed with ample memory capacity and flexible high-speed interconnect options to suit a wide range of AI application scenarios. With extensive expansion-slot support, it significantly boosts intelligent-computing performance, giving enterprises an optimal balance of performance and cost for deploying model training and inference workloads.
As generative AI technology rapidly advances and large AI models see wider deployment, these models are evolving from entertainment novelties into production tools for solving everyday problems. Computing power is the underlying infrastructure of this era of large models, and as demand for AI compute broadens into demand for general-purpose compute, AI models and application scenarios continue to diversify and grow more complex. To meet these upgraded training-compute demands and help users quickly establish efficient AI application environments, Gooxi has introduced a new AI training-and-inference integrated machine: the Gooxi Intel Eagle Stream platform 4U 8-GPU server. It combines a leading architecture, powerful compute, and flexible expansion capabilities, providing robust computing support for a wide range of AI applications.
As a leading domestic server solution provider, Gooxi brings rich technical experience and strong R&D capabilities. The Gooxi Intel Eagle Stream platform 4U 8-GPU server is designed for large-scale AI training and inference, featuring a modular design, up to 8TB of memory, and a maximum of 16 PCIe 5.0 expansion slots.
In the CPU-GPU pass-through configuration, the server supports up to eight 600W mainstream high-performance enterprise-grade double-width GPUs, meeting the power requirements of next-generation GPUs and reducing platform upgrade costs for users. Because this topology needs no PCIe Switch chip, it is also more cost-effective. The front drive bays can be flexibly configured with 8, 12, 16, or 24 slots for 3.5" or 2.5" SATA/SAS 4.0/Gen5 NVMe drives, providing massive storage capacity and strong data read/write performance.
The alternative topology, GPU-GPU interconnection through a PCIe Switch chip, is better suited to peer-to-peer (P2P) scenarios: with PCIe 5.0 x32 bandwidth between the Switch and the CPU, it can satisfy high-performance GPU computing requirements and significantly improve model response speed during LLM inference. In this configuration the server supports up to ten 600W mainstream high-performance enterprise-grade double-width GPUs.
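As a rough sense of what a PCIe 5.0 x32 link provides: each PCIe 5.0 lane signals at a nominal 32 GT/s with 128b/130b encoding, so a x32 link works out to roughly 126 GB/s per direction. A minimal back-of-envelope sketch (nominal figures only; real throughput is lower once packet and protocol overhead are accounted for):

```python
# Back-of-envelope PCIe 5.0 bandwidth estimate (nominal figures,
# ignoring TLP/DLLP protocol overhead, so real throughput is lower).

GT_PER_S = 32          # PCIe 5.0 raw signalling rate per lane (GT/s)
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency

def pcie5_bandwidth_gbps(lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe 5.0 link."""
    return GT_PER_S * ENCODING / 8 * lanes  # divide by 8: bits -> bytes

print(f"x16: {pcie5_bandwidth_gbps(16):.1f} GB/s")  # ~63 GB/s
print(f"x32: {pcie5_bandwidth_gbps(32):.1f} GB/s")  # ~126 GB/s
```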
The Gooxi Intel Eagle Stream platform 4U 8-GPU server supports running models with trillions of parameters. On the framework and algorithm side, it supports mainstream AI frameworks such as PyTorch, TensorFlow, Caffe, and MXNet, as well as popular development tools such as DeepSpeed, making low-level adaptation more efficient and convenient and enabling seamless ecosystem migration.
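To put the trillion-parameter claim in context against the platform's 8TB memory ceiling, a rough weight-only footprint estimate is straightforward: parameter count times bytes per parameter. A sketch under that simplifying assumption (it deliberately excludes activations, gradients, optimizer state, and KV cache, all of which add substantially to the real footprint):

```python
# Rough weight-only memory footprint for a large model.
# Illustrative only: excludes activations, gradients, optimizer
# states, and KV cache, which add substantially in practice.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def weights_tb(num_params: float, dtype: str = "fp16") -> float:
    """Approximate weight storage in terabytes (1 TB = 1e12 bytes)."""
    return num_params * BYTES_PER_PARAM[dtype] / 1e12

# A 1-trillion-parameter model in fp16 needs about 2 TB just for
# its weights, well within an 8 TB system-memory budget.
print(f"{weights_tb(1e12, 'fp16'):.1f} TB")
```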
Against the backdrop of continuously evolving and accelerating computing-power demand in the AIGC era, Gooxi will continue to meet enterprises' needs for higher compute performance, higher memory bandwidth, and greater scalability in model training and inference, helping users deploy and accelerate their AI applications and drive the intelligent transformation of their businesses.