Asus ESC8000A-E13P 4U Rackmount Barebones System
Eight GPU LLM Server

Eight GPU Large Language Model Server

Powerful 4U rackmount server supporting up to eight NVIDIA GPUs for training, fine-tuning, and running inference with large language models.

Overview

Eight GPU 4U server supporting NVIDIA RTX PRO™ and H200 NVL GPUs

  • Up to 1,128 GB of VRAM across eight GPUs
  • Well suited to fp16 inference of models up to roughly 150B parameters and to fine-tuning smaller models (see the sizing sketch below)
  • Requires four 200-240V power connections on separate circuits
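
As a rough illustration of the sizing claim above, here is a minimal back-of-the-envelope sketch. The 141 GB per-GPU figure corresponds to H200 NVL, but the 30% KV-cache/activation allowance and the example model sizes are assumptions for illustration only; real serving memory depends on batch size, context length, and the inference stack.

```python
# Rough fp16 VRAM sizing sketch. The overhead factor and example model sizes
# are illustrative assumptions, not measurements of this system.

def fp16_weight_gb(params_billion: float) -> float:
    """Approximate weight memory in GB at fp16/bf16: 2 bytes per parameter."""
    return params_billion * 2.0  # 1e9 params x 2 bytes = 2 GB per billion params

def fits(params_billion: float, total_vram_gb: float, overhead: float = 1.3) -> bool:
    """Crude check: weights plus an assumed ~30% allowance for KV cache and activations."""
    return fp16_weight_gb(params_billion) * overhead <= total_vram_gb

if __name__ == "__main__":
    total_vram_gb = 8 * 141  # eight H200 NVL GPUs at 141 GB each = 1,128 GB
    for size_b in (70, 150, 500):
        need = fp16_weight_gb(size_b) * 1.3
        print(f"{size_b}B params at fp16: ~{need:.0f} GB needed, fits = {fits(size_b, total_vram_gb)}")
```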

Not sure what you need?

Contact us and one of our experts will reply within 1 business day to help configure the right computer for your workflow. If you don’t see what you are looking for here, check out our other systems for more options.

System Core



This system includes dual onboard 10Gb RJ45 ports as well as a dedicated remote management port. Up to five additional network adapters can also be selected.

Storage


Chassis & Cooling


This system requires four 200-240V power connections. For PSU redundancy to be effective, the system's total power draw must remain below the combined maximum output of two of the four PSU modules.
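
As a minimal illustration of the redundancy rule above: with four PSU modules in a 2+2 arrangement, the system stays redundant only while two modules alone can carry the full load. The per-module wattage below is a placeholder, not the actual rating of this system's power supplies.

```python
# 2+2 PSU redundancy sketch. PSU_WATTS is a placeholder value, not this
# system's actual power supply rating.
PSU_WATTS = 3000  # hypothetical per-module rating

def is_redundant(total_draw_watts: float, psu_watts: float = PSU_WATTS) -> bool:
    """Redundant only if two of the four modules can supply the entire load."""
    return total_draw_watts <= 2 * psu_watts

print(is_redundant(5200))  # True with the placeholder rating: 5200 W <= 6000 W
print(is_redundant(6800))  # False: two modules alone cannot cover 6800 W
```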

Software


Also available with Windows Server 2022. Please contact us for licensing costs.

Accessories


For H200 NVL only, please select an NVLink option matching the number of GPUs installed.

Additional Information

Help us help you! We review each configuration to ensure you’re getting the right hardware. Any information you can share about your workflow and software will help us give you a better experience.


System Cost


Typically ships in 1-2 weeks

Contact us for lead times

Contact us for quotes for more than 100 units