DGX Station A100

The DGX Station A100 is the successor to the original DGX Station and aims to fill the same niche: a quiet, efficient, turnkey cluster-in-a-box solution that can be purchased, leased, or rented by smaller companies or individuals who want to utilize machine learning. It follows many of the design choices of the original DGX Station, such as the tower orientation and single-socket CPU mainboard, while adding a new refrigerant-based cooling system and a reduced number of accelerators compared to the corresponding rackmount DGX A100 of the same generation.

DGX A100 Server

Announced and released on May 14, 2020, the DGX A100 was the third generation of DGX server, including eight Ampere-based A100 accelerators on SXM4 modules. It was the first DGX server not built with an Intel Xeon CPU, moving instead to an AMD EPYC 7742. The DGX A100 uses a much smaller enclosure than its predecessor, the DGX-2, taking up only 6 rack units. Also included are 15 TB of PCIe gen 4 NVMe storage, 1 TB of RAM, and eight Mellanox-powered 200 Gb/s HDR InfiniBand ConnectX-6 NICs. The initial price for the DGX A100 Server was $199,000.

DGX-2

The successor of the Nvidia DGX-1 is the Nvidia DGX-2, announced on March 27, 2018. It uses sixteen Volta-based V100 32 GB (second generation) cards in a single unit, delivering 2 petaflops with 512 GB of shared memory for tackling massive datasets, and uses NVSwitch for high-bandwidth internal communication. The DGX-2 has a total of 512 GB of HBM2 memory and 1.5 TB of DDR4. Also present are eight 100 Gb/s InfiniBand cards and 30.72 TB of SSD storage, all enclosed within a massive 10U rackmount chassis. The initial price for the DGX-2 was $399,000. Additionally, there is a higher-performance version, the DGX-2H, with a notable difference being the replacement of the dual Intel Xeon Platinum 8168 CPUs (2.7 GHz) with dual Intel Xeon Platinum 8174 CPUs (3.1 GHz).
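The DGX-2's headline figures follow directly from its per-card specs. A minimal sanity check, assuming Nvidia's peak tensor-core figure of 125 TFLOPS and 32 GB of HBM2 per V100 SXM3 card (per-GPU numbers are assumptions, not stated in this article):

```python
# Back-of-the-envelope check of the DGX-2 aggregate figures.
# Per-GPU values are assumptions: ~125 peak tensor TFLOPS and
# 32 GB HBM2 per second-generation V100 card.
NUM_GPUS = 16
TFLOPS_PER_V100 = 125
HBM2_PER_V100_GB = 32

total_pflops = NUM_GPUS * TFLOPS_PER_V100 / 1000   # 2.0 PFLOPS
total_hbm2_gb = NUM_GPUS * HBM2_PER_V100_GB        # 512 GB

print(f"{total_pflops} PFLOPS, {total_hbm2_gb} GB HBM2")
# → 2.0 PFLOPS, 512 GB HBM2
```

Both the "2 petaflops" and the "512 GB of HBM2" figures are simply sixteen times the per-card values.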
DGX Station

Designed as a turnkey AI supercomputer, the DGX Station is a tower computer that can function completely independently without typical datacenter infrastructure such as PDUs, redundant power, or 19-inch racks. The DGX Station was first available with four Volta-based Tesla V100 accelerators, each with 16 GB of HBM2 memory. It is water-cooled to better manage the heat of almost 1500 W of total system components, which allows it to stay under 35 dB of noise under load. This, among other features, made the system a compelling purchase for customers without the infrastructure to run rackmount DGX systems, which can be loud, output a lot of heat, and take up a large area. The DGX Station was Nvidia's first venture into bringing high-performance computing to the average developer or researcher, which has since remained a prominent marketing strategy for Nvidia.

Models with SXM 1 to 3 / Pascal + Volta DGX-1

Announced on April 6, 2016, DGX-1 servers feature eight GPUs based on Pascal or Volta daughter cards with 128 GB of total HBM2 memory, connected by an NVLink mesh network. All models are based on a dual-socket configuration of Intel Xeon E5 CPUs and have 3200 W of combined power supply capability. The product line is intended to bridge the gap between GPUs and AI accelerators, in that the device has specific features specializing it for deep learning workloads. The initial Pascal-based DGX-1 delivered 170 teraflops of half-precision processing, while the Volta-based upgrade increased this to 960 teraflops. Because the DGX-1 had SXM2 sockets and the design was form-factor compatible with the previous SXM (SXM1), the system was first offered with Pascal (SXM1) computation modules and later, once they became available, with Volta (SXM2) modules; there were also offers for upgrading these add-on boards. The Pascal-based DGX-1 has two variants, one with an Intel Xeon E5-2698 V3 and one with an E5-2698 V4. The E5-2698 V3 variant was priced at launch at $129,000, while pricing for the E5-2698 V4 variant is unavailable. The Volta-based DGX-1 is equipped with an E5-2698 V4 and was priced at launch at $149,000.
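The 170 and 960 teraflop totals can be reconstructed from the per-GPU figures. A rough sketch, assuming ~21.2 FP16 TFLOPS per P100 (SXM) and the 120 tensor TFLOPS per V100 quoted around the Volta upgrade's launch (both per-GPU values are assumptions, not stated in this article):

```python
# Rough reconstruction of the DGX-1 half-precision totals.
# Per-GPU figures are assumptions: ~21.2 FP16 TFLOPS per P100 (SXM)
# and 120 tensor TFLOPS per V100 as quoted at launch.
NUM_GPUS = 8
P100_FP16_TFLOPS = 21.2
V100_TENSOR_TFLOPS = 120

pascal_total = NUM_GPUS * P100_FP16_TFLOPS   # 169.6, rounds to 170 TFLOPS
volta_total = NUM_GPUS * V100_TENSOR_TFLOPS  # 960 TFLOPS

print(round(pascal_total), volta_total)
# → 170 960
```

The ~5.6x jump between generations comes almost entirely from the V100's tensor cores, since the GPU count and form factor stayed the same across the upgrade.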