Nvidia Brings Ampere A100 GPUs to Google Cloud

More than a month after announcing its latest-generation Ampere A100 GPU, Nvidia announced this week that the powerhouse processor is now available on Google Cloud. The A100-based Accelerator-Optimized (A2) VM family of instances is designed for demanding artificial-intelligence and data-analytics workloads.

Nvidia reports that users can expect significant improvements over previous-generation processors: in this case, up to a 20x increase in performance. The A100 reaches a peak of 19.5 TFLOPS of single-precision (FP32) performance and 156 TFLOPS for the TensorFloat-32 (TF32) operations used in AI and high-performance-computing applications.
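TF32 achieves that speedup by keeping float32's 8-bit exponent (and thus its range) while storing only 10 of the 23 mantissa bits. As a rough illustration of the precision trade-off, here is a pure-Python sketch that emulates TF32 storage; the helper name `tf32_truncate` is hypothetical, and it truncates rather than rounds, unlike the actual hardware:

```python
import struct

def tf32_truncate(x: float) -> float:
    # TF32 keeps float32's 8-bit exponent but only 10 of its 23
    # mantissa bits. Emulate storage by zeroing the low 13 mantissa
    # bits (real hardware rounds; truncation keeps the sketch simple).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_truncate(3.14159265))  # 3.140625: ~3 decimal digits survive
```

The worst-case relative error is about 2^-10 (roughly 0.1%), which is why TF32 works as a drop-in for many deep-learning matrix operations where FP32 range matters more than FP32 precision.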

Nvidia's Ampere is the largest 7-nanometer chip ever made. It packs 54 billion transistors and introduces features such as Multi-Instance GPU, automatic mixed precision, and NVLink, which doubles direct GPU-to-GPU bandwidth and raises memory bandwidth to 1.6 terabytes per second. The accelerator has 6,912 CUDA cores and 40 GB of HBM2 memory.

In describing the Ampere architecture, Nvidia stated that its improvements provide unrivaled acceleration at any scale.

The new cloud service is currently in alpha. It will be offered in five configurations, depending on business needs, ranging from one to 16 GPUs and from 85 to 1,360 GB of RAM.
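For a sense of what provisioning one of these instances might look like, here is a hypothetical `gcloud` sketch for the smallest (single-GPU) configuration. The machine-type name, zone, and image family shown are assumptions, since Google had not published these details for the alpha program:

```shell
# Hypothetical sketch: create a single-A100 A2 VM. The machine-type
# name (a2-highgpu-1g), zone, and CUDA image family are assumptions;
# the alpha program's actual names may differ.
gcloud compute instances create my-a100-vm \
    --zone=us-central1-a \
    --machine-type=a2-highgpu-1g \
    --image-family=common-cu110 \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE
```

With A2 machine types, the A100 accelerators come attached to the machine type itself, so no separate accelerator flag would be needed.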

Google said businesses can easily connect to Ampere A100 GPUs, though prices have not yet been announced. Google says the service will be generally available to the public later this year. The rapid arrival of the new GPUs in the cloud testifies to the growing needs of AI innovators.

Nvidia says the A100 came to Google Cloud faster than any GPU in the company's history.
