
HBM2 Memory A30 24GB Nvidia Ampere Data Center GPU For Scientific Computing

Basic Information
Place of Origin: China
Brand Name: NVIDIA
Model Number: NVIDIA A30
Minimum Order Quantity: 1pcs
Price: To be discussed
Packaging Details: 4.4” H x 7.9” L, dual-slot
Delivery Time: 15-30 working days
Payment Terms: L/C, D/A, D/P, T/T
Supply Ability: 20pcs
Detail Information
NAME: HBM2 Memory A30 24GB Nvidia Ampere Data Center GPU For Scientific Computing
Keyword: HBM2 Memory A30 24GB Nvidia Ampere Data Center GPU For Scientific Computing
Model: NVIDIA A30
FP64: 5.2 teraFLOPS
FP64 Tensor Core: 10.3 teraFLOPS
FP32: 10.3 teraFLOPS
TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core: 330 TOPS | 661 TOPS*
INT4 Tensor Core: 661 TOPS | 1321 TOPS*
GPU Memory: 24GB HBM2
GPU Memory Bandwidth: 933GB/s
High Light: HBM2 nvidia ampere data center, 24GB nvidia ampere data center, nvidia gpu for scientific computing

Product Description

 

HBM2 Memory A30 24GB Nvidia Ampere Data Center GPU For Scientific Computing

NVIDIA Ampere A30 Data Center GPU

Versatile compute acceleration for mainstream enterprise servers.

AI Inference and Mainstream Compute for Every Enterprise

 

Bring accelerated performance to every enterprise workload with NVIDIA A30 Tensor Core GPUs. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), it delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low-power consumption in a PCIe form factor—optimal for mainstream servers—A30 enables an elastic data center and delivers maximum value for enterprises.
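As a rough illustration of how the Ampere Tensor Core precisions are exercised in practice, the short Python sketch below enables TF32 math for a matrix multiply in PyTorch. This is a generic example, not vendor-supplied code: it assumes a CUDA build of PyTorch and an Ampere-class GPU such as the A30, and the matrix sizes are arbitrary placeholders.

# Minimal sketch: routing FP32 matrix math through Ampere TF32 Tensor Cores in PyTorch.
# Assumes a CUDA-enabled PyTorch install and an Ampere-class GPU such as the A30.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # let matmuls use TF32 Tensor Cores
torch.backends.cudnn.allow_tf32 = True         # same for cuDNN convolutions

device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device)     # arbitrary example sizes
b = torch.randn(4096, 4096, device=device)

c = a @ b                                      # executed on the GPU (TF32 when allowed)
torch.cuda.synchronize()                       # wait for the kernel to finish
print(c.shape, c.dtype)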

 


 

NVIDIA A30 Data Center GPU Technical Specifications

 

GPU Architecture: NVIDIA Ampere
FP64: 5.2 teraFLOPS
FP64 Tensor Core: 10.3 teraFLOPS
FP32: 10.3 teraFLOPS
TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core: 330 TOPS | 661 TOPS*
INT4 Tensor Core: 661 TOPS | 1321 TOPS*
Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
GPU Memory: 24GB HBM2
GPU Memory Bandwidth: 933 GB/s
Interconnect: PCIe Gen4: 64GB/s
Max thermal design power (TDP): 165W
Form Factor: Dual-slot, full-height, full-length (FHFL)
Multi-Instance GPU (MIG): 4 GPU instances @ 6GB each, 2 GPU instances @ 12GB each, 1 GPU instance @ 24GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise for VMware, NVIDIA Virtual Compute Server

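To confirm some of the figures above on an installed board, the device can be queried through the NVIDIA Management Library's Python bindings. The sketch below is a generic example assuming the nvidia-ml-py (pynvml) package and an NVIDIA driver are present; it reports the name, total memory, and power limit of the first GPU in the system.

# Minimal sketch: reading back a few of the specifications above at runtime.
# Assumes the nvidia-ml-py (pynvml) package and an NVIDIA driver are installed.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)                    # first GPU in the system

name = pynvml.nvmlDeviceGetName(handle)                          # an A30 reports its product name
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)                     # total/used/free, in bytes
power_limit = pynvml.nvmlDeviceGetPowerManagementLimit(handle)   # milliwatts

print(name)
print(f"Total memory: {mem.total / 1024**3:.1f} GiB")            # roughly 24 GiB on an A30
print(f"Power limit:  {power_limit / 1000:.0f} W")               # 165 W TDP on an A30
pynvml.nvmlShutdown()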
High-Performance Computing

 

Data scientists need to be able to analyze, visualize, and turn massive datasets into insights. But scale-out solutions are often bogged down by datasets scattered across multiple servers.

 

Accelerated servers with A30 provide the needed compute power—along with large HBM2 memory, 933GB/sec of memory bandwidth, and scalability with NVLink—to tackle these workloads. Combined with NVIDIA InfiniBand, NVIDIA Magnum IO and the RAPIDS™ suite of open-source libraries, including the RAPIDS Accelerator for Apache Spark, the NVIDIA data center platform accelerates these huge workloads at unprecedented levels of performance and efficiency.
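The RAPIDS libraries mentioned above expose a pandas-like API that runs on the GPU. As a rough illustration (assuming RAPIDS cuDF is installed; the file name and column names are hypothetical placeholders, not part of this product description), a simple GPU dataframe aggregation looks like this:

# Rough illustration of a GPU-accelerated dataframe workload with RAPIDS cuDF.
# Assumes cuDF is installed; "sales.csv" and its columns are hypothetical placeholders.
import cudf

df = cudf.read_csv("sales.csv")                   # load data directly into GPU memory
summary = (
    df.groupby("region")                          # group on a hypothetical column
      .agg({"revenue": "sum", "units": "mean"})   # aggregations execute on the GPU
)
print(summary.head())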

 


 

Contact Details
Sales

Phone Number: +8613269312134

WhatsApp: +8618701294598