NVIDIA Tesla Architecture
Massively parallel computing architecture with 128 multithreaded processors per GPU
Scalar thread processors with full integer and floating-point operations
Thread Execution Manager enables thousands of concurrent threads per GPU
Parallel Data Cache enables processors to collaborate on shared information at local cache performance
Ultra-fast memory access with 76.8 GB/sec peak bandwidth per GPU
IEEE 754 single-precision floating point
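In CUDA programs, the Parallel Data Cache is exposed as on-chip `__shared__` memory that all threads in a block can read and write at local-cache speed. The following is a minimal sketch of that cooperation pattern; the kernel and variable names are illustrative, not taken from the datasheet, and the code assumes a block size of 256 threads.

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: each thread block cooperates through on-chip
// __shared__ memory (the Parallel Data Cache) to sum one 256-element
// tile of the input, avoiding repeated round trips to device DRAM.
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float partial[256];          // one slot per thread in the block

    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    // Stage one element per thread into the shared cache.
    partial[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }

    // Thread 0 writes the block's partial sum back to device memory.
    if (tid == 0)
        out[blockIdx.x] = partial[0];
}
```

A host would launch this as `blockSum<<<numBlocks, 256>>>(d_in, d_out, n)`, with the Thread Execution Manager scheduling the resulting thousands of threads across the GPU's 128 thread processors.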
Supporting Platforms
Tesla certified system*
Microsoft Windows XP (32-bit)
Linux (64-bit and 32-bit)
Red Hat Enterprise Linux 3, 4 and 5
SUSE 10.1, 10.2 and 10.3
For deskside supercomputer and GPU computing server
Tesla C870 GPU Computing Processor
One Tesla GPU (128 thread processors)
Over 500 gigaflops
1.5 GB dedicated memory
Full-length, dual-slot card; installs in one open PCI Express x16 slot
Tesla D870 Deskside Supercomputer
Two Tesla GPUs (128 thread processors per GPU)
Over 500 gigaflops per GPU
3 GB system memory (1.5 GB dedicated memory per GPU)
Quiet operation (40 dB), suitable for office environments
Connects to host via cabling to a low-power PCI Express x8 or x16 adapter card
Optional rack mount kit
Tesla S870 GPU Computing Server
Four Tesla GPUs (128 thread processors per GPU)
Over 500 gigaflops per GPU
6 GB system memory (1.5 GB dedicated memory per GPU)
Standard 19”, 1U rack-mount chassis
Connects to host via cabling to a low-power PCI Express x8 or x16 adapter card
Standard configuration: 2 PCI Express connectors driving 2 GPUs each (4 GPUs total)
Optional configuration: 1 PCI Express connector driving 4 GPUs
Product Details
To learn more about NVIDIA Tesla solutions, go to www.nvidia.com/tesla
© 2007