Nvidia’s $2,999 Titan V is a 110 teraflop GPU for AI processing
Nvidia cards are the de facto standard for running machine learning workloads, and today the company added yet another high-end, compute-centric card to its line-up: the Titan V. The card, which is based on Nvidia’s Volta architecture, features 21.1 billion transistors on an 815 mm² chip and can deliver a full 110 teraflops of deep learning compute.
All of that power comes at a price: the card, which features 12GB of HBM2 memory, will retail for $2,999. In return, users get a 9x increase in raw power over the Titan Xp, the card’s predecessor, which retailed for “only” $1,299.
It’s worth noting that this isn’t actually Nvidia’s most powerful card — that honor still belongs to the 112 teraflop Tesla V100, which tends to retail for around $10,000. It is Nvidia’s most powerful GPU for PCs, though. The Titan V and the Tesla V100 have quite a bit in common, including the same GV100 GPU at their core, though the Titan V features both less memory and a narrower memory bus.
The launch of the Titan V came as a bit of a surprise. Nvidia CEO Jen-Hsun Huang wasn’t exactly expected to announce new hardware during his keynote at the NIPS conference.
“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”
While you could happily play PUBG with this card (it is, after all, a standard PCI-E video card), it’s very much geared toward researchers who need this kind of parallel processing power for running scientific simulations or building machine learning models (or you could just use it to mine cryptocurrencies, of course). The Titan V features 5,120 CUDA cores and an additional 640 tensor cores that have been optimized to speed up machine learning workloads.
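Those core counts map fairly directly onto the card’s theoretical throughput. As a rough, back-of-the-envelope sketch — assuming Nvidia’s published ~1,455 MHz boost clock, which is not stated in this article — the math works out like this:

```python
# Back-of-the-envelope peak-throughput estimate for the Titan V.
# Core counts come from the article; the ~1.455 GHz boost clock is an
# assumption based on Nvidia's published spec, not a figure from the text.

BOOST_CLOCK_HZ = 1.455e9  # assumed boost clock

# Each CUDA core retires one fused multiply-add (2 FLOPs) per clock.
cuda_cores = 5120
fp32_tflops = cuda_cores * 2 * BOOST_CLOCK_HZ / 1e12

# Each Volta tensor core performs a 4x4x4 matrix FMA per clock:
# 64 multiply-adds = 128 FLOPs.
tensor_cores = 640
tensor_tflops = tensor_cores * 128 * BOOST_CLOCK_HZ / 1e12

print(f"FP32:   ~{fp32_tflops:.1f} TFLOPS")
print(f"Tensor: ~{tensor_tflops:.1f} TFLOPS")
```

The tensor-core estimate lands a little above the 110 teraflops Nvidia quotes, which suggests the official rating assumes a somewhat lower sustained clock; the FP32 figure comes out near 15 teraflops either way.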