OpenAI Boosts AI Power With Early NVIDIA DGX B200 System
In a significant leap for artificial intelligence, OpenAI has obtained one of the first NVIDIA Blackwell DGX B200 systems. The cutting-edge GPUs are poised to accelerate the training and performance of OpenAI's advanced AI models.
B200 cards, which use NVIDIA's Blackwell architecture, are selling like hotcakes.
The B200 GPUs are NVIDIA's fastest data center GPUs to date, and orders have begun rolling in from a number of multinational corporations. According to NVIDIA, OpenAI is among the companies set to use the B200 GPUs, apparently aiming to boost its AI computing capabilities by taking advantage of the chip's groundbreaking performance.
OpenAI Showcases NVIDIA's Blackwell System for AI Innovation
Earlier today, OpenAI's official X handle shared a photo of its staff with an early DGX B200 engineering sample. Now that the platform has arrived at their office, the team is ready to put the B200 to the test and train its formidable AI models.
The DGX B200 is an all-in-one AI platform built around the forthcoming Blackwell B200 GPUs for training, fine-tuning, and inference. With eight B200 GPUs per unit, each DGX B200 delivers up to 1.4 TB of GPU memory and a maximum aggregate HBM3E memory bandwidth of 64 TB/s.
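As a quick sanity check, the aggregate figures above follow from per-GPU numbers. The per-GPU capacity and bandwidth used below are assumptions drawn from NVIDIA's own DGX B200 materials, not stated in this article:

```python
# Back-of-the-envelope check of the DGX B200 aggregate figures quoted above.
# Per-GPU values are assumptions (NVIDIA datasheet-style figures), not from
# this article itself.
GPUS_PER_NODE = 8
MEM_PER_GPU_GB = 180       # assumed usable HBM3E capacity per B200 in a DGX B200
BW_PER_GPU_TBS = 8.0       # assumed HBM3E bandwidth per B200, in TB/s

total_mem_tb = GPUS_PER_NODE * MEM_PER_GPU_GB / 1000   # ~1.44 TB, rounded to 1.4 TB
total_bw_tbs = GPUS_PER_NODE * BW_PER_GPU_TBS          # 64 TB/s aggregate

print(f"Aggregate GPU memory: {total_mem_tb:.2f} TB")
print(f"Aggregate HBM3E bandwidth: {total_bw_tbs:.0f} TB/s")
```

The eight-GPU totals line up with the roughly 1.4 TB of memory and 64 TB/s of bandwidth NVIDIA quotes for the system.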
Blackwell GPUs Power Major Industry Players' AI Ambitions
According to NVIDIA, the DGX B200 delivers remarkable performance for AI models, with up to 72 petaFLOPS of training performance and up to 144 petaFLOPS of inference performance.
Blackwell GPUs have long piqued OpenAI's curiosity, and CEO Sam Altman at one point even hinted at the possibility of using them to train the company's AI models.
Global Tech Giants Jump on the Blackwell Bandwagon
With so many industry heavyweights already opting to use Blackwell GPUs to train their AI models, the firm certainly won't be left out. Amazon, Google, Meta, Microsoft, Tesla, xAI, and Dell Technologies are all part of this pack.
WCCFTECH has previously reported that, in addition to the 100,000 H100 GPUs already in use, xAI intends to deploy 50,000 B200 GPUs. Foxconn has also announced that it will use B200 GPUs to build the fastest supercomputer in Taiwan.
NVIDIA B200 Outshines Previous Generations in Power Efficiency
When compared to NVIDIA Hopper GPUs, Blackwell is both more powerful and more power efficient, making it an ideal choice for OpenAI's AI model training.
According to NVIDIA, the DGX B200 is capable of handling LLMs, chatbots, and recommender systems, and it boasts three times the training performance and fifteen times the inference performance of earlier generations.