OpenAI Boosts AI Power With Early NVIDIA DGX B200 System
In a significant leap for artificial intelligence, OpenAI has obtained one of the first NVIDIA Blackwell DGX B200 systems. The cutting-edge GPUs are poised to accelerate the training and performance of OpenAI's advanced AI models.
B200 cards, which use NVIDIA's Blackwell architecture, are selling like hotcakes.
The B200 GPUs are NVIDIA's fastest data center GPUs to date, and orders have begun rolling in from a number of multinational corporations. According to NVIDIA, OpenAI was already slated to adopt the B200, and the company now appears intent on boosting its AI computing capabilities by taking advantage of the chip's groundbreaking performance.
OpenAI Showcases NVIDIA's Blackwell System for AI Innovation
Earlier today, OpenAI's official X handle shared a photo of its staff with an early DGX B200 engineering sample. Now that the platform has arrived at its offices, the team is ready to put the B200 to the test and use it to train its formidable AI models.
The DGX B200 is an all-in-one AI platform built around the Blackwell B200 GPUs for training, fine-tuning, and inference. With eight B200 GPUs per system, each DGX B200 offers up to 1.4 TB of GPU memory and up to 64 TB/s of HBM3E memory bandwidth.
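As a quick sanity check on those figures, here is a minimal back-of-the-envelope sketch (not OpenAI's or NVIDIA's code) that rebuilds the system-level memory numbers from assumed per-GPU values of roughly 180 GB of HBM3E and 8 TB/s of bandwidth, which simply follow from dividing the quoted aggregates across the eight GPUs.

```python
# Back-of-the-envelope check of the DGX B200 memory figures quoted above.
# Per-GPU values are assumptions obtained by dividing the aggregates by eight GPUs.

GPUS_PER_SYSTEM = 8
HBM3E_CAPACITY_PER_GPU_GB = 180     # assumed: ~1.4 TB total / 8 GPUs
HBM3E_BANDWIDTH_PER_GPU_TBS = 8.0   # assumed: 64 TB/s total / 8 GPUs

total_capacity_tb = GPUS_PER_SYSTEM * HBM3E_CAPACITY_PER_GPU_GB / 1000
total_bandwidth_tbs = GPUS_PER_SYSTEM * HBM3E_BANDWIDTH_PER_GPU_TBS

print(f"Aggregate GPU memory: ~{total_capacity_tb:.2f} TB")           # ~1.44 TB
print(f"Aggregate HBM3E bandwidth: ~{total_bandwidth_tbs:.0f} TB/s")  # 64 TB/s
```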
Blackwell GPUs Power Major Industry Players' AI Ambitions
The DGX B200, according to NVIDIA, delivers remarkable performance for AI models, with up to 72 petaFLOPS of training performance and up to 144 petaFLOPS of inference performance.
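To put those system-level numbers in per-GPU terms, the sketch below (an illustration based on the quoted aggregates, not an official NVIDIA per-GPU specification) simply splits the DGX B200 figures across its eight GPUs.

```python
# Rough per-GPU breakdown of the DGX B200 performance figures quoted above.
# These are simple divisions of the aggregate numbers, not official per-GPU specs.

GPUS_PER_SYSTEM = 8
TRAINING_PFLOPS_PER_SYSTEM = 72.0    # quoted aggregate training performance
INFERENCE_PFLOPS_PER_SYSTEM = 144.0  # quoted aggregate inference performance

per_gpu_training = TRAINING_PFLOPS_PER_SYSTEM / GPUS_PER_SYSTEM     # 9 petaFLOPS
per_gpu_inference = INFERENCE_PFLOPS_PER_SYSTEM / GPUS_PER_SYSTEM   # 18 petaFLOPS

print(f"Per-GPU training: ~{per_gpu_training:g} petaFLOPS")
print(f"Per-GPU inference: ~{per_gpu_inference:g} petaFLOPS")
```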
Blackwell GPUs have long piqued OpenAI's interest, and CEO Sam Altman has even hinted at the possibility of using them to train the company's AI models.
Global Tech Giants Jump on the Blackwell Bandwagon
With so many industry heavyweights already opting to use Blackwell GPUs to train their AI models, the firm certainly won't be left out. Amazon, Google, Meta, Microsoft, Tesla, xAI, and Dell Technologies are all part of this pack.
WCCFTECH has previously reported that xAI intends to deploy 50,000 B200 GPUs in addition to the 100,000 H100 GPUs it already has in use. Foxconn has also stated that it will use B200 GPUs to build the fastest supercomputer in Taiwan.
NVIDIA B200 Outshines Previous Generations in Power Efficiency
When compared to NVIDIA Hopper GPUs, Blackwell is both more powerful and more power efficient, making it an ideal choice for OpenAI's AI model training.
According to NVIDIA, the DGX B200 can handle LLMs, chatbots, and recommender systems, and it delivers three times the training performance and fifteen times the inference performance of earlier generations.