Qualcomm announced Monday that it will release new artificial intelligence accelerator chips, marking its entry into the rapidly growing AI data center market and putting it in direct competition with Nvidia and AMD. The company's AI200, set for 2026, and AI250, planned for 2027, can be deployed in full, liquid-cooled server racks, allowing as many as 72 chips to function as a single computer and matching the rack-scale systems offered by Nvidia and AMD.

Qualcomm's AI chips are built on the Hexagon neural processing units (NPUs) used in its smartphones, but are optimized for AI inference rather than training, targeting customers such as cloud service providers. The announcement sent Qualcomm's stock soaring 11%.

With AI-focused data centers expected to drive nearly $6.7 trillion in capital expenditures by 2030, the move positions Qualcomm as a serious contender in a market long dominated by Nvidia, whose GPUs power major AI models, including OpenAI's GPTs. Qualcomm emphasized that its chips offer advantages in power consumption, cost of ownership, and memory handling, supporting up to 768 GB of memory per card, surpassing current offerings from Nvidia and AMD.

The company also plans to sell chips and components separately, allowing clients to design custom racks or mix and match with other vendors. Strategic partnerships, such as with Saudi Arabia's Humain data centers, aim to deploy systems capable of using up to 200 megawatts of power.