NVIDIA's hundred-billion-dollar investment rejected? OpenAI throws a fit, global AI chip reshuffle imminent!
According to the latest report from Reuters, OpenAI is actively seeking alternatives to NVIDIA GPUs to overcome performance bottlenecks in AI inference. The report says OpenAI has grown increasingly dissatisfied with the response latency and memory-bandwidth limits of NVIDIA chips in high-concurrency inference workloads, and has engaged multiple chip makers, including AMD, Cerebras, and Groq; it is even weighing developing its own inference chips.
This move reflects a shift in the AI industry's focus from "model training" to "efficient inference." Competition in computing infrastructure is likewise moving from raw single-chip performance to system-level optimization, spanning on-chip memory, dedicated architectures, and hardware-software co-design. This trend could not only reshape the global AI chip landscape but also underscores how urgently large-model companies want independent control over their underlying computing power.
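To see why memory bandwidth, rather than raw compute, bottlenecks high-concurrency inference, consider that autoregressive decoding must stream essentially all model weights from memory for every generated token. A rough upper bound on per-stream decode throughput is therefore memory bandwidth divided by model size in bytes. The sketch below illustrates this back-of-the-envelope estimate; the model size, precision, and bandwidth figures are hypothetical examples, not numbers from the article.

```python
# Illustrative roofline-style estimate: decode throughput for a
# memory-bandwidth-bound LLM is roughly bandwidth / model bytes,
# because each generated token requires reading all weights once.

def decode_tokens_per_second(model_params_billion: float,
                             bytes_per_param: float,
                             memory_bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode throughput (tokens/s)."""
    model_bytes_gb = model_params_billion * bytes_per_param
    return memory_bandwidth_gb_s / model_bytes_gb

# Hypothetical numbers: a 70B-parameter model in FP16 (2 bytes/param)
# on an accelerator with ~3.3 TB/s of HBM bandwidth.
print(round(decode_tokens_per_second(70, 2, 3300), 1))  # ~23.6 tokens/s
```

Because this bound scales with bandwidth rather than FLOPS, vendors such as Groq and Cerebras pitch large on-chip SRAM and dedicated dataflow architectures precisely to escape the off-chip memory wall.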
Following the report, NVIDIA's stock fell 2.9% on Monday, bucking the broader market. Earlier reports said NVIDIA planned to invest $100 billion in OpenAI, cementing market share through an equity tie-up, but negotiations have stalled. In an interview with Taipei media at the end of January 2026, Jensen Huang stated flatly: "That has never been a formal commitment." He stressed that the original plan was based on long-term infrastructure development intentions, not a cash check. According to the report, the investment now under discussion between the two sides has shrunk to the tens of billions of dollars (estimated at $20-30 billion), as part of OpenAI's current round of financing.