Over the past year, artificial intelligence has been consistently portrayed as a productivity tool, one that writes emails, generates code, and summarizes documents, as if it were about to transform the way we work entirely. However, a large-scale data study from OpenRouter shows that how people actually use AI differs significantly from this mainstream perception.
OpenRouter Publishes a Report on Real-World Global AI Usage
OpenRouter is a multi-model AI inference platform that routes requests to more than 300 models from over 60 providers, including OpenAI and Anthropic as well as open-source models such as DeepSeek and Meta's Llama. The study analyzed anonymized metadata covering more than 100 trillion tokens and billions of model interactions, without accessing any actual conversation content, sketching a picture of real-world AI usage around the globe while protecting user privacy.
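To make the platform's role concrete, here is a minimal sketch of how a single client can reach both a proprietary and an open-source model through OpenRouter's OpenAI-compatible chat completions endpoint. It assumes an OPENROUTER_API_KEY environment variable, and the model slugs and prompts are illustrative rather than anything the report measured.

```python
# Minimal sketch: one client, many models, via OpenRouter's
# OpenAI-compatible chat completions endpoint.
import os
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # assumed to be set by the caller

def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the given model and return its reply."""
    resp = requests.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The same client code can target a proprietary or an open-source model,
# which is what lets a router like this observe usage patterns across both.
print(ask("openai/gpt-4o", "Summarize this release note in one sentence: ..."))
print(ask("deepseek/deepseek-chat", "Write the opening scene of a detective story."))
```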
What is the main use of open-source large language models (LLMs)?
The research shows that by the end of 2025, open-source models accounted for roughly one-third of overall usage, with noticeable jumps coinciding with major releases. What is truly surprising is what these open-source models are used for: more than half of interactions with them are not about work efficiency or commercial applications, but about role-playing, interactive fiction, and creative storytelling.
Among open-source models, role-playing even surpasses programming assistance as the single largest usage scenario. The researchers point out that many users treat AI as a medium for companionship, exploration, and creation rather than merely a productivity tool. The report argues this challenges the assumption that LLMs are mainly used for coding, emails, or summaries.
AI-assisted coding is the fastest-growing application category
By contrast, programming is the fastest-growing application across all models. In early 2025, programming-related requests made up only about 10% of total usage; by the end of the year they exceeded 50%. Programming prompts have also grown markedly longer, suggesting that developers are pulling AI deeper into debugging, architecture analysis, and system-level troubleshooting. Anthropic's Claude series once dominated this segment, but competition from OpenAI and Google is intensifying rapidly.
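The shift toward longer programming prompts is easy to picture: rather than a one-line question, a request bundles a stack trace together with the relevant source file. The sketch below is a hypothetical illustration of such a context-heavy request; the file path, helper name, and error text are invented, and only the message format reflects the standard OpenAI-style structure that OpenRouter accepts.

```python
# Hypothetical sketch of a context-heavy debugging request of the kind the
# report associates with programming usage. The path and helper name are
# invented; the message shape is the standard OpenAI-style chat structure.
import pathlib

def build_debug_messages(source_path: str, stack_trace: str) -> list[dict]:
    """Assemble a multi-part debugging request: instructions, trace, and code."""
    source_code = pathlib.Path(source_path).read_text()
    prompt = (
        "You are helping debug a production service.\n\n"
        f"Stack trace:\n{stack_trace}\n\n"
        f"Relevant source file ({source_path}):\n{source_code}\n\n"
        "Explain the likely root cause and propose a minimal fix."
    )
    return [{"role": "user", "content": prompt}]

# Example (with an invented path); a request like this easily runs to
# thousands of tokens, consistent with the report's observation that
# programming prompts are getting much longer:
# messages = build_debug_messages("services/payments/worker.py",
#                                 "KeyError: 'invoice_id' ...")
```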
Simplified Chinese becomes the second most common AI interaction language globally
The study also reveals a significant shift in the global landscape. The usage share of Chinese models jumped from 13% at the start of the year to about 30%, with DeepSeek, Alibaba's Qwen, and Moonshot AI rising rapidly, and Simplified Chinese has become the second most common language for interacting with AI worldwide. Overall AI spending in Asia has doubled, and Singapore is now second only to the United States as a source of usage.
AI reasoning is rapidly emerging
Another key trend is the rise of “AI reasoning.” Models are no longer just answering single questions; they perform multi-step reasoning over long conversations, call external tools, and keep executing tasks. Such interactions have grown from nearly nonexistent to more than half of all usage within a year, signaling that AI is shifting from a text-generation tool to an agent system with planning and execution capabilities.
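What such an agent-style interaction looks like at the API level can be sketched with the OpenAI-compatible tool-calling format that OpenRouter forwards to models that support it. The lookup_order tool, its schema, and the model slug below are invented for illustration; only the request and response shapes follow the real protocol.

```python
# Minimal sketch of an agent-style loop using the OpenAI-compatible
# tool-calling format. The lookup_order tool and the model slug are
# invented; the request/response shapes reflect the real protocol.
import json
import os
import requests

URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "lookup_order",  # hypothetical tool
        "description": "Look up an order's shipping status by order id.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

def lookup_order(order_id: str) -> str:
    # Stubbed tool result; a real agent would hit an internal service here.
    return json.dumps({"order_id": order_id, "status": "shipped"})

messages = [{"role": "user", "content": "Where is order A-1042?"}]
for _ in range(5):  # allow a few reasoning/tool steps
    resp = requests.post(URL, headers=HEADERS, timeout=60,
                         json={"model": "openai/gpt-4o",
                               "messages": messages, "tools": TOOLS})
    resp.raise_for_status()
    msg = resp.json()["choices"][0]["message"]
    messages.append(msg)
    if not msg.get("tool_calls"):      # model produced a final answer
        print(msg["content"])
        break
    for call in msg["tool_calls"]:     # model asked to run a tool
        args = json.loads(call["function"]["arguments"])
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": lookup_order(**args),
        })
```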
Cinderella Glass Slipper Effect: solving problems first to build stickiness
The research also observes a phenomenon it calls the “Cinderella glass slipper effect”: the first model to solve a particular key need well builds a highly loyal user base that later competitors struggle to win over. Once users embed a model deeply into their workflows, switching costs rise dramatically. For example, among users who adopted Google Gemini 2.5 Pro at its June 2025 launch, retention five months later was about 40%, far higher than for cohorts that arrived later. This challenges conventional thinking about AI competition: raw first-mover advantage matters less than being the first to solve a high-value problem, which is what creates a lasting competitive edge.
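The report does not spell out how that retention figure is computed, but a simple cohort calculation illustrates one plausible approach: take the users first seen in the launch month and measure what share are still active a fixed number of months later. The data, field names, and numbers below are invented.

```python
# Hypothetical sketch of a cohort-retention calculation of the kind behind a
# "40% retained five months after launch" figure. Data is invented; the
# report's exact methodology is not public.
from datetime import date

# (user_id, month of activity) pairs extracted from anonymized usage metadata
activity = [
    ("u1", date(2025, 6, 1)), ("u1", date(2025, 11, 1)),
    ("u2", date(2025, 6, 1)),
    ("u3", date(2025, 6, 1)), ("u3", date(2025, 11, 1)),
    ("u4", date(2025, 8, 1)), ("u4", date(2025, 11, 1)),
]

def retention(activity, cohort_month, offset_months):
    """Share of users first seen in cohort_month still active offset_months later."""
    first_seen = {}
    for user, month in activity:
        first_seen[user] = min(month, first_seen.get(user, month))
    cohort = {u for u, m in first_seen.items() if m == cohort_month}
    target = date(cohort_month.year + (cohort_month.month - 1 + offset_months) // 12,
                  (cohort_month.month - 1 + offset_months) % 12 + 1, 1)
    retained = {u for u, m in activity if u in cohort and m == target}
    return len(retained) / len(cohort) if cohort else 0.0

print(retention(activity, date(2025, 6, 1), 5))  # ~0.67 for this toy data
```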
On pricing, the data shows that AI usage is surprisingly insensitive to cost changes. High-priced and low-priced models coexist, and the market has not collapsed into pure price competition: quality, stability, and feature completeness still let models command a premium.
Overall, this research paints a more complex and truthful picture of AI. AI is not only reshaping professional work but also changing forms of creation, entertainment, and companionship; the market is diversifying rapidly, and technology is evolving swiftly. User behaviors are more honest than marketing rhetoric—understanding these real usage patterns will be key to the next stage of AI development.
This article on the real usage of artificial intelligence and how it differs from our imagination was first published on Chain News ABMedia.