Report: The current state of AI in 2023 from the perspectives of research, industry, policy, and more
Original title: "2023 State of Artificial Intelligence Report"
Author: BONI, of 36Kr's translation team, the Divine Translation Bureau
Editor's brief: In our increasingly digital, data-driven world, AI is a force multiplier of technological progress, which makes it important to understand where its development currently stands. This "2023 State of Artificial Intelligence Report" summarizes the state of AI across research, industry, politics, safety, and more, and makes predictions for AI's development over the next 12 months, in the hope of helping readers keep track of where the field is heading. This article is a compiled translation.
Research
2023 was unquestionably the year of large language models (LLMs). OpenAI's GPT-4 stunned the world, beating every other LLM on both classic AI benchmarks and exams designed for humans.
Driven by concerns over safety and competition, we have found that AI research has become less open. OpenAI released only a technical report with very limited information about GPT-4, Google did not disclose much about PaLM 2, and Anthropic disclosed no technical details about either Claude or Claude 2.
However, Meta AI and other companies have stepped up to keep the flame of open source burning, developing and releasing open-source LLMs that rival many of GPT-3.5's capabilities.
Looking at Hugging Face's leaderboards, open source is more active than ever, with downloads and model submissions soaring to all-time highs. Notably, the LLaMa model has been downloaded more than 32 million times on Hugging Face in the last 30 days.
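As a concrete illustration of what those download numbers represent, here is a minimal sketch, assuming the Hugging Face `transformers` library and an illustrative (gated) Llama 2 checkpoint, of how researchers typically pull and run an open-source LLM from the Hub; it is not taken from the report itself.

```python
# Minimal sketch (not from the report): pulling and running an open-source LLM
# from the Hugging Face Hub with the `transformers` library.
# The model ID is illustrative; Llama 2 weights are gated behind Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Open-source language models in 2023"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```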
While we have many different (mostly academic) benchmarks for evaluating the performance of large language models, the measure that seems to carry the most weight alongside all of this science and engineering is simply "vibes".
Beyond the excitement around LLMs, researchers, including teams at Microsoft, have been exploring the potential of small language models, finding that models trained on highly specialized, curated datasets can rival competitors 50 times their size.
If the team at Epoch AI is right, this work could become even more urgent: they predict that we risk exhausting the stock of high-quality language data within the next two years, pushing labs to explore alternative sources of training data.
Zooming out, the U.S. still leads the way, even though its lead has narrowed in recent years, and the vast majority of highly cited papers still come from a small number of U.S. institutions.
Industry
All of this work means it is a great time to be in the hardware business, especially if you are NVIDIA. GPU demand has propelled the company into the trillion-dollar club, with its chips used 19 times more often in AI research than all alternative chips combined.
While NVIDIA continues to introduce new chips, its older GPUs are showing exceptional lifetime value. The V100, released in 2017, was the most popular GPU in AI research papers in 2022. It may not be retired for another five years, which would mean a service life of roughly a decade.
We have seen demand for the NVIDIA H100 grow rapidly, with labs rushing to build large computing clusters, and more are likely on the way. However, we hear that these build-outs have not come without major engineering challenges.
The "chip wars" have also forced the industry to adjust, with NVIDIA, Intel and AMD all building special, sanctions-compliant chips for their large Chinese customer base.
Perhaps the least surprising news: ChatGPT is one of the fastest-growing internet products of all time. It is especially popular among developers and has displaced Stack Overflow as the place developers go for solutions to coding problems.
But according to Sequoia Capital, there are now reasons to doubt the staying power of generative AI products — with inconsistent retention rates across everything from image generation to AI companions.
Beyond consumer software, there are signs that generative AI can accelerate progress in physical-world AI. Wayve's GAIA-1 has demonstrated impressive versatility and can serve as a powerful tool for training and validating autonomous-driving models.
Outside generative AI, we are also seeing significant moves from industries that have long struggled to find suitable applications for AI. Many traditional pharma companies have placed big bets on artificial intelligence, striking multibillion-dollar deals with companies like Exscientia and InstaDeep.
As militaries rush to modernize their forces to deal with asymmetric warfare, the AI-first defense market is booming. However, friction between new-technology entrants and established players makes it difficult for newcomers to gain a foothold.
Beyond these successes, the VC industry's attention is concentrated on generative AI, a sector that is propping up the private tech markets like Atlas holding up the sky. Were it not for the generative AI boom, AI investment would be down 40% from last year.
The authors of the paper that first introduced the Transformer neural network architecture are a case in point: in 2023 alone, this "Transformer gang" has secured billions of dollars in funding.
The same goes for the team behind Baidu's DeepSpeech 2, built at the company's Silicon Valley AI lab. Their work on deep learning for speech recognition gave an early demonstration of the scaling laws that now underpin large-scale AI, and most members of the team went on to become founders or senior executives at leading machine-learning companies.
Many of the most high-profile blockbuster funding rounds were not led by traditional VC firms at all. 2023 has been the year of corporate venture capital, with big tech companies putting their war chests to effective use.
Politics
Not surprisingly, billions of dollars of investment, coupled with huge leaps in capability, have put AI at the top of the agenda for policymakers. Approaches to regulation around the world span a spectrum from light-touch to tightly controlled.
There have been a number of proposals for the global governance of AI, mainly from a range of international organizations. The UK AI Safety Summit, organized by Matt Clifford and others, may help to make some of these ideas concrete.
As we continue to see the power of AI on the battlefield, these debates are likely to become even more pressing. The conflict in Ukraine has become a laboratory for AI warfare, demonstrating how even relatively improvised patchwork systems can have devastating effects when cleverly integrated.
Another potential flashpoint is next year's U.S. presidential election. So far, deepfakes and other AI-generated content have played a relatively limited role compared to the kind of disinformation of the past. But a low-cost, high-quality model could change that, prompting preemptive action.
Previous State of AI reports warned that large labs may be paying too little attention to AI safety. In 2023, the debate over whether AI poses an existential risk to humanity intensified, the dispute among researchers over open versus closed source deepened, and extinction risk made mainstream headlines.
Needless to say, not everyone agrees; Yann LeCun and Marc Andreessen are among the most prominent skeptics.
It is no surprise that policymakers are now alarmed by the potential risks of AI, even if they have struggled to understand them. The United Kingdom has spearheaded the creation of a dedicated Frontier AI Taskforce, led by Ian Hogarth, and the United States has launched congressional inquiries.
While the theoretical disputes continue, labs have already begun to act: Google DeepMind and Anthropic were among the first to spell out in more detail their approaches to mitigating the extreme risks that development and deployment can pose.
Even without touching on the distant future, tricky questions are already being asked about techniques such as reinforcement learning from human feedback (RLHF), which underpins products like ChatGPT.
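For readers unfamiliar with the technique, the sketch below, a generic illustration rather than OpenAI's actual implementation, shows the pairwise preference loss at the heart of RLHF's reward-modelling step: the reward model is trained to score the human-preferred response above the rejected one, and the language model is then fine-tuned, typically with PPO, to maximize that learned reward.

```python
# Generic sketch of RLHF's reward-modelling objective (illustrative, not OpenAI's code).
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry) loss: drive the reward of the human-preferred
    # response above the reward of the rejected response.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores a reward model might assign to three prompt/response pairs.
reward_chosen = torch.tensor([1.2, 0.3, 2.0])
reward_rejected = torch.tensor([0.4, 0.5, 1.1])
print(preference_loss(reward_chosen, reward_rejected).item())
# In full RLHF, the policy (the chat model) is then optimized, e.g. with PPO,
# to maximize this learned reward, usually with a KL penalty keeping it close
# to the supervised fine-tuned model.
```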
Predictions
As always, in the spirit of transparency, we revisit last year's predictions: we scored 5/9.
✅ Right: LLM training, generative AI for audio, tech giants going all-in on AGI research, investment in alignment, and training data.
❌ Wrong: multimodal research, regulation of biosafety labs, and semi-finished start-ups.
This year's predictions cover:
Generative AI and filmmaking
Artificial intelligence and elections
Self-improving agents
The return of IPOs
Models costing more than $1 billion to train
Competition investigations
Global governance
Banks + GPUs
AI-generated music