Zhejiang University research team proposes a new approach: teaching AI how the human brain understands the world
Large models keep getting bigger, and the mainstream view holds that the more parameters a model has, the closer it will come to human-like ways of thinking. A paper published on April 1 in Nature Communications by a team from Zhejiang University argues otherwise: in the team's experiments, after the parameter count grew from 22.06 million to 304.37 million, performance on concrete concept tasks rose from 74.94% to 85.87%, while performance on abstract concept tasks fell from 54.37% to 52.82%.
Differences in how humans and models think
When the human brain processes concepts, it first forms a set of classification relationships. A swan and an owl look different, yet people still put both into the "birds" category; one level up, birds and horses can be grouped into the broader "animals" layer. When people see something new, they tend to ask what it resembles among things they have already seen, and which category it likely belongs to. People continuously learn new concepts and organize their experience, using these relationships to recognize new things and adapt to new situations.
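As a toy illustration of this kind of layered classification (the concepts and the superordinates() helper below are hypothetical, not from the paper), a hierarchy can be walked from a specific concept up to its broadest category:

```python
# Toy concept hierarchy: each specific concept points to its broader category.
# The concepts and the superordinates() helper are illustrative assumptions.
HIERARCHY = {
    "swan": "bird",
    "owl": "bird",
    "horse": "mammal",
    "bird": "animal",
    "mammal": "animal",
}

def superordinates(concept: str) -> list[str]:
    """Walk upward from a specific concept to every broader category above it."""
    chain = []
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        chain.append(concept)
    return chain

print(superordinates("swan"))   # ['bird', 'animal']
print(superordinates("horse"))  # ['mammal', 'animal']
```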
Models also classify, but they form these groupings differently. They rely mainly on patterns that recur in large-scale data: the more often a specific object appears, the easier it is for the model to recognize. The next step up, to broader categories, is harder for the model, because it must capture commonalities across multiple objects and group those shared features into one class. Existing models still show clear weaknesses here: as the parameter count keeps increasing, performance on concrete concept tasks improves, while performance on abstract concept tasks sometimes declines.
The human brain and models share one trait: both can form a set of classification relationships internally. But the emphasis differs. The brain's higher-level visual areas naturally distinguish broad categories such as living things and non-living things. Models can separate specific objects, but they struggle to form these larger categories stably. This difference makes the human brain better at applying old experience to new objects, so when we face something unfamiliar, we can categorize it quickly. Models depend more on existing knowledge, so when they encounter a new object they are more likely to get stuck on superficial features. The method proposed in the paper is built around this characteristic: using brain signals to constrain the model's internal structure, making it closer to the human brain's way of classifying.
The solution from the Zhejiang University team
The team's solution is also distinctive. Rather than simply stacking more parameters, it uses a small amount of brain-signal data as supervision. These signals come from recordings of brain activity while people look at pictures. In the paper's own words, the goal is to "transfer human conceptual structures to DNNs". In other words, it tries to teach the model how the human brain categorizes, how it generalizes, and how it groups similar concepts together.
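The paper's exact objective is not spelled out here, but a minimal sketch of such brain-signal supervision, assuming paired brain responses per image and an RSA-style alignment term (both assumptions), might look like this:

```python
# Minimal sketch of brain-signal supervision. Assumptions: each image batch
# comes with recorded brain responses, the alignment term is RSA-style, and
# the 0.5 weighting is arbitrary; the paper may use a different objective.
import torch
import torch.nn.functional as F

def rdm(x: torch.Tensor) -> torch.Tensor:
    """Representational dissimilarity matrix: pairwise cosine distances."""
    x = F.normalize(x, dim=1)
    return 1.0 - x @ x.T

def alignment_loss(model_emb: torch.Tensor, brain_resp: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between the model's and the brain's similarity structure."""
    return F.mse_loss(rdm(model_emb), rdm(brain_resp))

def total_loss(logits, labels, model_emb, brain_resp, align_weight=0.5):
    """Standard classification loss plus the brain-alignment term."""
    return F.cross_entropy(logits, labels) + align_weight * alignment_loss(model_emb, brain_resp)
```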
The team ran experiments with 150 known training categories and 50 unseen test categories. The results show that as this training progresses, the distance between the model's representations and the brain's representations keeps shrinking, for both the seen and the unseen categories. This indicates the model is not memorizing individual samples but genuinely beginning to learn a way of organizing concepts that is closer to the human brain's.
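One plausible way to quantify that shrinking distance (a sketch under assumptions; the paper may measure it differently) is to correlate the model's and the brain's dissimilarity matrices at each checkpoint, separately for seen and unseen categories:

```python
# Sketch: score how similar the model's representational geometry is to the
# brain's. Higher correlation means smaller distance. The seen/unseen split
# and checkpoint loop are assumptions about the protocol.
import torch
import torch.nn.functional as F

def rdm(x: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-distance matrix over a batch of embeddings."""
    x = F.normalize(x, dim=1)
    return 1.0 - x @ x.T

def rdm_similarity(model_emb: torch.Tensor, brain_resp: torch.Tensor) -> float:
    """Pearson correlation between the upper triangles of the two RDMs."""
    n = model_emb.shape[0]
    iu = torch.triu_indices(n, n, offset=1)
    a = rdm(model_emb)[iu[0], iu[1]]
    b = rdm(brain_resp)[iu[0], iu[1]]
    return float(torch.corrcoef(torch.stack([a, b]))[0, 1])

# Track alignment over training, per category split (hypothetical helpers):
# for ckpt in checkpoints:
#     print(rdm_similarity(emb_seen(ckpt), brain_seen),
#           rdm_similarity(emb_unseen(ckpt), brain_unseen))
```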
After this training, the model learns better from small amounts of data and copes better with new situations. In a task where it was given only a handful of examples yet had to distinguish abstract concepts such as living versus non-living things, the model improved by 20.5% on average and even outperformed control models with far larger parameter counts. The team also ran 31 additional groups of dedicated tests, in which several models likewise showed improvements of nearly 10%.
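A standard way to run this kind of few-shot test (a sketch of one common protocol, not necessarily the paper's) is a nearest-prototype classifier over frozen embeddings: average the few labeled example embeddings per class, then assign each query to the closest prototype:

```python
# Sketch of a few-shot evaluation. Assumption: nearest-prototype protocol over
# frozen embeddings, e.g. class 0 = living, class 1 = non-living.
import torch
import torch.nn.functional as F

def few_shot_predict(support: torch.Tensor, support_labels: torch.Tensor,
                     queries: torch.Tensor, num_classes: int) -> torch.Tensor:
    """support: (k * num_classes, d) embeddings of the few labeled examples;
    queries: (m, d) embeddings to classify. Returns (m,) predicted labels."""
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])
    sims = F.normalize(queries, dim=1) @ F.normalize(prototypes, dim=1).T
    return sims.argmax(dim=1)

# accuracy = (few_shot_predict(s_emb, s_lbl, q_emb, 2) == q_lbl).float().mean()
```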
For years, the path the model industry has known best is larger scale. The Zhejiang University team chose another direction: from "bigger is better" to "structured is smarter". Scaling up does help, but mainly on familiar tasks; human-like abstract understanding and transfer ability are just as crucial for AI. Going forward, the way AI structures its thinking needs to move closer to the human brain's. The value of this direction lies in pulling industry attention away from plain scale expansion and back to cognitive structure itself.
Neosoul and the future
This opens up a bigger possibility: AI’s evolution may not only happen during the model training stage. Model training can determine how AI organizes concepts and forms higher-quality judgment structures. After entering the real world, the next layer of AI evolution has only just begun: how an AI agent’s judgments are recorded, how they are tested, and how it continually grows and evolves through real-world competition—self-learning and self-evolving as humans do. This is exactly what Neosoul is doing now. Neosoul doesn’t just make AI agents produce answers; it places AI agents into a system of continuous prediction, continuous validation, continuous settlement, and continuous filtering—so that they constantly optimize themselves through prediction and outcomes, keeping better structures and eliminating worse ones. What the Zhejiang University team and Neosoul are pointing to, in fact, is the same goal: to make AI no longer only good at answering questions, but also capable of comprehensive thinking—continuously evolving.