People often say that machines are thinking, but it is not that simple. The key lies not in the AI itself but in the entire ecosystem around it. The prompts you give it, the context you supply, and the scenarios in which you use it: these human-constructed environments are what actually determine the LLM's final output. In other words, it is the framework we build, and the interpretive space surrounding it, that drive the model's "thinking." The machine is just performing on this stage.
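To make the point concrete, here is a minimal sketch (all names hypothetical, not any real library's API) of the "stage" built around a model: the input an LLM actually receives is mostly human-constructed framing, with the user's question as only one small part.

```python
# Hypothetical illustration: the final prompt an LLM sees is assembled by
# human-built code (role framing, retrieved context, output scaffolding),
# not by the model itself.

def build_prompt(system_role: str, retrieved_context: list[str], question: str) -> str:
    """Assemble the full input string an LLM would actually receive."""
    # Human-curated context snippets, formatted as a bullet list
    context_block = "\n".join(f"- {snippet}" for snippet in retrieved_context)
    return (
        f"System: {system_role}\n"
        f"Context:\n{context_block}\n"
        f"User: {question}\n"
        "Assistant:"
    )

prompt = build_prompt(
    system_role="You are a cautious financial analyst.",
    retrieved_context=[
        "BTC closed at $43,000 yesterday.",
        "Volume was below the 30-day average.",
    ],
    question="Is the market bullish?",
)

# The user's question is a small fraction of what the model sees; the rest
# is the frame that steers its "thinking."
print(prompt)
```

Swap the system role or the retrieved snippets and the same question yields a very different answer, which is the post's point: the stage, not the actor, shapes the performance.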
StablecoinAnxiety
· 01-15 15:16
That's right, the framework determines everything. We feed it data and build the stage, then act surprised when the machine turns out "smart". Hilarious.
Prompt engineering is the real alchemy, don't you see?
BoredStaker
· 01-15 15:05
That's right, prompt engineering is the real cutting-edge technology, while the model itself is actually just a puppet.
nft_widow
· 01-15 14:51
Interesting point, but I think this framing still oversimplifies the problem. No matter how clever the prompts are, garbage data still produces garbage output. What truly determines everything is the training pipeline: what exactly are we feeding the model?