Grok just tightened up its image manipulation game. xAI's rolled out new restrictions preventing the AI tool from editing photos of real people into revealing outfits like bikinis—basically shutting down what became a pretty messy content problem. The company made the call within hours of mounting pressure over users exploiting the feature to generate explicit imagery. It's a classic move: AI platform catches heat, implements guardrails, tries to restore trust. The decision underscores the ongoing tension between creative freedom and responsible AI deployment—something the entire industry's wrestling with as these tools become more powerful and accessible.
GasOptimizer
· 5h ago
Patched the vulnerability within a few hours... the response time is acceptable, but the cost calculus is off. Do the math: reputation loss versus repair cost, and clearly the former carries more weight. A common problem across the industry.
Ser_APY_2000
· 5h ago
Haha, here we go again. This is a common problem with AI platforms... first they let users do whatever they want, then shut it down three seconds after something goes wrong. I can't take it anymore.
WenMoon42
· 5h ago
Haha, this vulnerability is finally patched. This should have been done long ago.
AltcoinHunter
· 5h ago
Another good show: xAI was forced to cut its losses in a hurry... Where is the promised creative freedom? In the end, they still bowed to public opinion.