Windsurf trained a specialized small bug-detection model using RL, and in internal evaluations it has matched Claude Opus 4.6.
BlockBeats report, April 15 (UTC+8). According to BlockBeats monitoring, Cognition AI, the parent company of the AI programming tool Windsurf, has partnered with the AI training company Applied Compute to train SWE-Check, a model specialized for code bug detection, using reinforcement learning. The model analyzes the user's current code changes (the diff), automatically flags potential bugs, and suggests fixes.
In in-distribution evaluations, where the test data follows the same distribution as the training data, SWE-Check's F1 score matches Claude Opus 4.6 (the gap has narrowed from 0.09 to 0). In out-of-distribution evaluations, the gap has shrunk from 0.49 to 0.29: still behind leading models, but clear progress.
Its key advantages are speed and cost: SWE-Check runs an order of magnitude faster than state-of-the-art models, with correspondingly lower inference costs. This makes instant, free bug detection directly inside the IDE practical, which direct calls to large models such as Opus 4.6 cannot deliver.
Two training design choices are especially worth noting:
Reward linearization: The team aims to optimize the global F-beta metric, but this metric cannot be directly decomposed into individual samples. They convert the global metric into a per-sample computable reward function using a first-order approximation, allowing training to effectively climb the global metric. In early versions, the false positive rate was too high, so the team adjusted beta from 1 to 0.5 to emphasize precision.
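The linearization step can be sketched as follows. This is an illustration of the general technique, not Windsurf's actual code: each sample's reward is the marginal change in the batch-level F-beta if that sample were added, which is a first-order approximation of the non-decomposable global metric. All counts and function names here are hypothetical.

```python
def f_beta(tp, fp, fn, beta):
    """Global F-beta over batch counts of true positives, false
    positives, and false negatives; beta < 1 weights precision
    over recall."""
    b2 = beta * beta
    denom = (1 + b2) * tp + b2 * fn + fp
    return (1 + b2) * tp / denom if denom else 0.0

def per_sample_rewards(tp, fp, fn, beta):
    """First-order approximation: the reward for a sample of each
    outcome type is the change in global F-beta from adding one more
    such sample to the current batch statistics, turning a global
    metric into a per-sample training signal."""
    base = f_beta(tp, fp, fn, beta)
    return {
        "true_positive":  f_beta(tp + 1, fp, fn, beta) - base,
        "false_positive": f_beta(tp, fp + 1, fn, beta) - base,
        "false_negative": f_beta(tp, fp, fn + 1, beta) - base,
    }

# Hypothetical batch: 50 true positives, 20 false positives, 30 misses.
rewards = per_sample_rewards(tp=50, fp=20, fn=30, beta=0.5)
```

At beta = 1 the false-positive and false-negative penalties are symmetric; lowering beta to 0.5 makes the false-positive penalty larger in magnitude than the false-negative one, which matches the team's fix for an excessive false positive rate.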
Two-stage post-training: In the first stage, the model purely maximizes bug-detection capability without penalizing latency. In the second stage, latency penalties are introduced based on the real statistical distribution of how long users take to switch away after triggering detection. This staged approach outperforms optimizing both objectives at the same time, because simultaneous optimization can easily fall into local optima—for example, learning to be very fast but with shallow analysis.
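A toy version of the two-stage reward, with a hypothetical exponential model of user switch-away times standing in for the real empirical distribution the article describes:

```python
import math

def p_user_still_waiting(latency_s, mean_wait_s=4.0):
    """Hypothetical stand-in for the empirical switch-away
    distribution: model the time until a user switches away as
    exponential, so the survival probability decays with latency."""
    return math.exp(-latency_s / mean_wait_s)

def reward(correct, latency_s, stage):
    """Stage 1 optimizes detection quality alone; stage 2 discounts
    the quality reward by the chance the user was still waiting when
    the result arrived, introducing the latency penalty only after
    the model can already find bugs."""
    quality = 1.0 if correct else 0.0
    if stage == 1:
        return quality
    return quality * p_user_still_waiting(latency_s)
```

Training quality first and adding the latency discount second avoids the local optimum the article mentions, where jointly optimizing both from the start can yield a model that is very fast but analyzes shallowly.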
A preview version of SWE-Check has been launched in Windsurf Next (shortcut: cmd+U). It will later be rolled into the official Windsurf release.
(Source: BlockBeats)