Building open AI infrastructure: Inside Gonka’s vision for decentralized compute

Gonka aims to decentralize AI compute, giving developers and hardware providers predictable, verifiable access while challenging the dominance of centralized cloud giants.

As control over AI compute becomes increasingly concentrated among a handful of cloud providers and hardware giants, Gonka has emerged as a new Layer-1 network focused on decentralized, high-efficiency AI infrastructure. The project’s founder believes that by treating compute as open, verifiable infrastructure rather than a gated service, Gonka can unlock global access to AI resources and challenge the structural limitations of today’s centralized systems.

1. What is Gonka, and what problem does it solve?

Gonka is a Layer-1 decentralized network for high-efficiency AI compute, designed to address a structural problem beneath today’s AI boom: how compute for AI is produced, allocated, and incentivized.

Today, the main bottleneck in AI is no longer models; it is control over compute. Advanced GPUs are highly concentrated among a small number of hardware manufacturers and hyperscale cloud providers, making AI compute expensive, opaque, and increasingly constrained by geography and politics. The U.S. and China are rapidly consolidating control over chips, energy, and data-center capacity, placing much of the rest of the world in a dependent position and limiting its ability to compete in the AI economy.

This affects both startups and entire regions. Developers face pricing volatility, capacity shortages, and vendor lock-in, while many countries risk falling behind due to restricted access to foundational AI infrastructure.

Gonka rethinks this at the protocol level. Instead of treating compute as a gated service controlled by centralized providers, we took inspiration from systems that have already proven it’s possible to coordinate large-scale physical infrastructure through open incentives. Just as Bitcoin did for hardware and energy, Gonka applies similar open-incentive principles to AI compute, not at the application layer but at the level of the protocol itself.

Using a Transformer-based Proof-of-Work mechanism, the network directs nearly all available GPU power toward meaningful AI workloads. Today, this work is primarily AI inference, with training planned for the future. Hosts earn rewards based on verified computational contribution rather than capital allocation or speculative mechanics. And unlike many decentralized systems, compute is not burned on abstract security tasks or duplicated consensus work; it is used productively.

For developers, this provides predictable access to AI compute without reliance on closed APIs or a single cloud provider. More broadly, Gonka treats AI compute as foundational infrastructure that is efficient, verifiable, and globally distributed, rather than a resource controlled by a few gatekeepers.

2. How does Gonka’s Proof-of-Work model differ from other decentralized AI projects like Bittensor?

The main difference lies in what each network defines as “work” and how value is created around it.

Many decentralized AI projects, including Bittensor, focus on coordination at the model or network level. Their incentives are often shaped by staking, delegation, or peer-based evaluation systems, where rewards and influence are not always directly proportional to raw computational contribution. This approach can be effective for certain coordination problems, but it doesn’t necessarily optimize for large-scale, efficient AI compute infrastructure.

Gonka takes a different path. It is designed as a compute-first network, where “work” is defined as verifiable AI computation. Its Proof-of-Work is a Transformer-based mechanism that measures real GPU work rather than capital allocation or speculative participation. Voting power and rewards are tied directly to verified computational contribution, aligning incentives with actual infrastructure performance.
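
As a rough sketch of that difference (hypothetical names and figures, not Gonka’s actual protocol code), a compute-first design derives a Host’s reward share and voting power purely from verified computation, so capital holdings never enter the calculation:

```python
# Minimal sketch: weights in a compute-first network are a pure function of
# verified computation. All names and figures here are illustrative.

def compute_weights(verified_work: dict[str, float]) -> dict[str, float]:
    """Map each Host's verified compute (e.g., verified GPU-work units)
    to its share of rewards and voting power."""
    total = sum(verified_work.values())
    return {host: work / total for host, work in verified_work.items()}

# Example epoch: shares track verified work, not staked or held capital.
print(compute_weights({"host_a": 8.0, "host_b": 1.5, "host_c": 0.5}))
# {'host_a': 0.8, 'host_b': 0.15, 'host_c': 0.05}
```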

Another key distinction is efficiency. In many decentralized systems, a significant portion of available compute is consumed by consensus, validation, or duplicated work that has little value outside the network. For example, in systems like Bittensor, around 60% of rewards are allocated to staking, which, while necessary for network security, does not contribute to AI computation. Gonka’s Sprint-based design minimizes this overhead, allowing nearly all available GPU resources to be directed toward meaningful AI workloads, primarily inference.

In simple terms, projects like Bittensor focus on coordinating intelligence. Gonka focuses on building the economic and infrastructural foundation for AI compute itself. These approaches operate at different layers of the stack, and Gonka’s model is intentionally optimized for hardware providers and real-world AI workloads.

3. Why did Gonka choose to focus on AI inference rather than training?

Gonka is built as a compute-first network, and that perspective naturally shapes where we chose to start.

The decision to focus on inference first was a matter of sequencing, not limitation. Inference is where the majority of real-world AI usage happens today, and it’s also where the infrastructure bottlenecks are most visible. As AI systems move from experimentation into production, continuous inference becomes expensive, capacity-constrained, and tightly controlled by centralized providers.

From a network design perspective, inference is also the right place to begin. It allows us to validate the core principles of Gonka – verifiable compute, efficient resource allocation, and incentive alignment – under real production workloads. Inference workloads are continuous, measurable, and well-suited to a decentralized environment where hardware utilization and efficiency matter.

Training, especially at larger scales, is a different class of problem with its own coordination dynamics and execution characteristics. Our focus is on building infrastructure that works under real demand first, and inference is where that demand already exists today. But Gonka does plan to introduce training in the future, and the network dedicates 20% of all inference revenue to support future model training.
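
As a simple illustration of the stated split, the sketch below models only the one figure given in this interview, that 20% of inference revenue is set aside for future training; how the remaining 80% is distributed is not detailed here, so the accounting is otherwise assumed:

```python
# Illustrative accounting for the stated rule: 20% of all inference revenue
# is dedicated to future model training. Everything else here is assumed.
TRAINING_SHARE = 0.20

def split_inference_revenue(revenue: float) -> tuple[float, float]:
    """Return (training_pool, remainder) for a given amount of revenue."""
    training_pool = revenue * TRAINING_SHARE
    return training_pool, revenue - training_pool

print(split_inference_revenue(1_000.0))  # (200.0, 800.0)
```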

4. How does Gonka verify that miners are actually performing the AI inference work they claim to have completed?

Verification in Gonka is built directly into how the network measures and values compute.

Inference tasks are executed during short, time-bound periods called Sprints. In each Sprint, Hosts are asked to run inference on large Transformer models that are randomly initialized for each cycle. Because these tasks are computationally intensive and change continuously, they cannot be precomputed, simulated, or reused from previous runs. The only practical way to produce valid outputs is to perform the real computation.
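
A minimal sketch of that property, assuming a PyTorch-style setup (this is not Gonka’s implementation, and the seeds, dimensions, and names are hypothetical): because the model weights are derived from a seed that only becomes known when the Sprint begins, a valid output can only be produced by actually running the forward pass, and any checker with the same seed and input can reproduce it exactly.

```python
# Sketch: per-Sprint randomly initialized Transformer inference.
# Not Gonka's code; dimensions, seeds, and names are illustrative.
import torch
import torch.nn as nn

def sprint_model(sprint_seed: int) -> nn.TransformerEncoder:
    torch.manual_seed(sprint_seed)  # seed is unknown before the Sprint starts
    layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=4)

def run_task(sprint_seed: int, challenge: torch.Tensor) -> torch.Tensor:
    model = sprint_model(sprint_seed).eval()
    with torch.no_grad():
        return model(challenge)  # no shortcut: the output requires real compute

# A Host's output is deterministically reproducible by any checker
# that re-runs the same seed and challenge.
challenge = torch.randn(1, 16, 256)
assert torch.allclose(run_task(42, challenge), run_task(42, challenge))
```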

The network checks results by validating that the outputs match what would be expected from actually running the model.

To keep the system efficient, Gonka does not recheck every single computation. Instead, it verifies a portion of the results on an ongoing basis and increases checks for participants suspected of fabricating results. Part of each Host’s reward consists of fees for useful work, and those fees are withheld if the work fails validation. This approach keeps overhead low while ensuring that submitting incorrect or fabricated results is not worthwhile.
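
A toy version of that sampling logic might look like the following; the base and escalated rates, and the flagging rule, are assumptions for illustration, since the interview only states that a portion of results is rechecked, that checks increase for suspected Hosts, and that fees are withheld when work fails validation.

```python
# Toy spot-check verifier. Rates and names are hypothetical.
import random

BASE_RATE = 0.05     # assumed: fraction of results rechecked by default
SUSPECT_RATE = 0.50  # assumed: escalated rate for previously flagged Hosts

def should_recheck(host: str, flagged: set[str]) -> bool:
    """Sample results for re-verification, checking suspects more often."""
    rate = SUSPECT_RATE if host in flagged else BASE_RATE
    return random.random() < rate

def settle_fee(passed_validation: bool, fee: float) -> float:
    """Fees for useful work are paid only if the work passes validation."""
    return fee if passed_validation else 0.0

flagged: set[str] = {"host_x"}
print(should_recheck("host_x", flagged))  # rechecked ~50% of the time
print(settle_fee(False, 10.0))            # 0.0: fabricated work earns nothing
```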

Over time, Hosts that consistently submit correct results are recognized as reliable contributors and gain greater participation in the network. This same principle, rewarding proven, real computation, underlies both incentives and influence in Gonka.

5. OpenAI, Google, and Microsoft control massive computing infrastructure with established customer bases. What makes Gonka competitive against these incumbents?

The challenge isn’t the technology itself, but how access to compute is controlled.

We don’t see Gonka as competing with companies like OpenAI, Google, or Microsoft in the traditional sense. They build and operate some of the most advanced centralized AI stacks in the world, and those systems will continue to play a major role.

The difference lies in the layer of the stack we’re addressing. Centralized providers control massive infrastructure, but that control comes with trade-offs. Access to compute is gated, pricing is opaque, and capacity is shaped by internal priorities. For many developers and regions, this results in volatility, lock-in, and limited long-term predictability.

Gonka is designed as an open infrastructure rather than a service. Compute is supplied by a decentralized network of Hosts, and availability is shaped by real computational supply and demand. Incentives are aligned at the network level, rewarding verified compute and encouraging continuous infrastructure optimization.

This makes Gonka competitive not by replacing incumbents, but by enabling use cases that are structurally underserved by centralized platforms: workloads that require openness, predictable access, and infrastructure-level transparency. By creating a market where hardware providers compete directly on performance and efficiency, Gonka also drives down the cost of AI compute, making it accessible to a much broader range of developers, startups, and regions.

6. Since launching in August 2025, Gonka has grown to 2,200 developers and 12,000 GPU-equivalent capacity. What’s driving this adoption?

What’s driving this adoption isn’t short-term hype; it’s structural alignment.

On the supply side, Hosts are looking for alternatives to centralized models that underutilize their hardware. On the demand side, developers face pricing volatility, capacity constraints, and closed APIs from centralized providers. As AI workloads move into production, predictability and access become just as important as raw performance.

As more Hosts join, either independently or through pools (which is a larger topic on its own), the network becomes more useful for developers. As more workloads come online, this creates sustained demand that further attracts infrastructure. This feedback loop has been the primary driver of adoption.

The pace of adoption reflects that Gonka’s incentives are aligned with real-world needs on both sides of the market. Hosts are rewarded for useful compute, developers gain reliable access to inference capacity, and the network scales organically as those interests reinforce each other.

Much of this coordination happens openly within the Gonka community, including ongoing discussions in the Gonka Discord.

7. Gonka recently secured a $50 million investment from Bitfury while maintaining a decentralized governance model. How does Gonka balance institutional capital with its decentralization vision?

The key point is that Gonka is decentralized by design at the protocol level, not just in narrative. Governance in the network is tied to real, verifiable computational contribution rather than to capital ownership.

Recent support from an institutional partner like Bitfury does not translate into control over the network. Their involvement reflects deep experience in building large-scale compute infrastructure, but it does not grant special privileges within the protocol. In Gonka, funding itself does not convert into influence. Decisions about investments are made by the Gonka community, which voted to sell GNK from the community pool to Bitfury.

In practice, voting power and participation in network decisions are determined by how much verified AI compute a participant actually contributes. Influence grows through real work: connected GPUs, sustained performance, and proven contribution to AI workloads. It cannot be bought or acquired through financial investment alone; it must be earned by operating infrastructure. This applies equally to individuals, large operators, and institutional participants.
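
In toy form (a hypothetical structure, not Gonka’s actual governance code), the separation looks like this: token holdings never appear in the weight calculation, so capital alone yields zero influence.

```python
# Toy illustration: influence derives only from verified compute;
# token balances are deliberately absent from the weight formula.
from dataclasses import dataclass

@dataclass
class Participant:
    verified_compute: float  # e.g., verified GPU-work this epoch
    gnk_balance: float       # holdings; never consulted below

def voting_weight(p: Participant, total_compute: float) -> float:
    return p.verified_compute / total_compute  # gnk_balance plays no role

pure_investor = Participant(verified_compute=0.0, gnk_balance=1_000_000.0)
operator = Participant(verified_compute=500.0, gnk_balance=0.0)
total = pure_investor.verified_compute + operator.verified_compute
print(voting_weight(pure_investor, total))  # 0.0: capital alone buys no votes
print(voting_weight(operator, total))       # 1.0: influence is earned by work
```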

This separation is intentional. Institutional capital can accelerate early development, research, and ecosystem growth, but decentralization is enforced by the network’s incentive and governance mechanics. No participant, institutional or otherwise, can gain dominant control without contributing a proportional share of verified compute.

This approach allows Gonka to work with experienced infrastructure partners while preserving its core principle: the network is governed by those who power it, not by those who finance it.

8. If AI inference becomes commodified, value typically flows to those controlling the models, not the infrastructure. How does Gonka capture sustainable long-term value?

That pattern holds primarily in closed ecosystems, where the same few companies control models, infrastructure, and access. In those systems, value concentrates not only in control, but also in margins, and participation in the upside is limited to a narrow set of corporate shareholders.

Today, people can pay OpenAI, Anthropic, or other providers to use AI, but they cannot meaningfully participate in the economics of AI compute itself. There is no way to directly engage with or benefit from the compute layer behind these systems. Public companies like Nvidia, Meta, or Google offer exposure to AI only as part of much broader businesses, not as direct participation in AI compute as a standalone economic layer. As a result, one of the fastest-growing parts of the AI economy remains largely closed.

At the same time, while inference may commodify at the surface level, compute does not. Compute is constrained by hardware availability, energy access, geography, and coordination. As inference demand scales globally, the bottleneck increasingly shifts away from models and toward access to reliable, cost-efficient compute at scale, and that bottleneck becomes structurally valuable.

This has broader economic implications. When access to compute is concentrated, entire regions are pushed into a dependent position, limiting local innovation, productivity growth, and participation in the AI economy. Countries without privileged access to hyperscale clouds or advanced GPUs are forced to consume AI as a service, rather than building with it or contributing to its underlying infrastructure.

Gonka is built around that bottleneck at the protocol level. Instead of owning models or extracting rents, the network coordinates how compute is produced, verified, and allocated through open, permissionless rules. GNK represents direct participation in the economy of AI compute itself: not equity in a company, but access and influence tied to real, verifiable contribution.

This model also changes who can participate. Hardware owners, from large operators to smaller GPU holders, can contribute directly to AI workloads and earn based on verified computation, either independently or through pools. Developers gain access to predictable, transparent compute without being locked into a single provider or opaque pricing models.

More broadly, we see two possible futures emerging: one where most AI capacity is owned and controlled by a small number of corporations and states, and another where open networks allow compute to be coordinated globally, with value flowing to those who actually contribute resources. Gonka is built for the second path.

It’s also important not to overlook the role of open-source models. From the very beginning, they have been a core driver of innovation in AI, especially among developers and startups. We believe networks like Gonka naturally support the development and adoption of open models by providing accessible, verifiable compute, allowing intelligence to remain open, competitive, and not locked behind proprietary infrastructure.

9. What specific experience in the AI industry led the founders to believe decentralized infrastructure was necessary?

Our conviction didn’t come from theory; it came from years of working with distributed computation and from building AI systems inside centralized environments at scale.

At Snap and later through Product Science, we worked on production AI systems where access to compute directly determined what could be built and deployed. We saw how infrastructure decisions are made once AI becomes commercially critical, and how tightly controlled those decisions become.

What stood out most was how concentrated the AI compute market really is. A small number of corporations control access to advanced GPUs, set pricing, define capacity limits, and decide which use cases are viable. This concentration doesn’t just shape markets; it shapes power. Control over compute increasingly determines who can participate in AI innovation at all.

We also saw how this concentration extends beyond economics into geography and sovereignty. Access to compute is becoming regionally constrained, influenced by energy availability, export controls, and national infrastructure strategies. In practice, this puts entire regions in a structurally dependent position, limiting their ability to build competitive AI ecosystems.

At the same time, we had seen decentralized systems successfully coordinate physical infrastructure at a global scale. Bitcoin was a clear example, not as a financial asset, but as a protocol that aligned incentives around real-world hardware and energy. That contrast made the problem obvious.

Gonka emerged from that realization: if AI compute is becoming foundational infrastructure, it needs a coordination model that is open, permissionless, and resilient, not one controlled by a handful of actors.

10. What needs to happen for Gonka to succeed in a competitive landscape where tech giants continuously upgrade their own AI infrastructure and capabilities?

Gonka doesn’t need to outbuild or outspend technology giants to succeed. It needs to remain focused on a different layer of the stack, one that centralized players are structurally less equipped to address.

Large technology companies will continue to build powerful AI infrastructure. Their systems are optimized for closed ecosystems, internal priorities, and centralized control. That model can be very efficient, but it also concentrates access, pricing power, and decision-making.

For Gonka to succeed, the network must consistently deliver infrastructure-level efficiency, ensuring that most compute is directed toward real AI workloads rather than protocol overhead. Incentives must remain tightly coupled to verified computational work, so rewards and influence scale with real contribution, not capital or speculation.

Just as importantly, Gonka must preserve an open, permissionless architecture with transparent protocol-level rules. Compute for AI is increasingly becoming foundational infrastructure, similar to electricity in the industrial era or the internet in its early days. In those moments, the defining question wasn’t which company had the best product, but who had access to the underlying grid, and under what conditions.

Technology giants will continue to exist and play an important role. Gonka succeeds if it becomes a complementary infrastructure layer, one that constrains excessive centralization, expands global access, and allows AI innovation to grow in a more open and decentralized economic environment.
