AI DePIN Network: The Future of Decentralized GPU Computing
Since 2023, AI and DePIN have become popular trends in the Web3 field, with market values reaching $30 billion and $23 billion respectively. This article focuses on the intersection of the two and explores the development of related protocols.
In the AI technology stack, DePIN networks empower AI by supplying computing resources. The GPU shortage driven by large tech companies' demand has made it difficult for other AI developers to obtain sufficient GPU compute. The traditional fallback is a centralized cloud provider, but that usually means signing inflexible long-term contracts, which is inefficient.
DePIN provides a more flexible and cost-effective alternative by incentivizing resource contributions that align with network goals through tokens. In the AI sector, DePIN integrates individual GPU resources into data centers, offering users a unified supply. This not only provides developers with customized and on-demand access but also creates additional revenue for GPU owners.
There are many AI DePIN networks in the market. This article will explore the roles, goals, and highlights of each protocol, as well as the differences between them.
Overview of AI DePIN Network
Render
Render is a pioneer in the P2P GPU computing network, initially focusing on graphic rendering and later expanding to AI computing tasks.
Akash
Akash is positioned as a "super cloud" platform that supports storage, GPU, and CPU computing, serving as an alternative to traditional platforms like AWS. With container platforms and Kubernetes-managed compute nodes, any cloud-native application can be seamlessly deployed.
io.net
io.net provides dedicated access to distributed GPU cloud clusters for AI and ML, aggregating GPU resources from data centers, miners, and others.
Gensyn
Gensyn provides GPU computing power focused on machine learning and deep learning. It achieves a more efficient verification mechanism through techniques such as proof of learning.
Aethir
Aethir specializes in deploying enterprise-level GPUs, focusing on computation-intensive fields such as AI, ML, and cloud gaming. Containers in the network serve as virtual endpoints for executing cloud applications, delivering a low-latency experience.
Phala Network
Phala Network serves as the execution layer for Web3 AI solutions, utilizing trusted execution environments (TEE) to address privacy issues. This allows AI agents to be controlled by on-chain smart contracts.
Project Comparison
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|--------|--------|-------|--------|--------|--------|-------|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics Rendering and AI | Cloud Computing, Rendering and AI | AI | AI | AI, Cloud Gaming and Telecommunications | On-Chain AI Execution |
| AI Task Type | Inference | Both | Both | Training | Training | Execution |
| Work Pricing | Performance-Based Pricing | Reverse Auction | Market Pricing | Market Pricing | Bidding System | Stake-Based |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & Hashing | mTLS Authentication | Data Encryption | Secure Mapping | Encryption | TEE |
| Work Fees | 0.5–5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve fee | Low fees | 20% per session | Proportional to staked amount |
| Security | Proof of Render | Proof of Stake | Proof of Computation | Proof of Stake | Proof of Rendering Capacity | Inherited from relay chain |
| Completion Proof | - | - | Time-Lock Proof | Proof of Learning | Proof of Rendered Work | TEE Proof |
| Quality Assurance | Dispute Resolution | - | - | Verifiers & Whistleblowers | Checker Nodes | Remote Attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |
Importance
Availability of clusters and parallel computing
Distributed computing frameworks implement GPU clustering, enabling more efficient training and better scalability. Training complex AI models requires powerful computing capabilities and often relies on distributed computing. For example, OpenAI's GPT-4 model has over 1.8 trillion parameters and was trained on roughly 25,000 Nvidia A100 GPUs across 128 clusters over a period of 3-4 months.
Most projects have now integrated clusters to achieve parallel computing. io.net collaborates with other projects to incorporate more GPUs into the network, having deployed over 3,800 clusters in the first quarter of 2024. Although Render does not support clustering, it breaks down a single frame into multiple nodes for simultaneous processing, working in a similar way. Phala currently only supports CPUs but allows CPU workers to be clustered.
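The core idea behind clustered parallel computing, sharding a batch across workers and averaging their gradients, can be sketched in a few lines. This is a minimal pure-Python illustration of synchronous data parallelism with a toy linear model; the worker count, learning rate, and data are illustrative, not any project's actual implementation:

```python
import random

def worker_gradient(w, shard):
    """Mean-squared-error gradient computed on one worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.05):
    """One synchronous step: each worker computes a local gradient,
    then the results are combined by averaging (the "all-reduce")."""
    grads = [worker_gradient(w, s) for s in shards]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

random.seed(0)
true_w = 3.0
data = [(x, true_w * x) for x in (random.uniform(-1, 1) for _ in range(400))]
shards = [data[i::4] for i in range(4)]   # shard the batch across 4 workers
w = 0.0
for _ in range(300):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # → 3.0
```

Because each step averages gradients from all shards, the result matches single-machine training on the full batch while the per-worker compute shrinks, which is why clusters scale training of large models.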
Data Privacy
AI model development requires large datasets, which may contain sensitive information. Samsung disabled ChatGPT due to concerns about code leaks, and Microsoft's 38TB data leak incident highlights the importance of AI security measures. Therefore, various data privacy methods are crucial for protecting data control.
Most projects use some form of data encryption. Render uses encryption and hashing when publishing rendering results, io.net and Gensyn adopt data encryption, and Akash uses mTLS authentication to restrict data access.
io.net recently partnered with Mind Network to launch fully homomorphic encryption (FHE), allowing the processing of encrypted data without decryption, better protecting data privacy than existing encryption technologies.
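Production FHE requires specialized libraries, but the underlying idea of computing on ciphertexts can be illustrated with the simpler, additively homomorphic Paillier scheme. This is a toy sketch with deliberately tiny, insecure parameters, not io.net's or Mind Network's actual construction:

```python
import math, random

# Toy Paillier keypair: small primes for illustration only -- NOT secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add_encrypted(c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    return (c1 * c2) % n2

a, b = encrypt(20), encrypt(22)
print(decrypt(add_encrypted(a, b)))  # → 42
```

The server holding `a` and `b` never sees 20 or 22, yet the decrypted result is their sum; full FHE extends this to arbitrary computation on encrypted data.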
Phala Network introduces a trusted execution environment (TEE) to prevent external processes from accessing or modifying data. It also incorporates zk-proofs via the zkDCAP validator and the jtee CLI to integrate the RiscZero zkVM.
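The trust model behind TEE remote attestation can be sketched as follows: the enclave reports a measurement (a hash of its code) signed with a key the verifier trusts, and the verifier compares it to a known-good value. The key, function names, and HMAC signing here are hypothetical simplifications; real schemes such as Intel DCAP use hardware-rooted certificate chains:

```python
import hashlib, hmac

TRUSTED_ATTESTATION_KEY = b"hardware-vendor-root-key"   # placeholder

def enclave_quote(code: bytes) -> dict:
    """Simulated quote: hash of the enclave's code, signed by the vendor key."""
    measurement = hashlib.sha256(code).hexdigest()
    sig = hmac.new(TRUSTED_ATTESTATION_KEY, measurement.encode(),
                   hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": sig}

def verify_quote(quote: dict, expected_code: bytes) -> bool:
    """Accept only if the signature is valid AND the measurement matches
    the code the verifier expects to be running."""
    expected = hashlib.sha256(expected_code).hexdigest()
    sig_ok = hmac.compare_digest(
        quote["signature"],
        hmac.new(TRUSTED_ATTESTATION_KEY, quote["measurement"].encode(),
                 hashlib.sha256).hexdigest())
    return sig_ok and quote["measurement"] == expected

agent_code = b"def run_agent(): ..."
quote = enclave_quote(agent_code)
print(verify_quote(quote, agent_code))          # → True
print(verify_quote(quote, b"tampered code"))    # → False
```

This is what lets an on-chain contract trust that an off-chain AI agent ran unmodified code.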
Proof of Completion and Quality Checks
Because these services range widely, from rendering to AI computation, the final output quality may not always meet user standards. Completion proofs and quality checks therefore protect users.
Gensyn and Aethir generate proofs of completed work, while io.net's proof indicates that the leased GPU's performance is fully utilized. Gensyn and Aethir both run quality checks: Gensyn uses verifiers and whistleblowers, while Aethir uses checker nodes. Render relies on a dispute-resolution process. Phala generates TEE proofs to ensure that AI agents perform the required operations.
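One common pattern behind such checker/verifier nodes is redundant verification: several nodes recompute (a digest of) the result, and the majority answer decides whether the worker is honest. A hypothetical sketch of that idea, not any specific protocol's mechanism:

```python
from collections import Counter
import hashlib

def result_digest(result: bytes) -> str:
    return hashlib.sha256(result).hexdigest()

def majority_check(worker_result: bytes, checker_results: list) -> bool:
    """Accept the worker's output only if it matches the digest reported
    by a strict majority of checker nodes."""
    digests = Counter(result_digest(r) for r in checker_results)
    consensus, votes = digests.most_common(1)[0]
    if votes <= len(checker_results) // 2:
        return False   # no majority among checkers
    return result_digest(worker_result) == consensus

honest = b"rendered-frame-0042"
print(majority_check(honest, [honest, honest, b"corrupted"]))  # → True
print(majority_check(b"corrupted", [honest, honest, honest]))  # → False
```

Comparing digests rather than full outputs keeps the on-chain footprint small; a dissenting worker can then be slashed or the job rerun.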
Hardware Statistics
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 Count | - | 157 | 2,330 | - | 2,000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (est.) | $0.33 (est.) | - |
High-performance GPU demand
AI model training tends to use high-performance GPUs such as Nvidia A100 and H100. The inference performance of H100 is 4 times that of A100, making it the preferred choice for large companies training LLMs.
Decentralized GPU marketplaces need to offer lower prices while meeting real demand. In 2023, Nvidia delivered over 500,000 H100 units to large tech companies, making it difficult for others to acquire equivalent hardware. Therefore, how much low-cost hardware these projects can bring onto their networks is crucial for expanding their customer base.
Akash has only over 150 H100 and A100, while io.net and Aethir each have over 2000. Pre-trained LLMs or generative models typically require clusters of 248 to over 2000 GPUs, so the latter two projects are more suitable for large model computations.
The cost of decentralized GPU services has fallen below that of centralized services. Gensyn and Aethir claim to offer A100-class hardware for rent at under $1 per hour, though these claims will take time to verify.
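To see how these hourly rates compound at cluster scale, here is a back-of-the-envelope comparison using the A100 prices quoted above (Akash $1.37, io.net $1.50, Gensyn ~$0.55, Aethir ~$0.33); the job size itself is a hypothetical example:

```python
# Hourly A100 rates taken from the hardware table above; Gensyn and
# Aethir figures are the projects' estimates.
A100_RATES = {"Akash": 1.37, "io.net": 1.50, "Gensyn": 0.55, "Aethir": 0.33}

def training_cost(gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Total cost of a job: GPU-hours times the per-GPU-hour rate."""
    return gpus * hours * rate_per_gpu_hour

# Example job: a 248-GPU cluster running for one week (168 hours).
for provider, rate in sorted(A100_RATES.items(), key=lambda kv: kv[1]):
    cost = training_cost(248, 168, rate)
    print(f"{provider:8s} ${cost:,.0f}")
```

At 41,664 GPU-hours, the gap between $0.33 and $1.50 per hour is the difference between roughly $14k and $62k for the same week-long job, which is why per-hour pricing dominates provider choice for large runs.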
Compared to GPUs connected via NVLink, the memory of GPU clusters connected over the network is limited. NVLink supports direct communication between GPUs, making it suitable for large parameters and large datasets in LLMs. Nevertheless, decentralized GPU networks still provide strong computing power and scalability for distributed computing tasks, creating opportunities for building more AI and ML use cases.
Availability of consumer-grade GPUs/CPUs
CPUs also play an important role in AI model training, handling everything from data preprocessing to memory management. Consumer-grade GPUs can be used to fine-tune pre-trained models or train small-scale models.
Considering that over 85% of consumers' GPU resources are idle, projects like Render, Akash, and io.net also serve this market. Providing these options allows them to develop niche markets, focusing on large-scale intensive computing, small-scale rendering, or a mix of both.
Conclusion
The AI DePIN field is still relatively emerging and faces challenges. For example, io.net was accused of falsifying the number of GPUs, but later resolved the issue by introducing a proof of work process.
Nevertheless, the number of tasks executed and the amount of hardware onboarded on these decentralized GPU networks have grown significantly, highlighting rising demand for alternatives to Web2 cloud providers' hardware. At the same time, the growth in hardware providers reflects previously underutilized supply. Together these trends suggest product-market fit for AI DePIN networks, which effectively address both demand and supply challenges.
Looking ahead, AI is set to develop into a thriving multi-trillion-dollar market. These decentralized GPU networks will play a key role in providing developers with cost-effective computing alternatives. By continuously bridging the gap between demand and supply, these networks will make significant contributions to the future landscape of AI and computing infrastructure.