The Rise of AI DePIN Networks: Decentralized GPU Computing Leading New Trends

AI DePIN Network: The Future of Decentralized GPU Computing

Since 2023, AI and DePIN have become popular trends in the Web3 field, with market values reaching $30 billion and $23 billion respectively. This article focuses on the intersection of the two and explores the development of related protocols.

In the AI technology stack, the DePIN network empowers AI by providing computing resources. The GPU shortage caused by large tech companies has made it difficult for other AI developers to obtain sufficient GPU computing power. The traditional approach is to choose centralized cloud service providers, but this requires signing inflexible long-term contracts, which is inefficient.

DePIN provides a more flexible and cost-effective alternative by incentivizing resource contributions that align with network goals through tokens. In the AI sector, DePIN integrates individual GPU resources into data centers, offering users a unified supply. This not only provides developers with customized and on-demand access but also creates additional revenue for GPU owners.

There are many AI DePIN networks in the market. This article will explore the roles, goals, and highlights of each protocol, as well as the differences between them.


Overview of AI DePIN Network

Render

Render is a pioneer in the P2P GPU computing network, initially focusing on graphic rendering and later expanding to AI computing tasks.

Highlights:

  • Founded by OTOY, an Academy Award-winning technology company
  • Its GPU network is used by major entertainment companies such as Paramount and PUBG.
  • Collaborates with Stability AI and others to integrate AI models into 3D content rendering workflows.
  • Has approved multiple compute clients, integrating GPUs from additional DePIN networks.

Akash

Akash positions itself as a "supercloud" platform supporting storage, GPU, and CPU computing, serving as an alternative to traditional platforms like AWS. With its container tooling and Kubernetes-managed compute nodes, any cloud-native application can be deployed seamlessly.

Highlights:

  • Handles a wide range of computing tasks, from general-purpose compute to web hosting
  • AkashML allows running over 15,000 models from Hugging Face.
  • Hosts notable applications such as Mistral AI's LLM chatbot and Stability AI's SDXL.
  • Metaverse, AI-deployment, and federated-learning platforms are building on its supercloud.

io.net

io.net provides dedicated access to distributed GPU cloud clusters for AI and ML, aggregating GPU resources from data centers, miners, and others.

Highlights:

  • The IO-SDK is compatible with frameworks such as PyTorch and TensorFlow and scales automatically with demand.
  • Supports creating three different types of clusters, which can be launched within two minutes.
  • Collaborates with and integrates GPUs from other DePIN networks such as Render and Filecoin.

Gensyn

Gensyn provides GPU computing power focused on machine learning and deep learning, achieving a more efficient verification mechanism through techniques such as proof-of-learning.

Highlights:

  • V100-equivalent compute costs about $0.40 per hour, a significant saving.
  • Pre-trained base models can be fine-tuned for more specific tasks.
  • Base models will be decentralized and globally shared, offering additional functionality.

Aethir

Aethir specializes in deploying enterprise-level GPUs, focusing on computation-intensive fields such as AI, ML, and cloud gaming. Containers in the network serve as virtual endpoints for executing cloud applications, delivering a low-latency experience.

Highlights:

  • Expanded into cloud phone services, launching a decentralized cloud smartphone in collaboration with APhone.
  • Has established broad partnerships with large Web2 companies such as NVIDIA and HPE.
  • Works with multiple Web3 partners such as CARV and Magic Eden.

Phala Network

Phala Network serves as the execution layer for Web3 AI solutions, using trusted execution environments (TEEs) to address privacy concerns. This allows AI agents to be controlled by on-chain smart contracts.

Highlights:

  • Acts as a co-processor protocol for verifiable computation, giving AI agents access to on-chain resources.
  • AI agent contracts can access top large language models such as OpenAI's through Redpill.
  • Future plans include multiple proof systems such as zk-proofs, MPC, and FHE.
  • Future support for the H100 and other TEE-enabled GPUs will enhance computing power.

Project Comparison

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphics Rendering and AI | Cloud Computing, Rendering and AI | AI | AI | AI, Cloud Gaming and Telecom | On-Chain AI Execution |
| AI Task Type | Inference | Both | Both | Training | Training | Execution |
| Work Pricing | Performance-Based | Reverse Auction | Market Pricing | Market Pricing | Bidding System | Stake-Based |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & Hashing | mTLS Authentication | Data Encryption | Secure Mapping | Encryption | TEE |
| Work Fees | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve fee | Low fees | 20% per session | Proportional to stake |
| Security | Proof of Render | Proof of Stake | Proof of Computation | Proof of Stake | Proof of Rendering Capacity | Inherited from Relay Chain |
| Proof of Completion | - | - | Time-Lock Proof | Proof of Learning | Proof of Rendered Work | TEE Proof |
| Quality Assurance | Dispute Resolution | - | - | Verifiers and Whistleblowers | Checker Nodes | Remote Attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |

The Intersection of AI and DePIN

Importance

Availability of cluster and parallel computing

Distributed computing frameworks implement GPU clustering, delivering more efficient training and better scalability. Training complex AI models requires powerful compute and often relies on distributed computing. For example, OpenAI's GPT-4 reportedly has over 1.8 trillion parameters and was trained over 3-4 months on about 25,000 Nvidia A100 GPUs across 128 clusters.

Most projects have now integrated clusters for parallel computing. io.net collaborates with other projects to bring more GPUs into its network and had deployed over 3,800 clusters by the first quarter of 2024. Although Render does not support clusters, it works similarly by breaking a single frame into tiles processed simultaneously on multiple nodes. Phala currently supports only CPUs but allows CPU workers to be clustered.
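The split-and-merge pattern described above — dividing one job into shards, processing them on separate workers, and combining the partial results — can be sketched in a few lines. This is a toy illustration using local threads in place of remote GPU nodes; the `split` helper and `process_shard` worker are hypothetical stand-ins, not any project's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def split(data, workers):
    """Divide the workload into near-equal shards, one per worker node."""
    k, m = divmod(len(data), workers)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(workers)]

def process_shard(shard):
    # Stand-in for a GPU worker, e.g. rendering one tile of a frame
    # or running a forward pass on one data shard.
    return sum(x * x for x in shard)

data = list(range(1_000))
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_shard, split(data, 4)))

# Merging the partial results reproduces the single-machine answer.
assert sum(partials) == sum(x * x for x in data)
```

The key property a real network must add on top of this sketch is verification: because shards run on untrusted nodes, each partial result needs a completion proof before merging, which is exactly what the proof mechanisms compared later address.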


Data Privacy

AI model development requires large datasets, which may contain sensitive information. Samsung banned ChatGPT internally over concerns about code leaks, and Microsoft's 38 TB data leak underscored the importance of AI security measures. Various data privacy methods are therefore crucial for protecting data owners' control.

Most projects use some form of data encryption. Render uses encryption and hashing when publishing rendering results, io.net and Gensyn adopt data encryption, and Akash uses mTLS authentication to restrict data access.

io.net recently partnered with Mind Network to launch fully homomorphic encryption (FHE), allowing the processing of encrypted data without decryption, better protecting data privacy than existing encryption technologies.
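FHE schemes themselves are mathematically heavy, but the core idea — computing on ciphertexts so plaintext is never exposed to the processing node — can be illustrated with the simpler Paillier cryptosystem, which is additively (not fully) homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A toy sketch with deliberately tiny, insecure demo primes:

```python
import math
import random

def keygen(p, q):
    """Paillier key generation with g = n + 1 (simplified)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid shortcut because g = n + 1
    return (n, n + 1), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    l = (pow(c, lam, n * n) - 1) // n
    return (l * mu) % n

pub, priv = keygen(61, 53)          # toy primes; never use sizes like this
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)

# Adding under encryption: multiply ciphertexts, decrypt the product.
assert decrypt(priv, (c1 * c2) % (pub[0] ** 2)) == 12 + 30
```

A fully homomorphic scheme extends this to arbitrary additions *and* multiplications, which is what makes processing encrypted ML workloads possible in principle — at a substantial performance cost.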

Phala Network introduces trusted execution environments (TEEs) to prevent external processes from accessing or modifying data. It also incorporates zk-proofs in its zkDCAP validator and jtee CLI, integrating the RiscZero zkVM.


Proof of Completion and Quality Checks

Because the services offered range from rendering to AI computation, final output quality may not always meet user standards. Proofs of completion and quality checks therefore benefit users.

Gensyn and Aethir generate proofs that work has been completed, while io.net's proof shows that GPU performance is being fully utilized. Both Gensyn and Aethir run quality checks: Gensyn uses verifiers and whistleblowers, while Aethir uses checker nodes. Render recommends a dispute resolution process. Phala generates TEE proofs to ensure that AI agents perform the required operations.

Hardware Statistics

| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|---|---|---|---|---|---|---|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 Count | - | 157 | 2,330 | - | 2,000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (estimated) | $0.33 (estimated) | - |


High-Performance GPU Demand

AI model training tends to use high-performance GPUs such as the Nvidia A100 and H100. The H100's inference performance is up to four times that of the A100, making it the preferred choice for large companies training LLMs.

Decentralized GPU market providers must both undercut centralized prices and meet real demand. In 2023, Nvidia delivered over 500,000 H100s to large tech companies, making equivalent hardware hard to acquire. The amount of hardware these projects can bring in at low cost is therefore crucial to expanding their customer base.

Akash has only around 150 H100s and A100s, while io.net and Aethir each have over 2,000. Pre-training LLMs or generative models typically requires clusters of 248 to over 2,000 GPUs, so the latter two projects are better suited to large-model computation.

The cost of decentralized GPU services has fallen below that of centralized services. Gensyn and Aethir claim to offer A100-class hardware for under $1 per hour, though these claims have yet to be verified over time.

Compared with GPUs connected via NVLink, clusters linked over ordinary networks have limited inter-GPU memory bandwidth. NVLink enables direct GPU-to-GPU communication, making it well suited to LLMs with large parameter counts and datasets. Even so, decentralized GPU networks still offer strong compute and scalability for distributed workloads, opening opportunities for more AI and ML use cases.


Providing Consumer-Grade GPUs/CPUs

CPUs also play an important role in AI model training, handling everything from data preprocessing to memory management. Consumer-grade GPUs can be used to fine-tune pre-trained models or to train small-scale models.

Given that over 85% of consumer GPU resources sit idle, projects such as Render, Akash, and io.net also serve this market. Offering these options lets them carve out niches, focusing on large-scale intensive computing, small-scale rendering, or a mix of both.


Conclusion

The AI DePIN field is still relatively young and faces challenges. For example, io.net was accused of inflating its GPU count, later resolving the issue by introducing a proof-of-work process.

Nevertheless, both the number of tasks executed on these decentralized GPU networks and the amount of hardware on them have grown significantly, highlighting rising demand for alternatives to Web2 cloud providers' hardware. At the same time, growth in the number of hardware providers reveals previously underutilized supply. This further demonstrates the product-market fit of AI DePIN networks: they effectively address both demand-side and supply-side challenges.

Looking ahead, AI is set to develop into a thriving multi-trillion-dollar market. These decentralized GPU networks will play a key role in providing developers with cost-effective computing alternatives. By continuously bridging the gap between demand and supply, these networks will make significant contributions to the future landscape of AI and computing infrastructure.
