DA and DePIN vs Nvidia and the stock market.
How to leverage Data Availability Networks and Decentralized Physical Infrastructure Networks for building AI products.
This Monday our beautiful hyper-tech, capital-centered occidental world woke up as crazy as it can get. Over the weekend, Berkshire cashed out half of its position in Apple, which triggered collective hysteria on Wall Street. In a world where huge piles of money are staked in tech giants that provide services to each other, the domino effect comes with exciting consequences.
It makes sense that when your largest customers suffer a shrinkage, you are in a vulnerable position. Don't get me wrong, I am not happy about this; I invested in Nvidia and micro-transistor ETFs myself, triggered by the quantum computing trend a couple of years ago. I might lose my best scenario to cash out, at least for the following X months or even years. We'll see. But that is not the subject of this post. Framing the vulnerability of tech giants fueled by confusing capital market rules is. How do we build great AI products independent of tech giants and traditional market rules? Yeah, once again, let's talk about decentralization, specifically decentralized tools for AI training and building.
I want to introduce you to:
DePIN stands for Decentralized Physical Infrastructure Network: a combination of blockchain technology and physical infrastructure that creates decentralized networks able to manage real-world resources and services. DePIN looks to shift control and ownership of physical infrastructure from centralized entities to a distributed network of participants, using blockchain to ensure transparency, security, and trust among them. Smart contracts automate processes and enable peer-to-peer interactions without intermediaries. One of the best use cases is distributed compute power.
My favorite players in this segment are:
Nosana: By creating a marketplace for GPU power, Nosana allows individuals and companies to contribute or access computational resources, facilitating more cost-effective and scalable AI model training and execution.
io.net: One of the largest decentralized computing networks, aggregating GPU capacity from independent suppliers into a single marketplace.
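To make the marketplace idea concrete, here is a minimal sketch of how a DePIN compute network might match a training job to a provider. Everything here (the Provider fields, match_job, the pricing model) is a hypothetical illustration of the mechanism, not the actual Nosana or io.net API:

```python
# Minimal sketch of a DePIN compute marketplace: match a training job to the
# cheapest provider with enough GPU memory. All names are hypothetical; real
# networks implement this via on-chain programs and their own SDKs.
from dataclasses import dataclass

@dataclass
class Provider:
    id: str
    gpu_mem_gb: int        # GPU memory the node offers
    price_per_hour: float  # quoted in the network's token

def match_job(providers, min_mem_gb):
    """Pick the cheapest provider that satisfies the job's memory requirement."""
    eligible = [p for p in providers if p.gpu_mem_gb >= min_mem_gb]
    if not eligible:
        return None
    return min(eligible, key=lambda p: p.price_per_hour)

providers = [
    Provider("node-a", 24, 1.20),
    Provider("node-b", 80, 2.50),
    Provider("node-c", 48, 1.10),
]
best = match_job(providers, min_mem_gb=40)
print(best.id)  # node-c: cheapest node with at least 40 GB
```

In a real network the matching happens trustlessly: the job spec and the providers' offers live on-chain, and a smart contract escrows payment until the work is verified.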
Another key vertical is Data Availability Networks.
A data availability network is a decentralized system designed to store and provide consensus on the availability of blockchain data. These networks aim to solve the data availability problem by ensuring that all transaction-related data is accessible to nodes in the blockchain network without requiring them to download and verify the entire data set.
Why is this important for training a model or developing AI-based tools?
Data availability networks allow the storage of vast amounts of data in a decentralized manner. This means:
Scalability: You can access large datasets without worrying about storage limits. This is crucial for training AI models that require extensive data to learn effectively.
Cost-Effectiveness: By utilizing decentralized storage, you can reduce the costs associated with traditional cloud storage solutions like AWS.
For example, a startup developing an AI model for image recognition can store millions of images on a data availability network, ensuring it has enough diverse data to train its model effectively.

For data integrity and security, data availability networks ensure that the data is tamper-proof and easily verifiable. For instance, an AI product designed for fraud detection can use data from a data availability network to guarantee that the transaction data it analyzes is trustworthy.

On the collaboration side, DA networks are also game changers. Different teams can work together on AI models that leverage shared data, leading to more capable products. For example, multiple healthcare providers could collaborate on a data availability network to share patient data (while maintaining privacy), enabling more accurate AI models for disease prediction. And with dynamic AI models, like financial tools, you can access and process data in real time.
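The "tamper-proof and easily verifiable" property usually comes down to Merkle commitments: a client keeps only a small root hash and can check that any single data chunk belongs to the committed dataset without downloading everything else. This is a simplified sketch of that idea; real DA layers add erasure coding and availability sampling on top:

```python
# Sketch of the verification idea behind DA networks: verify one chunk
# against a Merkle root without fetching the whole dataset.
# Simplified illustration, not a real DA-sampling implementation.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes on the path from one leaf to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, leaf-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

chunks = [b"tx-batch-0", b"tx-batch-1", b"tx-batch-2", b"tx-batch-3"]
root = merkle_root(chunks)
proof = merkle_proof(chunks, 2)
print(verify(root, b"tx-batch-2", proof))  # True
print(verify(root, b"tampered", proof))    # False
```

The fraud-detection product from the example above only needs to store the root; any chunk it pulls from the network comes with a short proof like this one.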
My favorite data availability projects serving the Ethereum ecosystem are:
Celestia: A data availability (DA) blockchain created as an affordable and efficient hub for Layer 2 (L2) scaling solutions to publish transaction data. Celestia helps projects launch their L2s by letting rollup teams sidestep managing their own DA solutions and instead focus exclusively on their execution layer, where transactions occur.
EigenDA: Built on EigenLayer, a data availability service designed for high throughput and decentralized operation, primarily targeting rollups on Ethereum. It utilizes EigenLayer restaking to secure a scalable infrastructure for data availability.
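As a rough illustration of what "publishing transaction data" looks like in practice, Celestia light nodes expose a JSON-RPC API with a blob.Submit method. The payload shape and the 29-byte namespace padding below are my assumptions based on the celestia-node docs and may differ across versions, so treat this as a sketch to be checked against the current docs, not a reference:

```python
# Hedged sketch: building a JSON-RPC request that publishes a data blob to a
# local Celestia light node. Method name and payload shape are assumptions
# from the celestia-node docs; verify against the version you run.
import base64
import json

def build_submit_request(namespace: bytes, data: bytes, req_id: int = 1) -> dict:
    """Build a blob.Submit JSON-RPC request for one blob under a namespace."""
    blob = {
        # Namespaces are fixed-width (29 bytes here, assumed); left-pad with zeros.
        "namespace": base64.b64encode(namespace.rjust(29, b"\x00")).decode(),
        "data": base64.b64encode(data).decode(),
        "share_version": 0,
    }
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "blob.Submit",
        "params": [[blob], {"gas_price": 0.002}],  # fee options are version-dependent
    }

req = build_submit_request(b"ai-data", b"training-sample-0001")
print(json.dumps(req, indent=2))
# Actually sending it assumes a running node and an auth token, e.g.:
#   requests.post("http://localhost:26658", json=req,
#                 headers={"Authorization": "Bearer <token>"})
```

The point for builders: publishing a dataset chunk is one cheap RPC call, and anyone can later fetch and verify it against the chain instead of trusting your server.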
Two great options on Solana, plus one that is L1- and L2-agnostic, are:
Grass: A decentralized network that gathers users' public web data to train AI models via a ZK Solana L2.
Synesis One: Allows anyone to earn cryptocurrency by completing small tasks like providing data for models, data labeling, and data annotation. The platform makes it simple for people to get involved in AI development by giving them a straightforward way to contribute training data.
OG Labs: An infinitely scalable data availability layer and data storage system that provides the infrastructure needed to scale Web3. The cool thing about OG is that you can run it on any L1 or L2 through Near Protocol.
I think you've got yourself a nice stack to begin building without Nvidia, Databricks, and OpenAI.
Thanks for reading and sharing!