July 8, 2024

AI x Crypto Future Use Cases

At Stateless we enjoy thinking about the future. Most of these ideas may turn out to be wrong, but thinking through future use cases in technology is a useful exercise: ideas often sprout from a seed, evolve, and grow into widely adopted new use cases.

After all, every new technology starts out as somebody’s (often crazy) idea.

  • Tokenized AI Models. Tokenized ownership and governance of open source LLMs. Open source models like Llama 2 and Mistral are quickly approaching the capability of closed source rivals from Anthropic and OpenAI, yet they currently capture no value: the models are free to use and deploy, and governance is run by a company or foundation that actively builds the model and sets its roadmap. What if the model could be owned and governed by token holders, with open source contributors receiving tokens for their contributions?
  • Token or Points Incentivized RLHF. Today, products like Midjourney use RLHF (Reinforcement Learning from Human Feedback) to improve their models in near real time based on feedback from their user base, mostly via Discord. Users actively contribute to improving Midjourney’s models without receiving any compensation. Could you build the largest RLHF feedback network in the world using token incentives? (A minimal sketch of such an incentive ledger follows this list.)
  • Decentralized LLMs with Proprietary Encrypted Weights. Many AI developers want to build and invest in new models but don’t want to give away the weights. Using ZK or other privacy-preserving technologies, developers could let users run a proprietary model locally against encrypted weights.
  • Decentralized Search Engine. Part of what makes Perplexity AI so good is that they built their own internal search engine. When a user submits a prompt, Perplexity first runs a search and then runs a model over the results, combining two steps (search and inference) into one for the user; a sketch of this pattern follows the list. Open source models will not be able to compete with Perplexity until an open source search engine can be run in combination with an open source LLM. Elasticsearch, OpenSearch, and Searx are examples of open source search engines that could run on a decentralized web stack as a primitive alongside a decentralized LLM.
  • Decentralized Perplexity. Once the decentralized search engine and LLM primitives exist, a decentralized version of Perplexity could be built by combining the two.
  • Verified Compute. Verified compute is a new category of computation in which work is performed off-chain and a proof that it was done correctly, often a ZK proof, is verified on-chain. This allows almost unlimited scaling of on-chain applications: you get the speed and compute power of off-chain machines and GPUs while the output remains trusted and verifiable, which means existing web applications can effectively run “on chain.” You could run an AI model such as Llama 2 or even ChatGPT and trust that the network actually ran the correct LLM to generate your answer. (A simplified verification sketch, not a ZK proof, follows this list.)
  • On-chain Inference. In AI, inference is the act of querying a trained model and receiving an answer; every ChatGPT response is an inference. Unlike training, inference runs against an LLM that has already been built, which makes it a natural fit for the verified compute approach above.
  • Embedded LLMs inside NFTs. NFTs are not yet in their final form. As new NFT standards get built and shipped into the core layer 1 protocols, new NFT mechanisms and use cases will emerge, for example dynamic NFTs. NFTs that can store data on-chain or verifiably off-chain could host an LLM inside them. They could move between wallet addresses as they see fit based on users’ on-chain interactions with them, and they could even accrue value using the new EIP-6551 token-bound account standard.
  • AR DeGINs (Decentralized Gamified Incentive Networks): As AR gains traction with the Apple Vision Pro, meta-games can be built that overlay incentive prizes and rewards onto things you do in the real world. As people play, the AI can update the rules or difficulty.
  • Social Networks for AIs: As people train their own AI agents to become extensions of themselves, those agents will interact and engage with each other on AI social networks. Chirper AI is attempting this and I expect more to come.
  • AI Agent Focused Business Models: On AI social networks there are no “eyeballs” for advertisers to target and pay for per click or impression, so a new business model will have to be created.
  • AI Celebrities: We will see the first AI celebrities gain attention and followings. Think Lil Miquela, but able to interact via a proprietary, custom-trained LLM and its own crypto wallet.
  • AI Website Logins: With AI agents crawling the internet, it makes little sense for an agent to log in to websites with a username and password. Just as we connect to web3 sites with a crypto wallet today, an AI agent would do the same, and web2 websites would now have an incentive to add wallet connect so agents can sign in natively. (A sketch of the signature-based login flow follows this list.)
  • GPU Futures: In the digital world, GPUs are the new oil. AI is creating massive demand for them, and as cloud GPU providers standardize their offerings, GPU units become fungible and the market can price them. GPU futures would be a natural byproduct, since buyers and sellers of GPU capacity will want to trade against their future needs. Crypto exchanges and DEXs are extremely well positioned to create this product.
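
A few of the more mechanical ideas above can be made concrete with short sketches. First, token- or points-incentivized RLHF. Below is a minimal Python sketch, assuming an off-chain points ledger: users submit preference feedback between two model outputs and accrue points that could later be redeemed for tokens. Every name here (PreferenceVote, submit_feedback, POINTS_PER_VOTE) is an illustrative assumption, not an existing system.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PreferenceVote:
    prompt: str        # the prompt both candidate outputs were generated from
    output_a: str      # identifier of the first candidate output
    output_b: str      # identifier of the second candidate output
    preferred: str     # "a" or "b", the rater's choice
    rater: str         # the rater's wallet address or user id

POINTS_PER_VOTE = 10                         # hypothetical reward per piece of feedback
ledger: dict[str, int] = defaultdict(int)    # rater -> accrued points
dataset: list[PreferenceVote] = []           # preference pairs usable for RLHF training

def submit_feedback(vote: PreferenceVote) -> None:
    """Record the preference pair and credit the rater with points."""
    dataset.append(vote)
    ledger[vote.rater] += POINTS_PER_VOTE

if __name__ == "__main__":
    submit_feedback(PreferenceVote("draw a cat", "img_001", "img_002", "b", "0xRaterA"))
    submit_feedback(PreferenceVote("draw a cat", "img_003", "img_004", "a", "0xRaterB"))
    print(dict(ledger))  # {'0xRaterA': 10, '0xRaterB': 10}
```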
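
Next, the decentralized search engine and decentralized Perplexity ideas both reduce to the same two-step pattern: run a search, then run inference grounded in the results. This sketch shows the pattern only; search_index and run_llm are hypothetical stubs standing in for an open source search engine (e.g. OpenSearch or Searx) and an open source LLM (e.g. Llama 2) running on a decentralized stack.

```python
def search_index(query: str, k: int = 3) -> list[dict]:
    """Placeholder for a call to a (decentralized) search index."""
    return [
        {"url": "https://example.org/a", "snippet": "First relevant snippet."},
        {"url": "https://example.org/b", "snippet": "Second relevant snippet."},
    ][:k]

def run_llm(prompt: str) -> str:
    """Placeholder for inference against an open source LLM."""
    return f"Answer grounded in the {prompt.count('Source')} sources above."

def answer(question: str) -> str:
    # Step 1: search. Step 2: inference grounded in the search results.
    results = search_index(question)
    context = "\n".join(
        f"Source {i + 1} ({r['url']}): {r['snippet']}" for i, r in enumerate(results)
    )
    prompt = f"{context}\n\nQuestion: {question}\nAnswer using only the sources above."
    return run_llm(prompt)

if __name__ == "__main__":
    print(answer("What is verified compute?"))
```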
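
For verified compute, a full ZK proof is beyond a blog sketch, but Freivalds' algorithm shows the essential shape of the idea: an untrusted worker performs an expensive computation (here, a matrix product) and the verifier confirms the claimed result far more cheaply than by redoing it. This is probabilistic verification, not a ZK proof, and is only meant to illustrate the verify-cheaper-than-recompute pattern.

```python
import numpy as np

def untrusted_worker(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """The expensive off-chain computation the verifier does not want to redo (O(n^3))."""
    return A @ B

def freivalds_verify(A, B, C, rounds: int = 20) -> bool:
    """Accept C == A @ B with error probability at most 2**-rounds, in O(n^2) per round."""
    n = B.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))    # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ r), C @ r):  # two cheap matrix-vector products
            return False
    return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.integers(0, 10, size=(200, 200))
    B = rng.integers(0, 10, size=(200, 200))
    C = untrusted_worker(A, B)
    print(freivalds_verify(A, B, C))        # True: honest result accepted
    C_bad = C.copy()
    C_bad[0, 0] += 1
    print(freivalds_verify(A, B, C_bad))    # False (with high probability): tampered result rejected
```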
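
Finally, the AI website login idea is essentially challenge-response with a wallet signature. This sketch uses the eth_account library: the site issues a one-time nonce, the agent signs it with its wallet key, and the site recovers and checks the signing address. No username or password is involved; the function names, nonce format, and points of integration are illustrative assumptions.

```python
import secrets
from eth_account import Account
from eth_account.messages import encode_defunct

def issue_nonce() -> str:
    """Server side: generate a one-time challenge for the agent to sign."""
    return f"Sign in to example.com, nonce: {secrets.token_hex(16)}"

def agent_sign(nonce: str, private_key) -> bytes:
    """Agent side: sign the challenge with the agent's wallet key."""
    return Account.sign_message(encode_defunct(text=nonce), private_key=private_key).signature

def verify_login(nonce: str, signature: bytes, claimed_address: str) -> bool:
    """Server side: the recovered signer must match the address the agent claims."""
    recovered = Account.recover_message(encode_defunct(text=nonce), signature=signature)
    return recovered == claimed_address

if __name__ == "__main__":
    agent_wallet = Account.create()          # the AI agent's wallet
    nonce = issue_nonce()
    sig = agent_sign(nonce, agent_wallet.key)
    print(verify_login(nonce, sig, agent_wallet.address))  # True
```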

These are just some of the ideas that have been percolating. As with past technology breakthroughs, most of the 0-to-1 ideas come out of nowhere and are only obvious in hindsight. It will be exciting to see what gets invented over the next 12–24 months.