A Concept for Using Personalised AI to Fix DAO Governance

How to train and personalise AI to take part in DAO governance processes


DAO governance is broken - participation among token holders is generally very low. The crypto community knows this and has tried to work around the problem with “Delegated Proof of Stake”: token holders delegate their voting power to someone else (usually a well-known person or a validator), who then digests and votes on governance proposals for them. This approach has several problems:

  1. It requires a lot of research to find a “match” the token holder is comfortable delegating to
  2. As in representative democracy, the token holder might not find a representative whose preferences match their own in every area
  3. The representative’s preferences may change over time
  4. Perhaps most importantly, it works against the idea of decentralisation by concentrating a lot of voting power in a few individuals or organisations

One could potentially consider letting a single AI with a single set of preferences govern the DAO entirely - we might explore that in a future blog post. In this one, we will look at how personalised AI can help govern DAOs without giving up on decentralisation, while still giving each token holder their unique voice.

AI Concept to Fix DAO Governance

Here’s the broad concept for personalised AI in DAO governance: the AI is trained on crypto documents and on-chain data, in particular the documentation of the DAO in question, to the point where it can reason about the project’s vision, tokenomics and economics. Token holders then set their voting preferences - for example, whether they are more risk averse or risk taking. The AI analyses a governance proposal and either votes directly or suggests a vote to the user, along with its reasoning for why it wants to vote that way.
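To make this concrete, here is a minimal sketch of that flow in Python. Everything in it is hypothetical - `call_llm` is a stand-in for whichever text model ends up being used, and the prompt format is just one plausible choice:

```python
from dataclasses import dataclass

@dataclass
class VoteSuggestion:
    vote: str       # "for", "against" or "abstain"
    reasoning: str  # the explanation shown to the token holder

def call_llm(prompt: str) -> str:
    """Placeholder for whichever text model ends up being used."""
    raise NotImplementedError

def suggest_vote(proposal: str, dao_context: str, preferences: str) -> VoteSuggestion:
    """Analyse a proposal against the DAO's documentation and the token
    holder's stated preferences, and return a suggested vote plus reasoning."""
    prompt = (
        f"DAO background:\n{dao_context}\n\n"
        f"Token holder preferences:\n{preferences}\n\n"
        f"Governance proposal:\n{proposal}\n\n"
        "Reason step by step about the likely outcome for the project, "
        "then answer 'for', 'against' or 'abstain' on the final line."
    )
    answer = call_llm(prompt)
    # Everything before the final line is treated as the reasoning.
    reasoning, _, vote = answer.rpartition("\n")
    return VoteSuggestion(vote=vote.strip().lower(), reasoning=reasoning.strip())
```

Asking the model to put the vote on its own final line is just one simple way to keep the reasoning and the decision separable; a production tool would want a more robust output format.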

Training the AI on Crypto

To let the AI make good decisions on governance proposals, we can train it on four types of data (a minimal retrieval sketch follows the list):

  1. The DAO’s documentation (e.g. whitepaper or GitBook)
  2. General crypto material (e.g. Binance’s crypto academy)
  3. Previous governance proposals
  4. On-chain data (e.g. how many token holders there are and how they voted in the past)
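As a toy illustration of how those sources could be pulled into the model’s context, here is a bag-of-words retrieval sketch. A real system would use proper embeddings or fine-tuning, and the corpus entries here are made up:

```python
import math
import re
from collections import Counter

def tokenise(text: str) -> Counter:
    """Very crude bag-of-words representation; a real system would use
    learned embeddings instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# One entry per chunk of source material, tagged by origin (invented examples).
corpus = [
    {"source": "whitepaper", "text": "The protocol treasury funds ecosystem grants"},
    {"source": "past_proposal", "text": "Proposal 12 raise the staking reward rate"},
    {"source": "on_chain", "text": "Proposal 12 passed with 61 percent of 9402 voters"},
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query, to be pasted into
    the model's context before it reasons about a proposal."""
    q = tokenise(query)
    ranked = sorted(corpus, key=lambda c: cosine(q, tokenise(c["text"])), reverse=True)
    return [c["text"] for c in ranked[:k]]

print(retrieve("what happened with the staking reward proposal"))
```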

Together, the last two points can be used for “backtesting”: seeing how an AI (with certain preferences) would have voted on past governance proposals. The AI could also analyse how the actual governance vote affected the project, to further inform its decisions. This is tricky to get right, though, because many external factors affect the DAO as well (interest rate hikes, for example).
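A backtest along those lines could be as simple as an agreement rate. This sketch reuses the hypothetical `suggest_vote` from earlier, and the proposal/vote pairs would come from the on-chain history:

```python
def backtest(past_proposals: list[tuple[str, str]],
             dao_context: str, preferences: str) -> float:
    """Share of past proposals where the AI's suggestion matched the vote
    that was actually cast on-chain. Each entry in past_proposals is a
    (proposal_text, actual_vote) pair."""
    matches = sum(
        suggest_vote(text, dao_context, preferences).vote == actual
        for text, actual in past_proposals
    )
    return matches / len(past_proposals)
```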

Creating a Personalised AI through Voting Preferences

To ensure that the DAO as a whole represents the wants of its members, we have to let the AI voting tool know each token holder’s preferences. One way to do this is to let the user configure their preferences with a few sliders, for example:

  1. Are you more risk averse or risk taking?
  2. Are you looking for short-term profits or long-term profits?
  3. Are you more focused on profit or aligning with the vision of the project?

Based on these preferences, the AI can evaluate governance proposals differently: a user who is risk taking and looking to maximise short-term profit would most likely vote differently from a user who is more risk averse and prioritises the project’s long-term vision.
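As an illustration, the sliders could be captured as numbers and rendered into the plain-text `preferences` argument of the earlier `suggest_vote` sketch. The field names and scales below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    """Slider positions, each between 0.0 and 1.0 (hypothetical scheme)."""
    risk_appetite: float       # 0 = risk averse, 1 = risk taking
    time_horizon: float        # 0 = short-term profit, 1 = long-term profit
    vision_over_profit: float  # 0 = profit first, 1 = project vision first

def describe(prefs: Preferences) -> str:
    """Render the sliders as plain text the model can condition on."""
    return (
        f"Risk appetite: {prefs.risk_appetite:.1f} (0 = averse, 1 = risk taking). "
        f"Time horizon: {prefs.time_horizon:.1f} (0 = short term, 1 = long term). "
        f"Vision vs profit: {prefs.vision_over_profit:.1f} (0 = profit, 1 = vision)."
    )

# A risk-taking, short-term, profit-focused holder:
print(describe(Preferences(0.9, 0.2, 0.1)))
```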

These voting preferences could also be extended to take into account the voting behaviour of other token holders the user trusts - whether friends or influential people and organisations.
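One illustrative way to fold that in is to treat trusted voters as an extra weighted ballot next to the AI’s own suggestion. Both the blending rule and the default weight here are assumptions, not a worked-out design:

```python
def blend_with_trusted(ai_vote: str, trusted_votes: dict[str, str],
                       trust_weight: float = 0.3) -> str:
    """Blend the AI's suggested vote with votes already cast by addresses
    the user trusts. trust_weight is the share of influence they get."""
    tally: dict[str, float] = {ai_vote: 1.0 - trust_weight}
    if trusted_votes:
        per_voter = trust_weight / len(trusted_votes)
        for vote in trusted_votes.values():
            tally[vote] = tally.get(vote, 0.0) + per_voter
    return max(tally, key=tally.get)

# AI says "for", but both trusted voters say "against"; with 30% trust
# weight the AI's 0.7 still wins:
print(blend_with_trusted("for", {"alice.eth": "against", "bob.eth": "against"}))
```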

AI Voting Transparency

Blockchain is built upon the principle of transparency. When we let AI help govern DAOs, we need to ensure that the decisions it takes are traceable and transparent in two ways:

  1. Transparency towards the user: The AI should explain how it arrived at its conclusion, for example by laying out its reasoning about the predicted outcome step by step and showing how the user’s preferences shaped that reasoning.
  2. Technical transparency: How do we know that there was no tampering, and which text model was used? Going even further, can we reduce model risk by using several different models and letting the majority decide on the vote (sketched below)?
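A majority scheme like the one in point 2 could look like this, where each entry in `models` is a hypothetical `suggest_vote`-style function backed by a different text model:

```python
from collections import Counter

def majority_vote(proposal: str, dao_context: str, preferences: str,
                  models: list) -> str:
    """Ask several independent suggest_vote-style functions (one per text
    model) for a vote and let the majority decide, so that no single
    model's quirks or tampering can dictate the outcome alone."""
    votes = [m(proposal, dao_context, preferences).vote for m in models]
    return Counter(votes).most_common(1)[0][0]
```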

Next Steps

Over the next weeks, I’ll start building a prototype: first training an AI on the documents of a popular DAO and simply asking it what it thinks about recent governance proposals. From there I’ll add more features step by step and check how each one affects the AI’s thinking and vote.
