The twelfth and final principle of systems science states that systems can be improved.
When we improve a system, we increase its capacity to successfully achieve specific results related to its purpose. This week we’ll take a look at how the understanding of a problem facing Gitcoin led to specific engineering improvements which helped the system more effectively achieve its stated goals.
The Problem
Last week we learned a bit about Gitcoin, an organization that creates blockchain-based open-source software tools which help communities fund public goods.
Gitcoin uses a novel funding mechanism called Quadratic Funding (QF) in order to raise funds from communities and distribute them to projects aligned with the communities’ goals. One of the main benefits of QF is that it helps ensure that projects with the broadest community support receive the most funding. The crowd has more power than individuals with deep pockets.
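The property described above falls directly out of the QF matching formula. Here is a minimal sketch of the textbook rule, in which a project's matching weight is the square of the sum of the square roots of its contributions. This is an illustration of the formula only, not Gitcoin's production implementation, which also normalizes weights against a fixed matching pool and applies caps:

```python
from math import sqrt

def qf_weight(contributions):
    """Textbook quadratic funding: a project's matching weight is the
    square of the sum of the square roots of its contributions."""
    return sum(sqrt(c) for c in contributions) ** 2

# The same $100 total, distributed two ways:
broad = qf_weight([1] * 100)  # 100 donors giving $1 each -> weight 10,000
deep = qf_weight([100])       # 1 donor giving $100       -> weight 100
```

Broad support beats the deep pocket by a factor of 100 here, which is exactly the property the mechanism is designed to reward.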
Like any other system, QF has its weaknesses. One of the main problems Gitcoin faced when running grants rounds was that it was hard to verify that each participant was actually a unique human. This made the system vulnerable to sybil attacks.
Instead of making one large donation from a single account, an “attacker” can make multiple donations from several accounts. The project that the attacker is supporting will get a larger slice of funding from the centralized matching pool than it would if the attacker had donated the same total from a single account. What appears to be overwhelming democratic support for a project is actually a single bad actor trying to game the system in their favor.
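To make the incentive concrete: under the textbook QF rule, where a project's matching weight is the square of the sum of the square roots of its contributions, splitting one donation across many accounts inflates that weight dramatically. A hedged sketch with illustrative numbers:

```python
from math import sqrt

def qf_weight(contributions):
    # Quadratic funding weight: (sum of square roots of contributions)^2
    return sum(sqrt(c) for c in contributions) ** 2

honest = qf_weight([100])     # one $100 donation from one account -> weight 100
sybil = qf_weight([10] * 10)  # the same $100 split across 10 fake accounts -> ~1,000
```

The attacker spends nothing extra yet makes their project look ten times more broadly supported, which is precisely why verifying unique humans matters.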
The more popular a system using QF is, the more attackers are incentivized to game the system, embracing techniques like creating bots in order to siphon funds. As the amount of funding up for grabs during Gitcoin grants rounds increased over the years, it became a more tempting target for attackers.
The Solution
We saw last week how BlockScience used the techniques of network science in order to help Gitcoin distinguish potential bad actors from genuine participants. But what can be done to actually deter bad actors in a scalable manner?
The approach Gitcoin took was to increase the cost of attack by requiring users to verify their identity. If attackers have to put time, effort, and money into verifying each identity they use, it drastically decreases the potential gains from gaming the system. Eventually the cost of the attack becomes so high that it isn’t worth carrying out.
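A back-of-the-envelope model shows why raising the per-identity cost works. All of the numbers below are hypothetical, chosen only to illustrate the break-even point:

```python
def attack_profit(n_identities, gain_per_identity, cost_per_identity):
    """Net profit of a sybil attack: the extra matching captured per fake
    identity, minus what it costs to create and verify that identity."""
    return n_identities * (gain_per_identity - cost_per_identity)

# Unverified accounts are nearly free, so the attack pays:
cheap = attack_profit(50, gain_per_identity=20, cost_per_identity=1)    # 950
# If verifying each identity costs more than it captures, the attack is a loss:
costly = attack_profit(50, gain_per_identity=20, cost_per_identity=25)  # -250
```

Once the per-identity cost crosses the per-identity gain, adding more fake identities only deepens the attacker's loss.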
This goal was accomplished by developing Gitcoin Passport, a decentralized identity verification system. Gitcoin Passport allows users to “present evidence that they are real, unique humans and signal their trustworthiness to apps.”
Gitcoin Passport users collect “stamps” from various traditional (e.g., Google, GitHub, Facebook) and crypto-based (e.g., ENS, Proof of Humanity) identity authenticators. The stamps are composed of verifiable credentials which contain various properties of a user’s account on the issuing service, for example the number of stars, followers, and forks a user has on GitHub. Crucially, “stamps do not store any personally identifiable information, only the verifiable credentials issued by the identity authenticator.”
We can say with a fairly high degree of certainty that any account which takes the time to set up a passport is much less likely to belong to an attacker or a bot, and the more stamps associated with a passport, the less likely its holder is an attacker. During Gitcoin grants round 15 (GR15), donors were able to connect their internet identities to Gitcoin Passport in order to receive a 150% matching “trust bonus.” 130,605 stamps were verified during GR15 and 33,515 passports had at least one stamp by the end of the round.
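As a rough illustration of the trust bonus, here is a sketch that assumes, as a simplification, that the bonus acts as a flat multiplier on the match attributable to a donor; the production mechanics are more involved than this:

```python
def trust_bonus_multiplier(has_passport_stamps):
    # Assumed simplification: passport holders' matching contribution is
    # scaled to 150%, while unverified donors stay at the 100% baseline.
    return 1.5 if has_passport_stamps else 1.0

base_match = 40.0  # hypothetical match attributable to one donor
verified_match = base_match * trust_bonus_multiplier(True)     # 60.0
unverified_match = base_match * trust_bonus_multiplier(False)  # 40.0
```

The effect is to quietly shift matching funds toward donors who have proven they are unique humans, without excluding anyone outright.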
Gitcoin successfully used software engineering to improve its system. Incorporating user authentication via Gitcoin Passport helped ensure that more funds went toward public goods valued by the community rather than to selfish attackers trying to hoard funds for themselves.
Systems can be understood with science and improved with engineering. In complex adaptive systems, the process of understanding and improvement is never really finished as the systems must continually adapt and evolve to survive in ever-changing environments.
Because the twelve principles of systems science that we’ve covered in this series can be applied to any complex adaptive system, keeping them in mind helps us know what we should be looking for when analyzing a system we’re interested in. I used the twelve principles as a lens for studying Gitcoin last fall and found the process to be incredibly helpful.
It is my firm belief that for systems science to mature and reach its full potential as an academic discipline, we must identify a core set of principles that are universally accepted as being true of all systems. Any conversation focused on achieving this goal should certainly consider these twelve as part of the set of worthy candidates.