Reputation management across distributed systems is one of the most important protocol developments supporting blockchain applications. Reputation systems built into or on top of blockchain protocols ensure that peer-to-peer human and machine ecosystems alike can sustainably survive strategic bad actors waiting ever patiently for the mainstream adoption of crypto-platforms.

Blockchain technology is more than just a trustless foundation in which transactions are transparently shared and stored; it is (hopefully) a Byzantine fault tolerant store of reputational reference that can dynamically (and one day interoperably) change a user's platform experience and capabilities to protect the health of the broader P2P community. This reputation is propagated across the network to empower every agent's ability to avoid fraudulent transactions.

Blockchain entrepreneurs will quickly determine, via the destruction of early platforms at the hands of bad actors, how important reputational guidelines are within their blockchain networks — guidelines that go beyond the simplistic “average score” method and attend to the following issues:

  • Collusion — Shilling attacks, where malicious nodes submit dishonest feedback and collude with each other to boost their own ratings or bad-mouth non-malicious nodes
  • Reputation Cashing — Agents cashing in on their good reputation to carry out fraudulent transactions with higher gain
  • Strategic Deception — Establishing initial trust for new agents more dynamically (using reputation on other networks, or feedback from agents they have transacted with)
  • Faking Identity — Agents faking identities within social impact networks to steal disbursed, charitable resources

Of course, the list goes on, but the aforementioned issues seem to summarize most of the larger obstacles toward operating a fair, decentralized network. The key responsibilities of a reputation protocol should be the following:

  • Align agent incentives across the network and test that incentive framework many times before deployment
  • Always incorporate stake-based disincentives that put a cost on malicious behavior (transacting agents must stake a certain amount of capital in escrow until the transaction has been confirmed to be complete on both sides)
  • Develop a strong, statistics-based reputation engine that can incorporate some level of machine learning in future iterations
  • Develop a scalable, decentralized reputation score propagation method so that the network can store historically informed reputation scores across all nodes, or so that nodes are enabled to inquire about reputation information via other nodes that have transacted with the agent in question
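To make the stake-disincentive idea concrete, here is a minimal sketch in Python. The `EscrowedTransaction` class and its API are hypothetical (a real implementation would live in a smart contract), but it captures the escrow flow described above: both parties lock capital, and stakes are released only once both sides confirm completion.

```python
class EscrowedTransaction:
    """Toy model of a stake-in-escrow disincentive (hypothetical, off-chain)."""

    def __init__(self, buyer: str, seller: str, stake: float):
        self.stakes = {buyer: stake, seller: stake}     # capital locked per party
        self.confirmations = {buyer: False, seller: False}
        self.settled = False

    def confirm(self, party: str) -> None:
        """A party attests that its side of the transaction is complete."""
        if party not in self.confirmations:
            raise ValueError(f"unknown party: {party}")
        self.confirmations[party] = True

    def settle(self) -> dict:
        """Release stakes only when both sides have confirmed completion."""
        if all(self.confirmations.values()):
            self.settled = True
            refunds, self.stakes = dict(self.stakes), {}
            return refunds      # stakes returned to their owners
        return {}               # otherwise capital stays locked in escrow


tx = EscrowedTransaction("alice", "bob", stake=10.0)
tx.confirm("alice")
assert tx.settle() == {}        # one-sided confirmation: funds remain locked
tx.confirm("bob")
refunds = tx.settle()           # both confirmed: stakes are released
```

A malicious party who never confirms forfeits access to their own locked capital; a fuller design would add timeouts and slashing rules for disputed transactions.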

Of course, much of this is still very abstract when not mapped to working examples within the current market of protocols and applications. Some examples in which reputation is particularly pertinent include the following:

  • Consensys — Civil: Civil is a news media micro-economy in which journalists are directly incentivized by their readers to produce honest, reliable content. There are also news evaluator agents ensuring that the content is, in fact, real. The total incentives earned from the work produced are divided and distributed among all contributors, from the evaluators to the authors. Dynamic, persistent reputation is a critical platform need for Civil, as the integrity of the network relies on the incentivized goodwill of the participating agents — otherwise, there is no value proposition in maintaining the integrity of the fourth estate.
  • United Nations — Blockchain Against Hunger: The UN's refugee voucher disbursement blockchain project is one of the foremost production-ready proofs of concept within the broader blockchain market. Of course, the project is still operating on a private Parity chain; once the network is opened onto a public blockchain, so that the UN can utilize the platform across refugee populations and participating vendors, identity security and agent reputation will be critical to ensure that resources aren't transacted fraudulently.
  • Consensys — Virtue Poker: Virtue Poker is a P2P decentralized poker platform built on Ethereum. Many of the forms of cheating on virtual poker platforms are avoidable with the implementation of reliable identity and reputation management systems. The same issues we discussed earlier, like agent collusion and multi-accounting, are also readily exploitable by malicious actors within this space.

So how can you get started developing such a hardcore protocol? Some extremely helpful reference points are the BETA Reputation Engine and the TrustGuard Reputation Framework. Let's take a look at the TrustGuard architecture and, at a higher level, show how the BETA Reputation Engine could fit within this model:

As you can see, there are some core components within a reputational protocol that a team should consider implementing if they want to minimize the number of attack vectors their network could potentially incur. These components are as follows for each node (given TrustGuard's model):

  • Transaction Manager: The Transaction Manager consists of four components: (1) the trust value output from the Trust Evaluation Engine, (2) the transaction proof exchange component, (3) designated nodes on the overlay network, and (4) feedback admission control. The trust value output is used to make trust decisions (to transact or not to transact) before calling the transaction execution component. The transaction proof exchange (execution) component is responsible for generating and exchanging transaction proofs. Once the transaction is completed, feedback is manually entered by the transacting users. This feedback is routed to the designated nodes on the overlay network for decentralized storage through a decentralized overlay protocol (e.g., DHT-based). The designated nodes only invoke their data storage service to admit a feedback entry if it passes feedback admission control, where fake transactions are detected.
  • Trust Evaluation Engine: Whenever a node n wants to transact with another node m, it calls the Trust Evaluation Engine to perform a trust evaluation of node m. It collects feedback about node m from the network through an overlay protocol and aggregates them into a trust value.
  • Feedback Data Storage Service: The feedback storage service is responsible for securely storing reputation and trust data on the overlay network, including maintaining replicas of feedback and trust values. TrustGuard builds its storage service on top of PeerTrust. It is also responsible for the fake-transaction detection logic behind feedback admission control.
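The control flow above can be sketched as follows. This is a hypothetical, in-memory rendition of TrustGuard's components (the class and method names are mine, not the paper's): a trust decision gates each transaction, and feedback is only admitted when backed by a transaction proof, which is what filters out feedback from fake, never-executed transactions.

```python
class TrustGuardNode:
    """Illustrative single-node view of TrustGuard-style trust management."""

    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold
        self.feedback = {}       # node_id -> list of ratings in [0, 1]
        self.proofs = set()      # transaction proofs seen by designated nodes

    def trust_value(self, node_id: str) -> float:
        """Trust Evaluation Engine: aggregate collected feedback about a node."""
        ratings = self.feedback.get(node_id, [])
        return sum(ratings) / len(ratings) if ratings else 0.5  # neutral prior

    def should_transact(self, node_id: str) -> bool:
        """Transaction Manager: trust decision before calling execution."""
        return self.trust_value(node_id) >= self.threshold

    def record_proof(self, proof: str) -> None:
        """Transaction proof exchange: log a proof of an executed transaction."""
        self.proofs.add(proof)

    def admit_feedback(self, node_id: str, rating: float, proof: str) -> bool:
        """Feedback admission control: reject feedback lacking a known proof."""
        if proof not in self.proofs:
            return False         # fake transaction: feedback never stored
        self.feedback.setdefault(node_id, []).append(rating)
        return True


node = TrustGuardNode(threshold=0.6)
node.record_proof("tx-001")                     # proof from a real transaction
node.admit_feedback("node-m", 0.9, "tx-001")    # admitted: proof exists
node.admit_feedback("node-m", 0.1, "forged")    # rejected: no such proof
```

The real system distributes the feedback store across designated overlay nodes via a DHT; here a single dictionary stands in for that storage layer.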

Notice that the common need for a trust value comes up when discussing the logic that determines whether or not it is safe to transact with another agent on the network. This value is determined by a reputation engine, like the BETA Engine. We won't get too bogged down in the lower-level details, but here's a quick snippet of how the BETA Reputation Engine works:

  • The beta reputation system uses beta probability density functions to combine feedback and derive reputation ratings. It incorporates detailed mathematical machinery to attend to feedback weighting given the transacting agent (discounting), scalable feedback collection, and 'forgetting', through which the system acknowledges that old feedback may not always be relevant to the current reputation rating, because the agent may change its behaviour over time.
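A minimal version of this engine can be sketched as follows, in the spirit of the beta reputation system: the rating is the expected value of a Beta(r + 1, s + 1) distribution over accumulated positive (r) and negative (s) feedback, and a forgetting factor decays old evidence. The class name and forgetting value are illustrative, not from the paper.

```python
class BetaReputation:
    """Minimal beta reputation engine sketch (illustrative names and defaults)."""

    def __init__(self, forgetting: float = 0.9):
        self.forgetting = forgetting  # lambda in [0, 1]; 1.0 means never forget
        self.r = 0.0                  # accumulated positive feedback
        self.s = 0.0                  # accumulated negative feedback

    def update(self, positive: float, negative: float) -> None:
        """Decay old evidence, then add the new feedback amounts."""
        self.r = self.forgetting * self.r + positive
        self.s = self.forgetting * self.s + negative

    def rating(self) -> float:
        """Expected value of Beta(r + 1, s + 1): a rating in (0, 1)."""
        return (self.r + 1.0) / (self.r + self.s + 2.0)


rep = BetaReputation(forgetting=0.9)
assert rep.rating() == 0.5              # no evidence yet: uniform prior
rep.update(positive=1.0, negative=0.0)  # one positive transaction raises it
```

With forgetting below 1.0, a long-ago good record decays, so an agent cannot indefinitely cash in on stale reputation, which directly addresses the reputation-cashing attack listed earlier.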

So that about covers it! Of course, this is just a high level overview, but hopefully it was helpful as you develop your future blockchain architecture. If you found any of this remotely helpful, don’t be afraid to leave applause 🙂

Lastly, here are my annotated versions of each research paper so that you can find the good parts more easily:


Author robbygreenfield
