or how ignoring the lessons of behavioral economics will invalidate many Vickrey–Clarke–Groves predictions
Biases and social norms can prevent even carefully mechanism-designed blockchains from growing. To understand why, a brief detour through game theory, behavioral economics and mechanism design is needed.
Take a deep breath… Mechanism design is about finding the optimal set of rules and parameters needed to nudge a set of potentially-competing, at times non-truthful rational agents to reveal their information and act towards reaching the common goal of the system — phew, that’s a mouthful!
The typical example that clarifies what this convoluted definition means is an ancient problem tackled by a very clever mom. A cake needs to be split between two kids. The kids’ mother is aware of the world-ending possibility that, after she cuts the cake, one of them might perceive the sibling’s piece as larger. Numerous episodes of sharing things taught mom that whatever size allocation she chooses, the kids might still erupt in an endless quarrel. This is because neither she nor the kids know beforehand how each will react to the actual split. Nor do the kids trust each other’s sharing skills.
So, mom proposes a simple game: one kid cuts the cake into two equal parts, and then the second kid chooses the piece she likes (or vice-versa). The first kid will attempt to cut the cake into parts as equal as possible. Should he cut one piece larger than the other, his sister would have the right to choose that larger piece, leaving him with the smaller one. He is thus incentivized to be fair to both himself and his sister. We don’t know if this brought peace to that household, but the intuition carries forward to the Nobel-prize-winning Mechanism Design Theory. Our agents (the kids) are competing over a finite resource. The social planner (mom) considers how to make both kids happy. Her job is to design and implement a good social choice function.
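The cut-and-choose intuition can be sketched in a few lines of Python (the function names are mine, purely for illustration): whatever split the cutter proposes, the chooser takes the larger piece, so the cutter’s payoff is the smaller one and an even cut maximizes it.

```python
# Illustrative sketch of cut-and-choose; names are hypothetical, not from any library.

def chooser_pick(pieces):
    """The chooser takes whichever piece she values more."""
    return max(pieces)

def cutter_payoff(split, cake=1.0):
    """The cutter keeps whatever piece is left after the chooser picks."""
    pieces = [split, cake - split]
    chosen = chooser_pick(pieces)
    pieces.remove(chosen)
    return pieces[0]

# Scan candidate splits: the cutter's payoff peaks at an even cut.
best = max((s / 100 for s in range(1, 100)), key=cutter_payoff)
print(round(best, 2))  # 0.5
```

The cutter is best off being fair not out of kindness but because the rules make fairness his optimal strategy, which is exactly the mechanism-design point.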
Under the hood of mechanism design, one will find that the interaction between our agents is modeled as a game — the “A Beautiful Mind” type of game. In a game theoretic setting, agents make their choices based on what the other players might do, knowing that their decision will influence the other agents’ decisions. This is the choice of rational agents in a strategic environment with complete information. Complete information means you know the payoffs of all the other agents, for all possible outcomes. Incomplete information settings are also possible and quite often encountered as well.
An example may prove illuminating here. Even if you’ve already eaten your game theory veggies, walking through the example will be useful for the later argument about what PoW achieves.
Two guys get caught for a minor misdemeanor (breaking in) but are presumed guilty of a bigger one (homicide of the house owner, where police found them). Our detective knows they have a big incentive to coordinate their statements to get out — unless she can find a way to motivate them to be truthful (mechanism design again!). So, she splits them up without allowing them to talk to each other and presents each with the following offer: if you (player 1) confess to the murder while the other (player 2) doesn’t, you walk away, and the other guy gets locked up for 9 years (P1 plays Confess, P2 plays Don’t Confess — 1st column, 2nd row in the matrix below). All four possible choices are presented individually to each one, and each knows the punishment for both (complete information) but cannot coordinate their answers.
Payoffs in the Prisoner’s Dilemma and A Simple Strategy to Solve the Game (a.k.a. find out who will do what)
A simple way for our (unbelievably smart) criminals to solve this, assuming rationality and complete information, is to consider what the other would never do under any circumstance. For example, from P1’s perspective, P2 would never choose Don’t Confess because under both possible actions of P1, P2 would end up with a longer sentence if he chose not to confess: -1 and -9 vs. 0 and -6. We are now comparing payoffs not across agents, but for one agent across his alternatives — thus contrasting the 1st to the 2nd column. Note how both rationality and complete information are needed here. The players iteratively eliminate strategies they would never play. In the same way, P2 realizes that P1 would never choose Don’t Confess — so they both end up confessing.
This outcome is induced by the careful selection of payoffs, and is in a broad sense, the primary objective of mechanism design: optimize the payoff structure of the game and carefully select the sequence of events so as to induce truthfulness of participants and achieve the goal of the system.
Now, if the body was planted and the two guys just happened to be there… that’s another movie altogether. We’ll need Bayesian priors for that, a topic for another post. It’s all turtles from here, baby! It’s nevertheless important to keep this in mind, as the entire payoff structure depends on the prior objective: make them confess. In our case, even if our guys did not commit the murder but only ended up in the wrong place at the wrong time, their rationality and the payoff structure tell them that confessing is the optimal choice. This is an unexpected way in which mechanism design can produce twisted results.
Something discussed quite frequently is the Vickrey–Clarke–Groves (or VCG) mechanism. If you thought the first paragraph was a mouthful, this is one geek level higher. In plain terms, it is a general recipe for finding an optimum to a broad set of problems that depend on many potentially-competing, non-truthful rational individuals or firms. The reason rational is emphasized is that any carefully researched VCG solution assuming rational agents backfires at best, or collapses altogether, if our individuals are not rational. Behavioral economics studies how people reach, and stick to, decisions that look irrational at first sight. It’s a branch of economics which developed as an antidote to the rational-agent assumption that dominated economics across most of its subfields until around the 80s. This brings us to the psychological and sociological elements of games.
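The simplest concrete instance of VCG is the single-item case, the Vickrey (second-price) auction: the winner pays the externality she imposes on everyone else, which here is just the second-highest bid. That payment rule is what makes truthful bidding a dominant strategy. A minimal sketch (function and bidder names are mine):

```python
# Hedged sketch of the single-item VCG special case: a second-price auction.

def vickrey_auction(bids):
    """bids: {bidder: reported valuation}. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # second-highest bid = externality on the others
    return winner, price

winner, price = vickrey_auction({"alice": 10, "bob": 7, "carol": 4})
print(winner, price)  # alice 7
```

Note how the whole argument for truthfulness leans on the rationality assumption: a spiteful or confused bidder can still overbid and wreck the guarantee, which is precisely the behavioral worry raised above.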
One may well design games and implement them in practice, but do people really behave rationally? Even when they know all payoffs?
The answer is (unfortunately for the tech enthusiasts so happy to fend off any social norms) quite often no. Numerous experiments show that most of us do not fully and strictly obey the rationality assumption. Also, if we assume the others are not rational while we believe ourselves to be, we change our strategy as a result: yet another instance of deviating from the all-rational-agents solution.
Perhaps another example will illustrate why this is important and how it links to the functioning of a blockchain’s protocol. Blockchains? Well yes, as this technology is designed primarily to solve a long list of problems arising from potentially-competing non-truthful agents interacting with each other over finite resources.
First, let’s start with social norms and conventions. Liberman et al. (2004) in their article “The Name of the Game: Predictive Power of Reputations versus Situational Labels in Determining Prisoner’s Dilemma Game Moves” (Personality and Social Psychology Bulletin) ask what happens when the same Prisoner’s Dilemma (PD) is being framed as a competitive game (being labeled ‘The Wall Street Game’) vs. a cooperative game (being labeled ‘The Community Game’).
This experiment explores the predictive power of reputation-based assessment vs. the stated label of a game (or “name of the game”). It’s important to our discussion as the experiment provides evidence about the malleability of construal processes (mental map of the behavior of others toward the subject). It shows how changing these processes leads to subsequent changes in behavioral choices. The label should have no influence on the outcome of the game, according to the rational agent model. But the experiment shows a more nuanced reality.
A 7-round PD is presented to one half of the experimental group as the “Wall Street Game”, a label that connotes individualism and contexts in which competitive or exploitative norms are likely to operate. The same PD is presented to the other half of the group as the “Community Game”, where the label connotes interdependence, collective interest and contexts where cooperation norms are likely to operate.
The participants’ nomination status as most likely to cooperate vs. most likely to defect had no predictive power at all. So how others perceive you is not a good indicator of how you’ll behave. But the name of the game exerted a considerable effect on the participants’ choices: in the first round, roughly 70% cooperated in the “Community Game”, with only 30% doing so in the “Wall Street Game”. Recall, across the two settings the payoffs are identical; only the name of the game changes. This result survives in subsequent rounds: participants who both cooperated and received cooperation in the 1st round generally continued to cooperate, while participants who had defected and/or faced defection in the 1st round tended to defect afterwards.
Maaravi et al. (2011) in their “Negotiation as a Form of Persuasion: Arguments in First Offers” (Journal of Personality and Social Psychology) show how the word “because” may trigger a defensive reaction in a negotiation, and how adding arguments to first offers affects counteroffers and settlement prices. Adding an argument to a first offer (a persuasion attempt) is inherently different from merely stating the value of the first offer (an intention to provide information).
Identical data can have ambivalent interpretations: as an intention to persuade, or simply to inform. In the experiment, the seller of an apartment is asked to provide details on the property (number of bedrooms, proximity to the CBD, etc.). When this information is perceived by the buyer primarily as objective data, no negative effect is to be expected. If the data is provided as an argument, the reactance effect will prompt the buyer to search for counterarguments and alter his/her negotiation strategy. Reactance, the negative reaction created when one feels a counterparty is trying to control or limit one’s actions, is activated by the word “because”.
Plenty of related experimental and econometric evidence can be found in the literature.
By removing the need for extended trust among agents, blockchains bring forward a great promise of increased efficiency and transparency, lower costs and, if only users were convinced as well, new markets and products. From Bitcoin to corporate-grade systems, the different architectures make different underlying assumptions about the utility functions of the agents using the system and their potential incentive structures.
On one side of the spectrum, Bitcoin assumes a fully hostile environment where agents will attempt to rewrite the history of transactions for their own benefit.
This assumption is fundamental, as it subsequently determines almost all the technical specifications of the blockchain and constrains the possible strategies of the actors in the system (miners and users alike).
From how consensus is reached to how rewards are split, the starting assumption drives what agents may or may not do. The costs of operating the system and its scalability are irremediably soldered to this fundamental assumption. It’s also useful to note how the mix of PoW and probabilistic rewards leverages the sunk cost fallacy to drive miners to stick around and provide stable mining services. Whether planned or not, this combination of blockchain features integrates into its operation a well-researched human bias which works in favor of the Bitcoin network.
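The probabilistic-reward side of this can be sketched as a toy lottery (parameters are illustrative, not Bitcoin’s actual ones): each block goes to a miner with probability proportional to her share of hash power, so realized rewards converge to that share, even though any single block is a gamble.

```python
# Toy model of proportional mining rewards; shares and block_reward are
# illustrative assumptions, not real network parameters.
import random

def simulate_blocks(shares, n_blocks, block_reward=6.25, seed=0):
    """shares: {miner: fraction of network hash power}. Returns total rewards."""
    rng = random.Random(seed)
    rewards = {m: 0.0 for m in shares}
    miners, weights = zip(*shares.items())
    for _ in range(n_blocks):
        winner = rng.choices(miners, weights=weights)[0]
        rewards[winner] += block_reward
    return rewards

rewards = simulate_blocks({"small": 0.1, "large": 0.9}, n_blocks=10_000)
ratio = rewards["small"] / (rewards["small"] + rewards["large"])
# The small miner's realized share hovers near her 10% hash share.
print(round(ratio, 3))
```

The long dry spells between wins for a small miner are exactly where the sunk cost fallacy does its work: having already paid for hardware and electricity, she is psychologically primed to keep mining rather than walk away.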
The body was not there when our unfortunate robbers broke in. Yet the starting assumption is that they are guilty (people act only in self-interest and, given the chance, would try to abuse the system); one just needs to get them to confess (PoW and rewards tied to resource commitment prevent them from taking that route). The strategies and payoffs in the opening Prisoner’s Dilemma example are entirely determined by the starting assumption: they did it, one just needs to find a way to get them to confess.
Permissioned blockchains soften this fundamental assumption and introduce the possibility of varying degrees of centralization, allowing settings where agents can trust each other over selected parameters and to different degrees. If agents are identified, legally liable in case of breach of contract, and share a common interest in the stated objective of the system, the architecture presumes that the environment is not fully hostile and, given a different setup, will operate faster and at lower cost. In between these two extremes we find many other variants playing with the multi-dimensional optimization problem of cost vs. scalability vs. speed. These are two opposing views of the world that reach back to the philosophical debates on the fundamental nature of human beings.
Machiavelli might have been a Bitcoin maximalist; Confucius, perhaps, more of a KYC-compliant blockchain type.
Who is right? It depends on the problem at hand. For example, an evolutionary agent-based model may give us some hints on which one is sustainable over the long run.
Sequential games, the modelling workhorse of blockchain-relevant mechanism design, are the crown jewel of blockchain architects. Too often, though, the issue to be tackled is cast as a multi-dimensional problem requiring advanced numerical analysis, with little attention paid to the underlying behavioral assumptions that underpin VCG. The sociological context of the users, how contracts are presented (competition vs. cooperation), the culturally-conditioned biases: all these elements challenge otherwise technically feasible but human-inadequate contracting and blockchain proposals. One may well do fancy mechanism design, but the problem may not be properly identified (we first need to establish whether the body was there when the robbers broke in) and, furthermore, agents may not act as expected (rationality fails).
If you believe the above points bear no relation to blockchain adoption, that they are simply not relevant to the issues confronting the hard-science, it’s-mostly-tech blockchain community, think about the frame promoted by most mainstream media when discussing cryptocurrencies. Right or wrong, it is one of dishonesty, money laundering and tax evasion.
It might be that potential blockchain users simply stay away because of the weird looks they get from their friends when they talk about it.
What is the frame for blockchains? Lots of funding for very little actual value. Ignoring these fundamental traits of human psychology, and how they coagulate into a social construct, will further hinder blockchain adoption. By focusing only on removing trust, most blockchain architectures are also doing away with many related, essential human elements that turn a reluctant person into a devout user.
Bring back the users, their social context and biased view of economics, make sure the problem is well-identified and perhaps then we may find human-sized solutions that will further boost blockchain adoption — both private and public.
 This is one of the simplest strategies to solve a game: iterative elimination of strictly dominated strategies.