Part of our nature as human beings is to make decisions — we need to act in order to survive in a dangerous and dynamic world. Nature endowed us with reasoning abilities in order to better navigate through the decision-making process and act on a desired result that will benefit us in some way.
However, according to social psychologist Jonathan Haidt, we are not wholly rational beings: much of what guides our decision-making is rooted in personal preferences and emotionally charged beliefs, a kind of systematic, motivated irrationality. Even so, both decision theory and game theory idealize scenarios in which rational beings reach logical conclusions. There cannot be a universal normative theory of decision-making because reality cannot be quantified. Moreover, each individual uses rationality only to support frameworks of belief based on personal preference and unconscious psychological motivations, and this is exemplified through our actions.
Jonathan Haidt is Professor of Ethical Leadership at New York University’s Stern School of Business and specializes in the psychology of morality and the moral emotions. He frequently refers to behavioral economics (which studies the effects of psychological, social, cognitive, and emotional factors on economic decisions) when explaining his views on human decision-making; the way we spend our money and behave politically reveals how we act upon what matters to us. In his article “Vote for Me (Here’s Why!),” Haidt describes a phenomenon that human beings display when coming into contact with new information:
…The confirmation bias… [is] the tendency to seek out and interpret new evidence in ways that confirm what you already think. People are quite good at challenging statements made by other people, but if it’s your belief, then it’s your possession — your child, almost — and you want to protect it, not challenge it and risk losing it. [It is] important for survival. (Haidt)
According to Haidt, the confirmation bias coincides with our motivated reasoning, both resting on the foundation of our personal preferences. In the article, he distinguishes between two questions the mind asks while interpreting new information: Can I believe it? and Must I believe it? (Haidt) In the former case, the mind searches for evidence that supports the proposition at hand, and if the agent’s preferences match up with the notion, the agent will accept the information. In the latter, the mind searches for contrary evidence that disproves the proposition, or rationalizes against it using already-held notions. A truth-seeking mind, by contrast, would test the validity of incoming propositions against a world independent of its own biases.
Decision theory and game theory attempt to put forth a model of decision-making by means of logical equations and rationality, all the while accounting for an individual’s preferences. Both are normative theories, meaning that they attempt to establish a standard as to how we SHOULD act if we are rational agents (versus descriptive theories, which describe how we act in practice). This assumes that the world is quantifiable and capable of being framed or “frozen,” as it is in logic and other forms of mathematics. Admittedly, once the world is shrunk down into a game-type scenario, like a sport or a card game, outcomes become much easier to predict.
When a “frame” is set, it becomes far easier to reduce a decision to a hit-or-miss scenario in which the options are limited and there is a higher probability of making a “good” choice. However, reality does not exist in a vacuum; when a system unfolds continuously, pausing it at one point is arbitrary, because the frozen snapshot ceases to be what the system is by nature. Life transcends a game, and attempting to create a standard only imposes an artificial limitation where there was none. Quantifying breeds inconsistencies, and such theories work only in isolated scenarios.
Expected utility theory assumes that one makes the decisions one believes will benefit one the most. However, it is unclear how options are weighed in this way. What is “good” for one person may not be “good” for another, and things get foggier still when more than one option is “good.” In sum, the options become fixed and comparable only once we assign values of importance to them. An article on expected utility theory describes this objection:
One objection to this interpretation of utility is that there may not be a single good (or indeed any good) which rationality requires us to seek. But if we understand “utility” broadly enough to include all potentially desirable ends—pleasure, knowledge, friendship, health and so on—it’s not clear that there is a unique correct way to make the tradeoffs between different goods so that each outcome receives a utility. There may be no good answer to the question of whether the life of an ascetic monk contains more or less good than the life of a happy libertine—but assigning utilities to these options forces us to compare them. (6)
This objection makes a strong case for why there can be no universal model of decision-making. If utility is defined as “resulting in the most ‘good,’” then it becomes very hard to decide what is ‘good’ for each individual in different circumstances. What a theorist assigns as ‘good’ for one person may be detrimental in practice, as in the case of a rebellion against an imposed or restricted national religion. Moreover, asking which option is “more good” in the first place forces there to be a “better” option and a “worse” one, distinctions that are then determined by the individual’s subjective notions.
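The expected-utility calculation under discussion can be sketched in a few lines. The scenario, probabilities, and utility values below are invented purely for illustration, not drawn from any of the cited sources:

```python
# A minimal sketch of the expected-utility calculation.
# All scenarios and numbers here are made up for illustration only.

def expected_utility(outcomes):
    """Sum probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two options for an evening, with arbitrary assigned utilities:
stay_home = [(1.0, 5)]             # a certain, modest payoff
go_out    = [(0.5, 9), (0.5, -2)]  # great if it goes well, costly if not

print(expected_utility(stay_home))  # → 5.0
print(expected_utility(go_out))     # → 3.5
```

On these numbers the theory directs the agent to stay home; but nothing in the formalism says where the utility values come from, which is exactly the forced comparison the objection above describes.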
Game theory and decision theory share a major blind spot: neither accounts for a self that changes over the course of the action, an equation that changes by being evaluated. This dynamic, real-world system cannot be captured by such models, for we are far from the cold, rational agents that decision theory and game theory idealize. The Stanford Encyclopedia of Philosophy’s entry on game theory describes the subjective element of expected utility:
An economic agent is, by definition, an entity with preferences. Game theorists, like economists and philosophers studying rational decision-making, describe these by means of an abstract concept called utility. This refers to some ranking, on some specified scale, of the subjective welfare or change in subjective welfare that an agent derives from an object or an event. By ‘welfare’ we refer to some normative index of relative well-being, justified by reference to some background framework… In the case of people, it is most typical in economics and applications of game theory to evaluate their relative welfare by reference to their own implicit or explicit judgments of it. This is why we referred above to subjective welfare. (Sec. 2.1)
By referring to the person making the decision as an “agent” or “entity,” the author dehumanizes us in order to fit the ideal agent at the foundation of decision and game theory. Yet even collectively, as a country, we are humans making decisions that are relative to and molded by our environments and individual experiences. In defining “welfare” as “some normative index of relative well-being, justified by reference to some background framework,” the entry again sets aside the unconscious motivators that provide the foundation for the options the individual is weighing. The passage also repeatedly brings up the subjective element of the abstract concept of utility: something is only as useful to someone as they require it to be, and this necessitates a dynamic relationship between the subject and the object of their decisions, one that cannot fit into the static framework of decision theory. The entry goes on to describe how the individual’s subjective attitudes would fit into such a theory:
...Decision theory is as much a theory of beliefs, desires and other relevant attitudes as it is a theory of choice; what matters is how these various attitudes (call them “preference attitudes”) cohere together. The focus of this entry is normative decision theory. That is, the main question of interest is what criteria an agent’s preference attitudes should satisfy in any generic circumstances. This amounts to a minimal account of rationality, one that sets aside more substantial questions about appropriate values and preferences, and reasonable beliefs, given the situation at hand. The key issue in this regard is the treatment of uncertainty. (Intro)
The article states that a normative theory frames the agent’s decision in a way that disregards the circumstances, and the legitimacy of the agent’s preferences in relation to those circumstances. In reality, people frame their decisions in ways that are unique to them, motivated by their experiences and dispositional beliefs. It would be a mistake to fail to account for these, as well as for societal and cultural influences. What is the point of a decision-making model, normative or descriptive, that fails to account for the psychological preferences that guide those decisions? Clearly decision and game theory idealize simple, isolated, gamble-type scenarios. The article goes on to raise another problem for decision theory:
There remains a deeper worry about the lack of a rational basis for decision framing—that it makes EU theory too permissive in that any choice dispositions can be represented as reflecting rational preferences, via astute selection of decision frame. Even worse, it seems that if an agent’s preferences were modelled as irrational, this would suggest a defect in the decision framing rather than a defect in the agent’s reasoning, since, after all, the decision model is supposed to capture everything that bears on an agent’s preferences. This issue is far-reaching. (22)
One persistent way individuals use rationality is to support their deeply held beliefs, accepting or rejecting new information based on their current mental state. So if, say, an addict is deciding between a) using a harmful illegal substance and b) driving drunk, he is not going to make a “rational” decision whichever way he goes. Yet decision and game theorists would quantify this choice and plug it into a mathematical model, eliminating the sensitivity of the decision.
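A hypothetical sketch of how such a model would “decide” the addict’s dilemma makes the point concrete. Every probability and utility below is invented for illustration; no real decision theorist has endorsed these numbers:

```python
# Sketch of reducing a fraught human choice to a comparison of
# assigned numbers. All probabilities and utilities are invented.

def expected_utility(outcomes):
    """Sum probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

options = {
    "use substance": [(0.7, -4), (0.3, -9)],   # expected utility: -5.5
    "drive drunk":   [(0.5, -3), (0.5, -10)],  # expected utility: -6.5
}

# The model simply picks the larger expected utility, declaring one
# of two harmful acts the "rational" choice.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # → use substance
```

The formalism returns an answer either way; what it cannot register is the point made above, that neither branch of the choice is “rational” in any humane sense.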
It is conceivable that at times it is most beneficial to make a decision that lies outside a pre-existing frame, depending on the situation at hand. There cannot be a universal model for how one should make decisions, because each individual has different beliefs and desires, and different ways of attaining those ends. Although we share the rational and emotional complexes that are unique to humans, we do not make decisions similarly. Consider a mother who would do anything for her child, even at the cost of her own life; it can be argued that she does it for her own benefit, knowing her child would be safe. Such is the problem of altruism.
In any case, the mother would be acting on her deep love for her child, and it is very unlikely that she had to step back and use her reason to weigh any options; even if she did, her emotions eventually chose for her. We also may not choose rationally when something we were long used to changes, like a faulty turn signal in the car that we continue to press out of habit when making turns, even though we know it is having no effect. In sum, an accurate depiction of our decision-making abilities would need to be tailored to the individual and the current situation they find themselves in. One’s mental state (being upset, tired, or under the influence of drugs) may cause one to act irrationally, as may new information arriving during the temporal course of one’s decision-making; a normative theory of decision-making cannot account for real-world dynamic scenarios, only static, game-like situations.