The Choices We Have To Make | Judgment, Reasoning, & Decision-Making (Cognitive Psych #9)

Nov 20, 2020 09:00 · 7620 words · 36 minute read

Decision-making

Listen closely. Take note of everything. Make the best choice. An unusual disease that started out with isolated cases in a small community has now become a local outbreak in your town of 600 people. The health department drafted two programs and you, the mayor, have the power to implement one of them. In Program A, you’ll be able to save 200 citizens. In Program B, there’s a 1-in-3 chance that all 600 will be saved, but a 2-in-3 chance that no one will be saved. Which program will you vote for? Here’s another problem.

Same disease, same town, same health department, same mayor - which is you. In Program C, 400 will surely die. But in Program D, there’s a ⅓ chance that no one dies, and a ⅔ chance that all 600 will die. Which program will you vote for? In 1981, the psychologists Amos Tversky and Daniel Kahneman published an article in the journal Science which would permanently change how we viewed our cognitive abilities, inspire decades of research exploring their work, and shape the course of cognitive psychology, cognitive science, and behavioral economics. Kahneman would win the 2002 Nobel Memorial Prize in Economic Sciences for the decades of work following this research, and he dedicated the award to his long-time friend and collaborator Tversky, who passed away in 1996.

The problems you just encountered were 2 of the 10 in that 1981 article, entitled “The framing of decisions and the psychology of choice”. Kahneman and Tversky found that people hold on to certain gains rather than risk the chance of failure, even when greater gains are at stake. But when loss seems all but inevitable, people would rather risk it all, even when a smaller, certain loss is on the table. And the thing is that the two problems you just worked through had the same options, just worded and framed differently! Either 200 will live and 400 will die, or you hang on to the hope that everyone lives against the greater chance that everyone doesn’t make it. Tversky and Kahneman pointed out that even when given all the information we need, we often make bad decisions.
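To see the equivalence in plain numbers, here’s a minimal sketch (my own illustration, not something from the 1981 article) computing the expected number of survivors under each program; all four work out to the same 200 on average.

```python
# Each program is a list of (probability, survivors) outcomes.
def expected_survivors(outcomes):
    return sum(p * survivors for p, survivors in outcomes)

program_a = [(1.0, 200)]                # 200 saved for sure
program_b = [(1/3, 600), (2/3, 0)]      # 1-in-3 chance all 600 saved
program_c = [(1.0, 600 - 400)]          # 400 die for sure, so 200 survive
program_d = [(1/3, 600), (2/3, 0)]      # 1-in-3 chance no one dies

for name, program in zip("ABCD", [program_a, program_b, program_c, program_d]):
    print(name, expected_survivors(program))  # every program averages 200 survivors
```

The only thing that differs is whether the outcome is certain or gambled, and whether it is described in terms of people saved or people lost - which, as we’re about to see, is exactly what sways the choice.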

So, unlike the deep belief that humans are Homo economicus - economic problem-solvers and decision-makers who are logical and rational - we’re actually easily swayed by our dispositions, expectations, and emotions. And the surprising thing is that it’s not a bad thing that we rely on shortcuts and other sources to make decisions, because given the limits and unpredictability of being human, we need all the help we can get. Judgment, reasoning, and decision-making are all about taking information to make inferences, conclusions, and choices. Problem-solving, judgment, reasoning, and decision-making look like similar processes because they all involve taking new information and our existing knowledge to do something, anything. The difference is that while problem-solving uses information in order to help us complete a task, judgment, reasoning, and decision-making involve making sense of the data itself: what it tells us, and what trends we should look out for.

So, researchers in this field tend to look at the three together - as we’ll do in this episode. But first, even if they’re a close trio of constructs, they still do different things. Judgment is all about making inferences (like predictions or guesses) about what’s likely to happen, based on the information you have at present. Weather forecasts, presidential betting based on pre-election surveys, and calculating your highest possible score on an exam based on a quick scan of the items all involve judgments, because you looked at the data and guessed how likely it is that it will rain, that your preferred candidate will win, or that you will pass the test. Meanwhile, reasoning also takes information and uses it to make three types of observations.

In deduction, you work with premises (a set of assumptions that hold true anytime and anywhere) which you then logically follow to reach a conclusion. These statements, called a syllogism when taken together, will always give the right conclusion as long as our assumptions are true and the process is valid. For example, we can assume that if it rains and you have nothing and nowhere to shield you from the downpour, then you’ll get wet. And if it happens that you forgot your umbrella and you find yourself in the middle of an empty street, you might as well dance in the rain because you’ll get wet anyway. Meanwhile, you could say that resources are limited, so they will run out when they get used up. Then you can add that happiness is a resource.

So does that mean we should stop people from being happy so we won’t run out of happiness in the world? The process of deducing this “happiness conservation” is valid based on philosophical principles of argumentation, but the contents of the arguments don’t reflect how the world actually works, so you end up with a weird conclusion. Next, induction works with specific observations or cases, then tries to infer a generalizable rule or a prediction based on what the evidence says. Induction is actually a key process in judgments, so you’ll notice some overlap between them. Examples of inductive judgments would be predicting that it will rain tomorrow because it rained most of last week, or guessing that you’re not violating any laws when crossing the street at a spot where everyone else seems to be crossing too. You might notice that it won’t necessarily rain tomorrow, or that people may not actually be crossing correctly - and you’d be right.

That’s because induction and judgments are only probably true: We haven’t proven that our generalizations actually work for everyone, and so we’re confident about our conclusions only to the extent that our evidence is reliable and valid. That is, when we have more observations to work with, when the evidence was collected using more precise procedures, and when the data is based on a representative set of what could possibly happen, then we’re more likely to trust our inferences. That’s why we trust weather forecasts even though they’re a bit like our daily inductions: Meteorologists take centuries of weather data and large amounts of readings from their instruments to inform their judgments on whether a typhoon is on its way, so that citizens and the government can prepare for the worst. A third type of reasoning is called abduction. Unlike following assumptions to a conclusion as in deduction, or inferring a universal rule as in induction, abduction gives you pieces of data (whether observations or evidence) and your job is to figure out what led to the current situation.

Literally, you’re trying to abduct or capture the best explanation or cause for what you’re experiencing. The problem is that there are many possible reasons why things are the way they are, so you work with whatever limited information you have to suggest explanations, which you then rule out as new evidence comes in. The likelihood principle in unconscious inference, weighing your priors against the likelihood of the evidence in Bayesian inference, and divergent thinking in problem-solving are all quite similar to abduction because they involve trying to figure out the most likely cause of things. Hypothesis testing in research and solving Frances Glessner Lee’s “Nutshell Studies of Unexplained Death” (like crime investigations in general) are examples of abductive thinking. Finally, after you’ve reasoned through the data and made judgments about the likelihood of things happening, you can engage in decision-making (choosing the best among the alternatives available to you) which, like problem-solving, helps you achieve the goals or end states you want.

And these processes work together in our daily lives. For example, when deciding where to eat or what food to get, you use deduction to narrow down what your cravings will want or your budget will allow. Then, you judge how long it will take to get food, how much it will cost you, and whether you’ll be satisfied with whatever you choose. You can also induce and abduce satisfaction by asking your friends where they want to eat and what they suggest you try, if you decide to go to a food place you’ve never been to. Finally, with all the food choices available, you take your judgments, inductions, deductions, and abductions to make a decision on what to order.

All of this processing happened to help you solve the problem of satiating your hunger in the most satisfying way available. Oh, wait. By the way, important footnote: Our problem-solving, judging, inducing, deducing, abducing, and decision-making abilities seem to be foolproof, always giving us what we need and want because we systematically and disinterestedly consider all relevant information before acting. Actually, we don’t do that - not all the time, at least. See, Kahneman later elaborated on his work with Tversky and proposed that we have a dual system of cognition which guides how we process information. What we’ve been talking about so far is the controlled, deliberate, and systematic System 2, which processes information extensively, yet at the cost of effort and attentional resources.

While we do want to be as careful as possible when thinking about things, we have to admit that, as humans, we’re too limited (and often too lazy) to do that all the time. Controlled processing requires that people are able to do such deliberate thinking (meaning that they have the time, energy, and mental capacity to sit with their thoughts), and are motivated to do so (that is, they really want to exert energy on such a demanding task). When either or both ability and motivation are absent, people won’t have the resources to deal with a large amount of data. So, they look at these observations using the automatic, intuitive, and heuristic System 1, which gives fast and effortless answers - but at the price of accuracy and possible usefulness. That’s because heuristics are general rules that we apply across many situations, which allow us to get quick conclusions, decisions, and solutions.

But since these rules don’t really apply across all cases, we get approximately fitting answers at best and harmful outcomes at worst. Still, it’s not a bad thing that we’re actually on auto-pilot System 1 most of the time: as we said in attention (on automaticity and proceduralization) and memory (with its nondeclarative forms), we can let automatic processes take over the small things that keep us running every day, so that our more effortful System 2 kicks in only for the things that we really need to process in full. With that, social psychologist and judgment and decision-making researcher David Dunning (of Dunning-Kruger effect fame) says that years of research in his field can be summarized in five insights which shed light on the benefits and biases of our cognitive abilities. We’ve already looked at the first insight: Because people aren’t always able or motivated to engage in controlled processing, our judgment, reasoning, and decision-making processes often rely on heuristics that give fast answers, while making some sacrifices on accuracy and relevance. As we’ll see in the next sections, we do have a lot of pitfalls to overcome when thinking about things, but hopefully knowing more about our limitations can help us correct for them when we need to.

We use induction and abduction to make judgments about how likely it is that things will happen and why things are the way they are. Remember that induction and abduction require you to look at large amounts of observations or evidence to try and predict what will happen in the future or infer a cause for some situation. However, regardless of whether we use controlled processing or heuristics, we take whatever information is available to us - but sometimes fail to consider whether it is relevant, useful, or detailed enough to help us reach a reasonable conclusion. We can see this trend in the strategies we employ when considering large amounts of information. One of them is called the availability heuristic: More salient or accessible events in memory are deemed to be more likely. Let’s look at a possibly morbid example.

I’ll list four medical conditions; rank them based on how many Filipinos died due to them: Pneumonia, diabetes, neoplasms, pregnancy complications. Ready? Let’s check your ranking. The availability heuristic says that you likely ranked pneumonia and diabetes highest, pregnancy complications somewhere in the middle, then squinted while figuring out what neoplasms are. In fact, based on 2017 Philippine Statistics Authority mortality data, neoplasms (or tumors, often malignant) are the second leading cause of death. Pneumonia and diabetes trail behind at 4th and 5th, while pregnancy complications are nowhere in the top 10 because, unlike the other conditions, they affect only the roughly 50% of the Filipino population capable of bearing children, whereas the other diseases can affect everyone.

Pneumonia, diabetes, and pregnancy are very easy for us to think about because they’re conditions that our loved ones may have or things often talked about in the media, while “neoplasm” (though it refers to a manifestation of some types of cancer) is quite a technical term that we may not have heard before. If I had said “tumor”, maybe your ranking would have been higher. Still, if your goal was just to name which diseases tend to contribute most to the nation’s mortality rates, then the availability of some conditions over others can give you reasonable estimates. Notice that the way we described the causes of death could influence how accurate our judgments could be. The neoplasm-tumor case is an example of a failure to unpack: of making judgments without first specifying or clarifying what exactly is being asked of you.

Had you clarified that neoplasms are a manifestation of cancer, surely your judgment would have placed the condition higher up on the list. But if we specified further that neoplasms can be benign or malignant, you may have doubted yourself even more, because you’d realize that not all tumors are deadly. Failures to unpack actually matter in our lives when completing a long-term project, because they explain the planning fallacy: When we commit to a task without first understanding how many phases and subgoals have to be completed, we may underestimate how long it will actually take to complete the project and be overconfident in our ability to deliver high-quality results on time. That’s why it’s recommended to always break down any project into subtasks before starting, so you get a more reasonable estimate of the time and resources needed. We also see failures to unpack in partitioning: how people break down the question given to determine what outcomes or circumstances they have to consider.

One of the reasons why we end up double-scheduling is that we’re often asked about our availability in the vaguest terms: “Are you free for a meeting next week?” “Next week” is 7 days, or 5 if you only count workdays. Within each day, you typically have 8 am to 5 pm. It’s all too easy to commit to “next week” without realizing that you also had something scheduled on the same day of that “next week”. Asking about a specific date and time would have cleared up the confusion. Another example is the question: “When is it likelier to rain - on Tuesday, or on any other day?” A gut-reaction response would be that they’re equally likely, 50-50. Actually, no: You’re comparing one day of the week with the other six, so there’s more chance of rain across six days than across one, and the probabilities are around 14% for Tuesday and 86% for any other day (taken together).
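A quick back-of-the-envelope check of that partition (assuming, purely for illustration, that rain is equally likely to fall on any of the seven days):

```python
# One named day versus the other six, if rain is equally likely on any day.
p_tuesday = 1 / 7
p_any_other_day = 6 / 7
print(f"{p_tuesday:.0%} vs {p_any_other_day:.0%}")  # 14% vs 86%
```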

Finally, because of focalism, we can be too focused on the hypothetical situation that the question asks about while neglecting alternative comparison points or background events that would dilute the impact of the thing we’re judging. As we’ll see in decision-making, we’re not that good at predicting how we’ll feel when we get to an event we’ve been waiting for: We can get too excited or too afraid about doing something because of what we know at present, but feel more apathetic when we actually get to doing it, because we focus too much on the event itself while neglecting the impact of other experiences (like other things that happen that day or the reactions of others). As another example, “Are you happy with your work?” and “Are you finished with your work?” both ask about the state of your completion of a task, so you might think that people would hand in their work either way when asked. Actually, when people are in a good mood, their positive state lets them say they’re happy and done, yet they’re motivated enough to continue, so they’re never really finished. Meanwhile, people in a negative mood will stop working when asked if they’re finished, but keep going when asked if they’re happy with what they’ve done - because they feel bad overall and are not yet satisfied.

Essentially, these two questions both look at the state of completion, yet the simple difference in comparison point (satisfaction versus finishing) makes a large difference in people’s judgments. Availability can also lead people to believe in illusory correlations: People can convince themselves that two events, which have no relationship with each other, are actually related in some way, because they distinctly remember times when the two co-occurred in the past. Superstitions and stereotypes are examples of illusory correlations. Maybe you did a ritual before an exam or an interview and happened to get a high score or favorable reviews. So, in the future, you do the same actions because you believe they bring good luck, even if the supposed benefit showed up in fewer cases than it failed to.

Or maybe you encountered people from a certain social group and had not-so-nice experiences with them, so you conclude that everyone from that group is equally unpleasant to be with. In both cases, you paired ritual with success and “bad trait” with “social group” despite having too few experiences to actually prove that your conclusion holds for all cases. The point is that availability helps us make quick estimates, but we shouldn’t confuse what is easy for us to remember with what is actually likely to happen in real life. Another strategy is the representativeness heuristic: We’re likely to believe that something belongs to a category because the specific case resembles the category’s defining characteristics - or at least, the characteristics we perceive them to share. Let’s say you’re seated in some public place, waiting for a friend you’re meeting.

Then, you see a person wearing black everything (from their shoes to their shirt), complete with black-painted nails. What genre of music do they tend to listen to? Another person walks past you, quite tall and of medium build. What sport do they play? Then, two people holding hands while walking. How are they related? Because of our preconceptions about metalheads and emo kids, about basketball and romantic relationships, we tend to assume that people are something they’re not necessarily. Of course, it’s useful for us to make attributions and inferences about others (like those regarding their personality, position, and preferences) because we don’t have time to learn so much about each person we meet.

If we’re going to a job interview, we assume that the most well-dressed person would be the highest executive (based on our representations of what different people on the corporate ladder look like), and we can’t afford to disrespect anyone at the risk of losing a job opportunity. So, the representativeness heuristic tells us to interact with people and things in a certain way as a precaution, then update our actions if we get the opportunity to interact with them more. The problem is that the representativeness heuristic can be skewed by some biases in how we process information. One thing we have to pay attention to is the base rate: The objective probability in the population that a person or thing really belongs to a certain category. Certainly, black has been associated with the fashion and culture of fans of certain music genres, but there are people who just love black and enjoy other types of music.

Surely, height is an important factor in being drafted into a basketball league, but there are a lot of players who are proficient despite being shorter, and a lot of tall people who don’t like sports. And there are many people who hold hands as a sign of endearment even if they are not in a romantic relationship. Because of base rate neglect, we judge that a person belongs to a category because they resemble it (the likelihood), even when the base rate (the prior in Bayesian inference) says that membership is improbable. Here’s another problem. You’re a recruiter for a company and your job is to check the background and qualifications of applicants. One applicant writes in their CV that they graduated top of their class in college, were very active in org and student government work, organized various fundraising events, were in charge of looking for sponsors and advertisers for their events, are knowledgeable about accounting and financial management, and like interacting with people for both work and socials - that’s all the information you have for now.

Which of the following is more likely: They’re applying for a marketing position, or they graduated as a business major and are now applying for a marketing position? We know that people aren’t one-dimensional, so we tend to occupy multiple roles, positions, identities, personalities, and groups in society. But what the conjunction rule says is that the likelihood of a person belonging to either one of two categories will always be at least as high as the likelihood of them belonging to both. You might think that you have a promising business-major candidate who would have good organizational fit with your marketing department. But the rule says that either “business major” or “marketing position” alone is more likely than both together. Think about it: There are many college graduates who go into marketing even when coming from other degree programs, and there are a lot of business graduates who go into other careers that may not be relevant to their college experiences.
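A small numerical sketch (with made-up probabilities, purely for illustration) shows why the conjunction can never be the likelier bet:

```python
# Hypothetical probabilities for the recruiter example.
p_marketing = 0.30  # chance a random applicant is applying for marketing
p_business = 0.40   # chance a random applicant was a business major

# The conjunction "business major AND applying for marketing" can never exceed
# either probability on its own; if the two were independent, it would only be:
p_both_if_independent = p_marketing * p_business   # 0.12
p_both_upper_bound = min(p_marketing, p_business)  # can never go above 0.30

print(p_both_if_independent, p_both_upper_bound)
```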

A person who does both would be rarer than a person who is just either one. Finally, one of the ways we can overcome the biases of availability and representativeness is by basing our judgments on larger and more generalizable samples, following the law of large numbers: When we have a larger set of observations, we’re more confident that our conclusions actually reflect the real state of things, and we’re more likely to encounter the same trends when going outside the sample or predicting something in the future. As psychology students, you’ll find this familiar - it’s exactly why you spend so much time discussing sampling strategies when conducting research. Now that we know the biases that can skew our reasoning, we have to admit that sometimes, people can still resist yielding to the evidence that they’re wrong or that their line of thinking is flawed. Because of confirmation bias, we can look for and pay attention only to information that supports our beliefs while disregarding information that negates them.

Additionally, the myside bias causes us to reevaluate evidence in a way that confirms and further strengthens our beliefs. And unfortunately, because of the backfire effect, we can actually hold on to our beliefs even more when we’re faced with opposing information, regardless of how strong it is. We can pick out what confirms our stance, disregard and downplay the validity of opposing arguments, and rest assured in the veracity of our beliefs. It is exactly these processes that fuel polarization in all types of beliefs and the continued influence of disinformation in our social and political lives. Essentially, induction and judgments, in all their strengths and problems, inform not only how we estimate likelihoods and probabilities but also how we make sense of the people, events, and things we encounter every day.

Deduction and logical reasoning tell us whether our observations follow the rules or principles we assume they should follow. We said that syllogisms consist of premises or assumptions which, when they are true themselves and are followed through using valid principles of logical reasoning, lead to a generalizable conclusion. Syllogisms come in two forms. Categorical syllogisms make statements about group or category membership and are usually marked by the words all, some, and no or none. For example, the classic “All humans are mortal; Socrates is a human; Therefore, Socrates is mortal” is a categorical statement about Socrates’ mortality. Meanwhile, conditional syllogisms present arguments about conditions or causes, typically in if-then form.

Our opening example, “If it rains and you have nowhere to run, then you’ll get wet”, is an example of this. Again, these reasoning processes look deceptively straightforward, but we have a lot of roadblocks to bypass to make sure that we get true and valid conclusions. Our main issue is that, as we’ve seen with confirmation bias, we have a tendency to prefer confirmatory over disconfirmatory information. We tend to remember and hang on to observations and conclusions that match our expectations, even if the arguments that give rise to them are invalid or, worse, our so-called “evidence” doesn’t actually match what’s happening in the world. Confirmation bias shows up in a lot of ways. The belief bias shows us that syllogisms don’t have to be valid in structure for us to believe them - they just have to be believable.

The sequence “If it’s raining, then I need an umbrella. I need an umbrella, therefore it’s raining” seems plausible because rain would be one of the reasons why you’d have an umbrella with you. However, following the principles of logic, this is a formal fallacy called affirming the consequent: you treat the truth of the consequent (“I need an umbrella”) as proof of its antecedent (“it’s raining”), even though the rule only guarantees the reverse. Of course, there are many reasons why an umbrella would be useful, but the multipurpose nature of the umbrella will never make this sequence of arguments valid. Another problem is that, because of confirmation bias, we tend to test whether a rule holds true more often by giving examples of cases that conform to the rule rather than figuring out where it falls apart.

That is, like hypothesis testing, we should test rules following the falsification principle, where we identify cases that both confirm and negate the rule so we can see where it applies. For example, in one classic task (Wason’s 2-4-6 problem), the experimenter gives you a series of numbers and your job is to figure out the rule. You can propose a series of numbers yourself and the experimenter will tell you whether it follows the rule or not. The series they give is {2, 4, 6}. What’s the rule? You might think that it is “even numbers increasing by 2”, so you give {10, 12, 14}. The problem with this is that if the experimenter says yes, you don’t know whether it’s because the rule you thought of is the actual rule in place, or because your sequence happens to conform to a different rule - the one the experimenter actually has in mind.
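Here’s a rough sketch of why a confirming test can’t tell the two rules apart (the rule functions below are just my illustration of the scenario described here):

```python
def my_guessed_rule(a, b, c):
    # The rule you suspect: even numbers increasing by 2.
    return a % 2 == 0 and b == a + 2 and c == b + 2

def experimenters_rule(a, b, c):
    # The rule actually in play: any three numbers in increasing order.
    return a < b < c

# A confirming test satisfies both rules, so a "yes" can't separate them.
print(my_guessed_rule(10, 12, 14), experimenters_rule(10, 12, 14))     # True True
# A test that breaks your guess but still gets a "yes" falsifies your guess.
print(my_guessed_rule(13, 365, 1896), experimenters_rule(13, 365, 1896))  # False True
```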

To definitively identify the rule, you need to give sequences that falsify your assumptions and see how the experimenter reacts. Had you given {13, 365, 1896}, {-51, -12, 0}, or {i², √2, π}, you’d have found that the experimenter’s rule is simply three numbers arranged smallest to largest (left to right on the number line). Confirmation bias is important to overcome because we need to be critical when interpreting the evidence available to us. If you look at it again, a syllogism presents us with a cause-and-effect relationship, where we hope that an intervention would lead to a desirable outcome. And that’s where the Cell A bias comes in: we tend to focus solely on cases where our intervention leads to good effects while failing to consider what would’ve happened if we hadn’t done anything.

This bias got its name because there are actually four scenarios we have to consider together, which we can see in action through a classic demonstration. Assume that you’re a doctor at a time when medicine wasn’t yet that advanced. You don’t have access to the medications and procedures developed through many years of research. Instead, you have bloodletting, an early practice based on the idea that diseases were caused by bad or excessive blood, so healing a person involved letting them bleed the sickness out. You, being the empiricist that you are, decide to test the effectiveness of bloodletting by bleeding some sick patients and leaving a control group to their sickness.

There are four outcomes: they bleed and they get cured, they bleed and nothing happens, you don’t do anything and they get better by themselves, and you just leave them and they get worse. Cell A, a positive/positive match between treatment and outcome, is not informative on its own because it doesn’t tell us how effective our actions are compared to the alternatives. Instead, Cell D, the negative/negative match, is most crucial to consider because it strongly affects the recovery statistics. For example, as shown on screen, we have the four outcomes and hypothetical numbers of people who were either given or denied bloodletting, and what happened as a consequence. While all the other numbers are the same across both scenarios, a simple change in Cell D drastically shifts how effective we think bloodletting is.
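The actual counts are only shown on screen in the video, so here’s a sketch with made-up numbers chosen to reproduce the proportions described below; only Cell D differs between the two scenarios.

```python
# Cells A and B (bloodletting): 100 recover, 200 don't -> 1 in 3 recover.
bled_recovered, bled_not = 100, 200
# Cell C (untreated, recovered) stays the same in both scenarios.
untreated_recovered = 20
# Cell D (untreated, not recovered) is the only number that changes.
cell_d_scenario_1 = 80   # untreated recovery: 20/100 = 1 in 5
cell_d_scenario_2 = 10   # untreated recovery: 20/30  = 2 in 3

bled_rate = bled_recovered / (bled_recovered + bled_not)
for cell_d in (cell_d_scenario_1, cell_d_scenario_2):
    untreated_rate = untreated_recovered / (untreated_recovered + cell_d)
    print(f"bloodletting {bled_rate:.2f} vs untreated {untreated_rate:.2f}")
# Scenario 1: 0.33 vs 0.20 -> bloodletting looks helpful
# Scenario 2: 0.33 vs 0.67 -> bloodletting looks harmful
```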

In the first scenario, it turns out that bloodletting (even despite complications and other problems) results in a greater chance of recovery than leaving the patient alone (which works in only 1 in 5 cases), so we have tempered hopes that bloodletting will help 1 in 3 patients get well. But in the second scenario, patients who are left to heal by themselves do so 2 out of 3 times, and bloodletting actually harms those chances because only 1 in 3 survive; in this case, we’d likely stop using bloodletting and just hope that patients get well by themselves. What these biases tell us is that logical reasoning will only help us reach valid and true conclusions when we consider all the information available to us - both the information that conforms to our expectations and the information that disconfirms our assumptions - so that we can pursue actions that will reliably help us achieve our goals. Even when we try to make good and well-informed conclusions and decisions, we still have to be aware of biases and limitations which can skew how we think. We’ve said that we often use heuristics when processing information, which can lead us either to useful approximations or to biased thinking when we fail to consider alternative options or disconfirming evidence.

Nowhere have these quirks of human thinking been more explored than in decision-making. For the longest time, researchers believed that humans made decisions based on expected utility theory: We are rational and systematic thinkers who look at all the information available and choose the option that gives us maximum gains or utility. Because of optimization, we compare alternatives on relevant dimensions to figure out the best choice. And if the options both look good, we follow the dominance principle: if one of the options is at least as good on every dimension and better on at least one, we don’t hesitate to choose it. But they were wrong. We’re cognitive misers who have limited capacities and prefer heuristic thinking, and motivated tacticians who put in the effort only when something we value is at stake in the decision.

We follow informal reasoning, dealing with probabilities rather than certainties given the dynamic and tentative nature of the information available to us, so it’s hard to ascertain the validity of the arguments given to us. We rely on fast-and-frugal heuristics and a take-the-best strategy, making decisions by quickly processing just enough information on the most salient dimensions of comparison to come up with an answer. We’re prone to satisficing, choosing whichever option we deem best at the time we decide, based on a quick evaluation of the data at hand, even though better options might have turned up had we waited a bit longer. Because of motivated or hot cognition, our emotions and motivations can shape how we make decisions by influencing what information we pay attention to, what memories we retrieve, and how we process evidence. For example, people in a positive mood tend to look at the world in a top-down fashion, letting their expectations and previous experiences take over, because more systematic processing could lead them to information that reduces their satisfaction with their choices.

Meanwhile, people in a negative mood tend to think in a more bottom-up fashion, taking more time to process everything because they hope that incoming data, mixed with relevant knowledge, can help them make choices that would bring them out of their negative state. Approach motivations can foster decision-making geared towards better goal attainment, while avoidance motivations (driven by fear and anxiety) can override rational thinking, leading to choices that promote short-term but not necessarily long-term safety. Still, though we do consider how we think we will feel in the future when making our choices, we have to remember that a lot of things can happen between now and then, so we may end up feeling nothing like we thought we would. At the same time, we have to be careful about incidental emotions, because how we feel can sometimes be irrelevant to the decision we’re making. Taking these trends together, we see that when we make decisions, we tend to focus on and prefer things that are certain (over those that are merely likely), things that are happening now or soon (over those that won’t manifest until much later), and things we can lose (rather than gains we can achieve).

These are actually the key predictions of Kahneman and Tversky’s prospect theory, one study of which we looked at through the problems at the beginning of this episode. We saw the certainty effect (people holding on to certain gains rather than gambling for the possibility of gaining more) when, in the original 1981 article, 72% chose the sure saving of 200 people rather than risking a 1-in-3 chance of saving all 600. This also reflects loss aversion (and the risk aversion that comes with it): losing something hurts us more than gaining something of equal magnitude makes us happy. Notice, though, that the two problems we started with had the same options, just worded as saving versus letting people die. This demonstrates the insight that reference points matter: our choices can shift all over the place depending on how we make comparisons.
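Tversky and Kahneman later formalized these ideas with a value function defined over gains and losses relative to a reference point; the snippet below is a rough sketch using the approximate parameter estimates they reported in their 1992 follow-up, my own illustration rather than something from this episode.

```python
# Prospect-theory-style value function: outcomes are valued relative to a
# reference point, and losses are weighted more heavily than equal gains.
def value(x, alpha=0.88, loss_aversion=2.25):  # approximate 1992 estimates
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

print(round(value(100), 1), round(value(-100), 1))  # ~57.5 vs ~-129.5: the loss looms larger
```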

Indeed, only 22% chose the certain death of 400 people (which is the same as choosing the assured safety of 200), with 78% (up from 28% in the gain frame) now choosing to risk it all when the problem is framed in terms of what is lost rather than what can be gained. Rationally, people should choose the same option regardless of how things are worded, but the framing effect tells us that people are more risk averse in a gain-framed problem, where they prefer the certainty of whatever benefits are available, but are willing to gamble under a loss framing, when they think that loss is inevitable anyway. Together with future discounting (preferring the certainty of current events while failing to consider how impactful future events can be on one’s decisions and outcomes), risk-seeking in loss-framed situations can then explain the sunk-cost effect, where people keep pursuing a particular option or decision no matter how negative the outcomes they keep facing are: Because they’ve already invested a lot in that endeavor, they believe that they might as well keep going with the hope that things will get better (even if they don’t). In other cases, when people know that the decisions they make have a good chance of doing more harm than good, they demonstrate the omission bias, where they let things happen as they will rather than act to try to get a better outcome. Similarly, because of the status quo bias, people tend to follow the default choice already made for them by others, or keep making the same decisions as they’ve made in the past, even if the situation now allows for other, possibly more beneficial alternatives.

These phenomena show that regret, the feeling of dismay at having made the wrong decision, can keep people from making good choices, especially given the uncertainty of better outcomes after sunk costs, after doing something rather than omitting action, or after moving away from the status quo (what they’re used to and sure of). These influences on how we account for gains, losses, frames, and options also show up in how we make decisions across domains such as life milestones, social relationships, and purchasing behaviors. For example, the attraction and compromise effects demonstrate how seemingly irrelevant choices, called dominated options, can actually predispose us to choose among the other intended alternatives by virtue of just being there. For example, imagine that you’re going to order a drink from a shop (tea, coffee, shake, lemonade, whatever you want), and they ask you what size you want: small, medium, large, mega-ultra-jumbo. You feel quite thirsty that day but hesitate to spend too much, so you opt for the small size.

The shop employee tells you that for just a minimal fee, you can upgrade to a medium, which is about twice the size. This is the attraction effect: The small size is a decoy which, when compared to the medium drink, seems to be an inferior choice, so a minimally priced upgrade to the better medium size feels like a steal. You’re still unlikely to choose the large drink because it’s too far off in size and price, so the small drink increases your attraction to the middle one. Without the decoy, choosing between medium and large would simply depend on how thirsty you are. Meanwhile, no one will buy the mega-ultra-jumbo size because it’s too large and expensive for a single person to consume healthily.

However, it’s also a decoy, intended to capitalize on the compromise effect: Because this size is unreasonable, the large drink looks like a good compromise in terms of volume and cost, and would be especially preferable for a person who’s very thirsty. The medium and small sizes would feel like severe downgrades, so you opt for the large size, which is just more manageable than the largest drink. Either way, you didn’t really get a bargain by upgrading to medium or downgrading from mega-ultra-jumbo, because you spent money anyway. Similarly, reference points, no matter how arbitrary, can skew the inferences and decisions that we make due to anchoring effects. For example, when people are given arbitrary low or high numbers and then asked to estimate things like product prices, those given low numbers give significantly lower estimates, and those given high numbers estimate higher.

Going back to the drinks example: you find that the drink size you chose is sold at 89.50. Why the .50? Most people anchor the price against the next whole number, 90 (which is essentially what they’re paying), yet looking at the tag, they feel like they’re paying something in the 80s. This arbitrary .50 difference may seem inconsequential, but the comparison between the leading digits 8 and 9 can give people a false sense that they’re buying something at a cheaper price. That’s also why some stores will sometimes announce sale prices without actually dropping the price, just giving customers an arbitrary number which is supposedly the original price. This allegedly “original” price must be larger than the sale price, but it should also be a realistic anchor.

It shouldn’t be so small that it feels like nothing much was saved, and it shouldn’t be so large that it feels like an artificial price cut. Well, that’s marketing for you. Ultimately, decision-making, like problem-solving, aims to help us achieve an end state that meets our goals and gives us great satisfaction. That’s exactly why it’s sometimes hard to make choices: because of the fear that we won’t be satisfied, and because of our tendency for counterfactual thinking, where we consider what would’ve happened had we made different choices. The paradox of choice shows that more isn’t always merrier: We tend to be faster and more satisfied with our decisions when we have few options to begin with, since we have fewer alternatives to compare and we’re better able to predict our happiness with our choices. Moreover, because of the spreading of alternatives, we temper our dissatisfaction by rationalizing our decisions: We emphasize the benefits of the option we chose, while selectively focusing on the negatives of the ones we didn’t take.

Then there’s effort justification: even when something wasn’t worth the investment, we use the mere fact that we worked hard (maybe too hard, even) to get it to justify, after everything’s over with, that we wanted it all along. Psychological reactance says that the feeling of being forced or coerced to do or choose something makes us less willing to do as others say, because we don’t like the feeling of our freedom to choose being taken away from us. And because of the endowment effect, we place more value on things (regardless of their inherent worth) simply because we own them: They gain value for us because we decided to call them ours. Balancing the benefits of our judgment, reasoning, and decision-making abilities with the biases that sometimes compromise their accuracy and usefulness, we find that we’re remarkably able to make good enough decisions and reach good enough conclusions despite the limits of being human. That is, because of bounded rationality, we’re able to employ these heuristics and biases to our advantage by letting automatic cognition take over the small things that keep us alive, working within and even bypassing our limits, then engaging in controlled processing when our abilities and motivations push us to do so.

The fact that we rely on shortcuts and quick judgments of what information is important doesn’t mean we’re thinking incorrectly. Instead, we’re pragmatic and efficient, working hard when it matters, understanding that we don’t always have the mental or environmental resources we need to make fully rational decisions. In the end, we solve, judge, reason, and decide as best as we can amid an uncertain and dynamic world. Taking off from what we’ve learned about information processing and problem-solving, we looked at how judgment, reasoning, and decision-making are similarly important processes which involve taking information from the world and knowledge in our heads, weighing what is important and relevant, sometimes relying on heuristics when we’re not able or motivated enough to think deeply about things, and finally engaging in actions that help us reach the goals we want to achieve. Of course, given the limits of our cognitive abilities, motivational resources, social context, and environmental conditions, we’re not always able to engage in effortful processing - and that’s okay.

Drawing from our large knowledge and experience base, heuristics and top-down processing give us quick decisions and conclusions which, because of bounded rationality, are often of sufficient accuracy and usefulness. However, when our decisions concern more crucial domains of our personal and social lives, we opt to engage in systematic and deliberate processing, taking in more information and analyzing it more deeply, in the hope of reaching more reasoned and informed answers. With the end of this lesson also comes the end of our series exploring the main ideas and contexts that have shaped cognitive psychology in its long and interesting life as a field. We looked at how history, culture, technology, and thought have intersected to define what psychologists and researchers in allied fields study about human cognition. Then, we saw how attention and perception kick-start the process of transforming the world around us into information that we can make sense of.

After that, we found out how memory structures and processes help us remember, organize, and use information for the long term while guarding us against forgetting and distortions. Next, we talked about how we use our memories and knowledge to categorize and communicate about the world around us. And finally, following the belief that thinking is for doing, we saw how we take new information and past experiences to solve and decide our way to reaching our goals, now aware of the limitations and possibilities of human cognition. With that, this has been cognitive psychology - a journey into our mind’s straightforward paths, messy roadblocks, and interesting side-routes. Now go out there and don’t stop thinking. Thanks for watching!