In this two-part episode, you will get a better idea of how we use data as humans: how our brains analyze the data we acquire every day in powerful and effective ways, and the mistakes we might make along the way. Learn more about the four biases and what you can do about them:

  1. Pattern-seeking – What you can do: Triangulate your data
  2. Causality – What you can do: Experiment more
  3. Confirmation bias – What you can do: Ask “What data would change my/your mind?”
  4. Attribution – What you can do: Keep your language within the data

Knowing these biases, spotting the signs to look out for, and knowing what to do gives us a chance to become better at decision-making, not only at work but also in our personal lives.


What You Can Do

  • Work on identifying the red flags from the four bias habits so you recognize when they come up.

Resources:

Alexandra’s LinkedIn – https://www.linkedin.com/in/alexandramannerings/

Using Data While Human – Presentation


Auto-generated transcript of episode 59

Alexandra: I want to start by making it very clear. Our mind is unparalleled in its ability to receive, parse through, and act on incoming information. After all, we are exposed to millions of bits of data from our senses every second of the day, and without any conscious effort at all, our brains can pick out the tiny fraction that’s important and can understand it, and then apply it.

Too many conversations on bias share what’s bad about the way that we think, making it sound like we are broken and flawed. We do need to understand that we are extraordinarily powerful and it’s often the very things that make us so powerful and high functioning that get called out as bugs in our system as if they need to be squashed out of existence.

But we can’t change the core setup of our minds. And not only can’t we, we shouldn’t want to. Instead, we can find ways to mitigate the downsides of these strengths while embracing who we are as humans. Consider, for example, that computers still can’t match us in image recognition. Even with the incredible advances in AI we’ve seen recently, and with the best technology today, it takes hundreds of thousands of images for computers to get good at telling the difference between, say, cats and dogs.

You have to say, this is a cat and this is a dog, over and over again in all sorts of different contexts: inside, outside, different backgrounds, different colors. And it’s only recently, very recently, that this can be done with an accuracy and speed that even gets close to our own ability to make these distinctions.

In other words, as we help computers work more like our brains, with neural networks and advanced computing, they’re getting more, not less, powerful. And my kids, who have grown up with cats and dogs, could recognize and correctly identify the difference after seeing only a couple. Think about this: they’d had exposure to five or six real dogs and real cats, plus some very different drawings of them in books.

At 15 months old, they knew what made a dog a dog and a cat a cat. Yet computers need so much more training to make distinctions that clear. That extraordinary ability to make sense of the world in ordered, categorical ways means that at a glance you know whether to throw a tennis ball or a ball of yarn to a four-legged creature you’ve never met before.

So we are pattern-making champions, and that means we can make sense of the world very quickly. The downside, of course, is that we find patterns when there aren’t any there. Being so good at making patterns lets us make sense of the world quickly, but it’s bad in the few situations when those patterns actually don’t exist.

So I was enjoying the fact that the other day my son asked me why it looks like there is a man in the moon. As a fun note, the last time I talked about this, someone told me that in China the story is that there’s a rabbit in the moon. But there are two answers here to why we see things when we look at the moon. The first, boring explanation is that there are craters that just coincidentally happen to resemble something like a face or a rabbit or whatever you see.

But the second part of it is that our brains are wired and primed to recognize faces. We actually come pre-programmed to find those kinds of patterns in the world. As infants, for example, it is really, really important for us to be able to understand what’s a human face and which humans we’re looking at.

Are they strangers? Are they family? Are they people we’re safe with? This wiring to see faces and other important patterns means we see them everywhere. We see Jesus in burnt toast and we see a man in the moon. The ability of our brains to come not as blank slates but pre-programmed to expect those kinds of patterns is what makes us so good at finding them quickly.

It’s why after five or six exposures we can make that pattern, instead of the hundreds of thousands that computers, which cannot be pre-programmed to expect anything, need. Our toddlers are still light years ahead of what our best computers can do, with this ability to create accurate categories from minimal information.

But being able to find patterns so quickly, while it helps us know more about our world, also means that we see them when they’re not there. We’re pre-programmed to find them, so when they aren’t there, we expect them and we will look for them anyway. We do this not just with Jesus and toast; we do it when we have data that is actually randomly distributed.

We expect to see patterns there, so even if the data are really noisy, all over the place, we will make them have patterns. A great example of this was in World War II. London was getting absolutely hammered by German bombers, and as these bombs fell all across the city, the Londoners started noticing that certain areas got hit really hard and other areas were left untouched.

Well, how could this be? Wasn’t Germany just trying to level London? Why would they leave some areas untouched? And they decided it must be because there was something really important to the Germans in those areas, like a spy, and they didn’t wanna kill their own spies that were embedded in London. And this seems pretty reasonable, right?

Except that the bombing was random. Guided missiles didn’t exist then. And so in a truly random spread of bombs, it would actually be really weird not to have some areas that were heavily hit and some areas that were untouched. After all, think about when you flip a coin. Does it alternate heads, tails, heads, tails?

No. You’ll get clusters of heads and clusters of tails, even if the coin is completely fair. That is how randomness works. The problem with us is that we expect random to mean uniform. And the moment we see anything different, areas with more numbers or fewer numbers, more events or fewer events, whatever it is,
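The coin-flip point is easy to check for yourself. Here's a minimal sketch (illustrative only, not from the episode) that simulates 100 fair flips and measures the longest streak of identical outcomes; far from alternating, a fair coin almost always produces streaks of four or more.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)  # fixed seed so the demo is reproducible
flips = [random.choice("HT") for _ in range(100)]
print(longest_run(flips))  # clusters appear even though every flip is fair
```

If random really meant uniform, the longest streak would hover near 1; in practice the typical longest streak in 100 fair flips is around six or seven.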

We make that mean something. We listen to the noise, that random variation of up and down, and we think that it’s music most of the time, right? The directions that we get from the world come in, like the worst phone connection you could imagine with words just buried under that static. And we’re so good at picking out those hidden words and getting the information we need from this crazy, chaotic world.

But it means that when we’re actually just listening to static, we still think there are words. So what can we watch for? How can we be aware of when we’re stuck in a situation where, in fact, there isn’t a signal to pick out of that noise? The first red flag is something with very high variability. If something you’re considering in your data changes a lot, that’s just the nature of the thing you’re watching.

Maybe you’re watching a very volatile stock, or you’re watching changes in your donors, and you happen to have donors who donate wildly different amounts. It’s gonna be difficult to tease out real patterns, real ones, not ones that you think are there just because we have this ability to create patterns.

If you don’t know how variable something might be, that’s a problem too. If you’re looking at a new set of information or data and you’re unsure, because it’s new to you, whether that variability is normal, how are you gonna tell whether you should look at a few days’ worth of data,

a few months, or a few years? So you need to be paying attention to identifying variability in your data. The next thing to look out for is when you’ve made a conclusion locally and you leap to applying it globally. There’s a joke about this: a philosopher, a biologist, and a mathematician go on a walk and see a cow in a field.

The philosopher says, oh, in this area cows are brown. The biologist says, this farmer has brown cows. And the mathematician says, look, that cow is brown on this side. One piece of information can lead to a number of different conclusions, and you have to pay attention to how broadly you are drawing those conclusions from the information that you have.

So are you extending those conclusions in a way that is justified by the information you have, or are you extending them too far? We have a strong tendency to take information that applies in a very specific situation and make a global assumption about it, because, again, we’ve decided this is a pattern and therefore universally applicable.

So for example, you send out an email with a new subject line to 10 donors and you get eight donations from those 10 donors. You could decide this is the best subject line ever and we should use it all the time. Or is the correct conclusion that this subject line worked really well for these eight donors?

Or is it somewhere in between: this subject line is very good for donors who meet these criteria? You have to start noticing how far you’re reaching from the information that you’re getting, and see if you’re making a pattern out of a coincidence or taking a big pattern from small facts. So if you recognize one of these red flags, what can you do?

The first thing is to be more mindful of your language and how you talk about the information that you have. This can feel pedantic, but it’s actually gonna help you recognize how much you actually know and how much you are extrapolating or extending beyond the data that you’re using. As an example, instead of saying, young kids love this program we’re doing, you could say, over the last year, kids aged five to 10 were staying enrolled twice as long as kids aged 11 to 17.

That’s actually the data you had on hand, and by using words that are tied to the data you actually have, you’ll be more conscious of how far you might be extending it. The next thing is to look for multiple sources of evidence about that pattern. The saying goes: once is nothing, twice is a coincidence, and three times is a pattern.

It’s not about having three individual data points. It’s about do you have three sets of data points that can help you reinforce this pattern from three different situations and contexts? Right? So if you think this program works on youth, great. Have you tried that in different locations? Have you tried that with different groups of youth?

Have you tried that program on different ages to see if it wasn’t in fact that it worked better with youth, but that your youth happened to be much more in one demographic than your older students, and it’s actually that demographic it works well for, not the age range, et cetera? Find multiple ways of triangulating the evidence for your pattern to see if it persists.

Then in situations where you have high variability, there are actually statistical methods you can use to still tease out meaningful patterns. These might not be ones that you yourself know, but you can work with somebody who could help you identify those sorts of stats that would help you manage noise in your data so that you don’t jump to conclusions that aren’t justified actually by the numbers.

All right, so trait number two. We know how to make things happen, and this is why we live in one of the most incredible ages humankind has known. We drive forward advancement, scientifically and culturally, because we can figure out how to actually make things happen.

The downside is that it means we create causal connections when they don’t exist. You can only make things happen when you figure out that doing X makes Y happen. That’s how we drive that advancement. But it gets in the way when we think that Y causes Z and in fact there’s no connection whatsoever.

So there’s a great website called Spurious Correlations that takes real data and shows how closely these things can move together. And I love showing these because the data pattern here is real. These aren’t falsified numbers. These are actually the numbers, and they move together so tightly.

They have something called correlation: we see a relationship where when one goes up, the other goes up, or an inverse relationship, where when one goes up, the other goes down, and it always tracks. But when we look at these examples, there’s no actual way they could be causally connected. They are only correlated.

The age of Miss America is correlated with murders by steam, hot vapors, and hot objects. No, there’s no causal mechanism there. There’s no way that one of those things could influence the other, but the pattern is incredibly compelling. It’s easy for us to see here that these are what’s called spurious.

They look true but are clearly wrong. We can tell because I can look at that and say there’s no possible way. But we get ourselves into situations where we can think of a way that two things that are correlated might actually be causal, and we jump to the conclusion that because they move together, because they occur at the same time or increase or decrease together, they must be causally related.

The very tragic example of this is that in 1998, Andrew Wakefield and several co-authors published a scientific paper with falsified evidence, made-up evidence, that the MMR vaccine caused disorders on the autism spectrum. Eventually the publishing journal pulled the paper when it recognized that the data had been made up, and huge studies have been done since then on hundreds of thousands of children showing that there is zero causal relationship between receiving the vaccine for measles, mumps, and rubella and a child developing any symptoms on the autism spectrum.

But this thought continues to persist to this day, and in fact, people are still making decisions about whether to vaccinate their children because of this false idea. Now, there are a lot of facets to this issue, not the least of which is how we interpret and relate to neurodivergence, but one is relevant to what we’re talking about today.

There’s a significant reason why this belief persists: the symptoms of autism spectrum disorders often show up around the same time that children get their first round of the MMR vaccine. It is a complete coincidence; both things just happen around the same age. But it is very compelling to adults whose children seem to be, and I’ll put this in quotes, "developing normally," and then suddenly start exhibiting completely different behavior than they had before.

That’s why we must treat correlations as nothing more than coincidence, no matter how suggestive they might seem, until we have strong evidence for causation rather than just correlation. So what can we watch for? Honestly, anytime you hear somebody make a causal claim, you should sit and question it for a second.

Whether it’s a headline about bacon or coffee causing cancer, or an advertisement saying our new CRM increases donations by 20%, any causal statement should make you at least raise an eyebrow. You should stop and think about it. You should especially question any supposed causation when the primary association between the thing happening and the thing it’s supposed to cause is temporal, meaning they happen at the same time.

Lots of things happen at the same time because other things cause them both to happen at that same time. A classic example is that murders go up along with ice cream sales, and they drop when ice cream sales go down. They rise together and fall together, so they have this temporal relationship.

Things happen at the same time. The problem is that both of these are driven by temperature and weather. Apparently, the more people get out and are around other people, the more likely they are to kill other people, but they’re also more likely to buy ice cream. So you have to be very suspicious of things

Where the main association is that they happen at the same time, even if they consistently always happen at the same time, there could be something else completely different that’s making them happen and they have nothing to do with each other.

The next thing is to be really careful about easy or only tangentially related measurements of things that are really complex. Now, as a data person, I often have to lean on suboptimal measurements, things that maybe aren’t exactly what we wanna be measuring, because the world is a really complicated place.

But you wanna be careful if those really easy measurements of something really complicated are telling you that there is a supposed causal relationship, because it’s hard to truly tease out causation when you’ve oversimplified the situation. For example, it’s very easy to take your temperature if you’re feeling ill and decide whether that means you’re sick.

A high fever is a clear and often accurate indication of illness, but we can get the causation wrong here. That correlation’s very tight: if I’m running a 103-degree fever, I’m sick. But we oftentimes jump to the idea that the fever is causing the illness.

I have a high fever, it’s making me feel ill, I’m gonna treat the fever and that will treat the illness. But in fact, there’s a lot of great research coming out now that fever is your body’s way of fighting the illness. And that for some illnesses, if you let the fever run for a couple of days, it will shorten the length of time that you are sick.

So illness is a very complex situation, and fever is very easy to measure, so we have to make sure that we’re not using simple measurements to jump to conclusions about how things are connected. So what can we do if we notice one of these three red flags? Again, like we talked about with our first trait, the pattern-making:

being more careful in the language you use will help you be more aware of what you’re stating. So I’m gonna introduce you to a phrase that statisticians and scientists use all the time, which is "associated with." Rather than saying fevers cause illness, we would say fevers are associated with illness.

Or rather than saying high social media engagement causes more donations, we can say we notice that people who engage with us on social media are associated with a higher chance of donating, but we don’t know which way it might go, or whether something else entirely is driving it. When we use "associated with," it helps us remember that we might not have great evidence for causation, just that we see two things that happen together.

The next thing, just as we triangulated the evidence for our patterns: see if you can repeat it. If you think flipping this switch makes that thing happen, does it happen every time? Every time I do this, do I see this other thing happen? Can I do it in a different scenario or situation and also have it happen?

Or best of all, see if you can experiment. If you think this causes that, can you create a scenario that would help you show that, yes, in fact, changing this one thing results in a different outcome? For example, suppose you think that a new donor form on your website is going to cause an increase in donations.

You can do what’s called an A/B test. There are some very simple platforms where you can create two versions, the old version of the form and the new version of the form, and when a person comes to your website, they will randomly be shown one or the other of those forms.

And if you truly see a measurable difference in donations on your new form versus your old form, that is great evidence that the new form really does make a difference: it causes more donations. But if you just throw that new form up there, with no comparison, and you see an increase in donations, what if you put that new form up in December?

And you’re comparing it to November’s and October’s numbers. Well, it might just be December and year-end giving that caused the increase, and it had nothing to do with your new form. But if you had done the A/B test in December and you still see your new form performing better, then you can rule out that it was December making your donations go up, and conclude it had something to do with the form.
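As a rough sketch of how you might check an A/B split like this, here is a standard two-proportion z-test using only the Python standard library. The visitor and donation counts are hypothetical, not from the episode.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical December traffic, randomly split between the two forms:
# old form: 50 donations from 1000 visitors; new form: 80 from 1000.
z = two_proportion_z(50, 1000, 80, 1000)
print(round(z, 2))  # prints 2.72; |z| > 1.96 is significant at the usual 5% level
```

Because both forms see the same randomized December traffic, seasonality affects them equally, which is exactly why the A/B test rules out the "it's just year-end giving" explanation.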

The last thing is, again, to just remain skeptical. Ask lots of questions about causality. What actually is the evidence when someone says, my CRM’s gonna drive increased donations by 20%? Well, what’s your evidence for that, CRM company? Are you just telling me that when somebody buys your CRM, they get 20% more than before?

Did you let them try things in your new platform and their old platform? How do you know they weren’t already on an upward climb? They’ve been increasing donations, they see this trajectory, they decide it’s time to invest in a new system, and whether or not they had changed systems, that trajectory would’ve continued. You need to know what the evidence is, and you shouldn’t just accept causal statements. Alright, third trait here: you have to believe it to see it. We always like to say, I have to see it to believe it, but in fact, the way we work is, I have to believe it to see it.

The reason we have this trait is that it allows us to avoid overwhelm. If we had to constantly question every piece of information that came in (is this real? Does it fit the framework I use to process the world? Does it support that framework or not?), you would not function going through your day, because there is too much information to handle that level of scrutiny on everything coming in.

However, our great ability to maintain consistency while moving through a chaotic world means that we’re not so good when it comes to actually needing to update what we think.

So, we’re gonna play a little game here. And for those on the call, if you’re up for it, you can drop things in the chat for the question that I’m gonna ask you.

If not, you can just play it in your head and answer this. Imagine that you work at a toy factory, and you’re getting reports that there are more broken toys than usual coming off of your assembly line, and you need to figure out the cause. One of your quality control folks says, I think it’s the animal toys.

The animal toys are the ones that are driving these broken toys. You’re running late, and you’re trying to check before you report to your boss. You can only check two things. You’ve got a truck toy that you could look at to see if it’s broken. You’ve got an animal toy that you could look at to see if it’s broken.

You’ve got a bag that has one of the broken toys, so you could open that up and look to see if it was an animal or a truck, or you have a box that’s ready to go out that’s not broken, and you could check that toy that’s not broken to see if it’s an animal or a truck. You’ve only got time to look at two of these four things.

Which two would you look at to test your hypothesis that it’s the animal toys that are responsible for all the broken toys? You can drop it in the chat if you want, or write it on a little sticky note.

Alright, so the most common answer, and I’ve done this a bunch of times, is to check the animal toy to see if it’s broken, and to check the broken toy to see if it’s an animal, because both of those things would reinforce your idea that it is, in fact, the animals that are breaking. Makes total sense, right? We wanna find evidence that supports our hypothesis. Well, here’s the problem: if trucks are contributing to the breaking of the toys, none of the evidence you looked at would show you that.

All you’ve collected is evidence in support of your statement. You haven’t tested it against something that could disprove the statement that it’s only the animal toys responsible for needing all the repairs. So the most effective way of doing this would be to check the broken toy, right? Is it an animal?

And then to check the truck to see if the truck is broken. The reason we don’t think that way is because we don’t like trying to disprove ourselves all the time. What we have is something called confirmation bias. In confirmation bias, instead of thinking about things that would disprove the position we have, we continually seek out information that will uphold and confirm what we believe.

One step worse than that: it’s not just that we don’t like seeking out information that could challenge us. We will much more readily discount information that does make it to us if it contradicts the beliefs that we already have. The upside to this is that it helps us maintain our own sense of confidence and security, and avoid an incredibly challenging mindset called cognitive dissonance.

When you have two things in your head that don’t line up, that’s very disorienting. It makes it hard to navigate through the world. So we have confirmation bias to help us hold ourselves together and go through a world that oftentimes just doesn’t make sense, but it means that when it is time to update some of our beliefs and our perceptions about the world, it’s very hard for us.

So what can we do? What do we need to watch out for when we’re slipping from comforting sense-making, having the world actually function for us, into the dangers of confirmation bias that keep us from being able to fix a belief that is holding us back or is no longer correct? The first one is a little disconcerting.

Anytime you’re like, oh yeah, that totally makes sense, that actually is a red flag for confirmation bias. Anything new that hits your brain where you’re like, yep, fits in perfectly with my worldview, just makes sense? You’re probably evaluating it from a point of view of confirmation bias rather than actually processing that information.

Now again, you don’t need to reevaluate every single piece of information that comes in. But if it’s relevant to an important or complex decision, and you’re like, yep, all that data makes sense, you may wanna take a step back, slow down, and say, hang on, am I legitimately looking for information that might challenge me, or am I only sticking with stuff that makes sense?

Then, when you go out and find the data that actually does challenge your position, or challenges what naturally makes sense to you about a position, you’ll find yourself having this knee-jerk reaction of, oh, there must be something wrong with that data. No, no, no, I know our donor numbers had to have gone up, this new CRM was definitely supposed to work.

The fact that we see donor numbers going down must mean there’s something wrong with the system. If you find yourself immediately discounting challenging information as it comes in, that’s also a flag for confirmation bias. So what can we do? I have a phrase that I love, though it does sometimes annoy the people I talk to when they’re very convinced of something. Instead of trying to disprove what they think right then, I ask them: what data would change your mind?

You’re very sure that your new website is going to drive engagement in your new program. What data, if you saw it, would change your mind that the website isn’t going to improve engagement with this program? And I have that person tell me what it would take, because then I know ahead of time they’ve stated: here’s the bar it has to reach.

If I have that data, if I go find it and bring it back and show them, they’re much more likely to say, wow, okay, yes, you’re right. I said that if I saw that, it would change my mind, and you just showed it to me. Whereas if you don’t ask that and you bring them data that shows it doesn’t work, it’s much easier for them to discount the incoming information, because they haven’t primed themselves to accept it.

This works on yourself too. So when I’m really convinced of something, sometimes I have to sit back and be like, okay, hang on. What would make me stop believing this? What would make me change my mind on this position? And then I have to go out and get it. I have to go check that thing. It’s like checking the truck.

You’ve gotta go find out: does that information exist? If I’ve said this is what it would take, is that out there? And you can formally do this in groups where you’re having discussions, by assigning what I call the devil’s analyst role. They talk about how the role of an ombudsman was to always disagree with whatever the leader was suggesting.

They have to play that devil’s advocate: whenever something is proposed, they say, well, what about...? and present the other side. The data equivalent of this, the devil’s analyst, is to ask: what data would disprove this position? Do we have it? What’s wrong with whatever data is being presented?

What are the limitations of it? So there’s always somebody there thinking about disproving, rather than everyone just being comfortable proving things we already expect. The more you can get comfortable with conflicting information, the easier those first two steps will be. So I recommend something like exposure therapy for what you’d rather avoid.

If there’s something like spiders you really don’t like, hanging out around the tarantula at the zoo, even if it’s behind glass, can help you get more comfortable being around spiders. Same with information that you disagree with: the more exposure you have to high-quality information that you disagree with, the more comfortable you’ll be in that space.

I hope you enjoyed the first half of this presentation about using data while human. Tune in next week and we’ll cover the second half and the remaining challenges that we face as humans navigating this complex world. If you would like to see the YouTube video recording of this presentation, download the slides, or link to the second half of this recording once it’s out, you can do so at our show notes page, which is heartsouldata.com/usingdatawhilehuman, all one word.

Please also feel free to leave me comments about your experiences with some of these strengths and weaknesses, and if you have any questions or places you’d like me to dive a little deeper, I love hearing that as well. Thank you so much for your time today, and I wish you the best on your analytic journey.

Click to read the auto-generated transcript of episode 60

Alexandra: Today we are continuing from where we left off in episode 59, discussing how to use data while human. So if you haven’t listened to episode 59 and part one first, hit pause on this episode. Head back there. Catch up to here and come back and join us where we are going to continue the discussion of the incredible things that our brain can do and how sometimes that can lead us astray.

We’re covering the red flags to watch out for so that you can avoid making these same cognitive mistakes over and over again.

Hello, and welcome to Heart, Soul, and Data, where we explore the human side of analytics to help amplify the impacts of those out to change the world, with me, Alexandra Mannerings.

The last part here is a really specific tactical thing that you should do with big decision points when you’re going to use data to evaluate the outcome. So you bring in a brand new program, you start a whole new pilot. What defines success in that pilot? What defines success or failure in that program?

What would you have to see in order to want to have that program or pilot expand or continue next year? Decide that before you get those results. There’s a great story about a Scottish physician named Archie Cochrane. He pioneered the use of evidence in medicine, and he was working in a cardiology ward.

He was practicing just after World War II, and he became quite suspicious of the overwhelming "best practice," and I’ll put that in quotes, of keeping patients who had suffered a cardiac arrest or cardiac event in the hospital for two weeks, or whatever they were doing. He felt that was actually potentially harmful to the patient.

And so he went to his fellow cardiologists and said, I’m not sure being in the hospital is actually helping their outcomes, and I think we need to evaluate this. We need to understand this. And his colleagues were like, I don’t know what you’re talking about. You can’t go experimenting on people; you’ll kill people, and this will all be on your head.

But he really believed that he needed to evaluate this and understand it. And so he started a randomized trial. He said, all right, half of my patients I’m gonna send home after we get them stabilized, and we’re gonna let them recover at home. And the other half I will let recover by the standard procedure, here in the cardiac unit.

And he tracked what happened. Then he collected all the data, went in front of his cardiology colleagues, and said, here’s my data, right? We’re gonna talk about this data. And he put up on the chalkboard a chart that showed the number of patients in each group and the number of deaths in each group.

And in the chart he put up on the chalkboard, the number of deaths in the group that had gone home was higher than in the group of patients who had stayed. And he goes, what do you guys think about this data? And they all explode. And they’re like, we told you, Archie, we knew you were killing patients.

We can’t believe you did this. And he says, all right, so you believe in the data. You’re happy with this data. You’re happy with these benchmarks, right? The fact that this number of patients dying would lead you to assess that that was the bad choice. And they’re like, absolutely. And he goes, great, because I switched the headers of the columns.

He goes, the group that’s dying more often is the one that’s staying in the cardiac unit, and the group that’s going home is the one dying less frequently. So if you just told me that you would make a decision about where to send patients based on this data, then you’d better be making the decision on where you send patients based on this data.
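The logic of Cochrane’s chalkboard chart can be sketched in a few lines of code. The counts below are invented for illustration (the transcript doesn’t give his actual numbers); the point is that which group fared worse is a property of the data, not of the labels anyone argues about.

```python
# Hypothetical counts; the transcript doesn't report Cochrane's actual figures.
groups = {
    "recovered at home": {"patients": 40, "deaths": 3},
    "stayed in cardiac unit": {"patients": 40, "deaths": 7},
}

def death_rate(group):
    """Deaths as a fraction of patients in a group."""
    return group["deaths"] / group["patients"]

# The comparison depends only on the numbers, so swapping the column
# headers cannot change which set of counts has the higher mortality.
worse = max(groups, key=lambda name: death_rate(groups[name]))
for name, group in groups.items():
    print(f"{name}: {death_rate(group):.1%} mortality")
print("Higher mortality:", worse)
```

With these invented counts, the home group shows 7.5% mortality and the hospital group 17.5%, mirroring the reversal in the story once the labels are read correctly.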

By setting those benchmarks and those decision points before you actually get the official results, you force yourself to honor them, rather than moving the goalposts, lowering your expectations, or changing how you measure success. Right? If you agreed on that beforehand, then when those results come in, you’re gonna be less susceptible to confirmation bias, less susceptible to saying, well, I wanted this program to work, or, I knew this program was never gonna work.

You’ll follow what the data tell you more effectively.

The corollary to this I learned because my husband’s actually a detective, and he points out that the mindset you have when you approach a question will impact how you go about evaluating it. So in the first case, I told you that you were responsible for making toys and you wanted to figure out which ones were broken, and we all have a tendency to try to confirm our points of view.

But if I tell you that, like my husband, you are a detective, and you are walking into a bar and you want to know if everyone drinking alcohol is over the age of 21, and you can only check two things, right? You can check the drink of the 25-year-old, you can check the age of the person drinking Coke, you can check the age of the person drinking beer, or you can check the drink of the person who’s 16.

You automatically go, well, I’m gonna check the age of the person drinking beer, and I’m gonna check what the 16-year-old is drinking, which is the opposite of what we did with the toys. You were trying to disprove here that everyone drinking alcohol is over the age of 21. Right? Before, we naturally tried to confirm our belief that it was the animal toys being broken and not the trucks.

Even though I stated the question as, is everyone drinking alcohol over the age of 21, instead of checking what the 25-year-old is drinking, we know we wanna check how old the beer drinker is and what the 16-year-old is drinking, because those could disprove our rule. So having that mindset of being the detective, of trying to disprove the things that are really important, will help you seek out that information and be less susceptible to confirmation bias.
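This bar scenario is a version of the Wason selection task, and the disprove-first logic can be written down directly. The drink names and the function here are made up for illustration; the code simply encodes the rule that a check is only worth making if it could reveal a violation.

```python
# Rule under test: everyone drinking alcohol is over 21.
ALCOHOLIC = {"beer", "wine"}

def worth_checking(age=None, drink=None):
    """Is it worth flipping the hidden side (age or drink) of this patron?

    A check is informative only if what we already see leaves open the
    possibility of a rule violation.
    """
    if drink is not None:            # drink is visible, age is hidden
        return drink in ALCOHOLIC    # only an alcohol drinker can violate the rule
    if age is not None:              # age is visible, drink is hidden
        return age < 21              # only someone under 21 can violate the rule
    return False

# The two checks from the story are the only informative ones.
assert worth_checking(drink="beer")      # beer drinker might be under 21
assert worth_checking(age=16)            # 16-year-old might be drinking alcohol
assert not worth_checking(drink="coke")  # a Coke drinker can't violate the rule
assert not worth_checking(age=25)        # a 25-year-old can't violate the rule
```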

Alright, so our last one here is our tendency to ascribe assessments of goodness or badness based on how much we like things or people, right? So good people do good things. Bad people do bad things. Good programs have good outcomes. Bad programs have bad outcomes. Programs I like always have good outcomes; programs I don’t like have bad outcomes. This is called attribution bias. It’s great for maintaining relationships. It’s what allows us to forgive our friends when they make a few mistakes. It’s what allows us to stay safe from people who have hurt us before. But it’s bad when we have to evaluate things more objectively, like a program that doesn’t innately have goodness or badness.

So attribution theory, which is what describes this idea of attribution bias, is when we give a why to something beyond what the facts actually say, and that why is based on how we personally feel about the thing. So think about, you know, you have a donor that has donated five years in a row, and then that donor doesn’t donate this year.

What are you gonna think about it? Are you gonna go, huh, okay, donor donated five years very consistently, same time every year, didn’t donate this year, okay? No. What you’re gonna do, if you like this donor, is go, I wonder if something happened to that donor. Maybe they lost all their money and they can’t donate.

What if they’re in the hospital and that’s why they couldn’t donate? Or, if you really question yourself, if you had some really bad phone calls that went down or whatever, you’d go, I bet they didn’t donate because I screwed up. Or maybe you don’t really like this donor, and you’re like, I knew they were gonna flake. They flaked this year because they’ve been threatening to do that.

I always thought they weren’t gonna stick with us, right? And you make up a why based on what you think about that person, or based on what you think about yourself in the situation. But the only evidence you have might be that there isn’t a donation from them sitting in your bank account. A corollary to this: my brother told me a story about how he had met a girl, and the girl had given him her number.

So he texted her to say, hey, do you wanna meet up again? And she ghosted him. And he was feeling so hurt and so burned, and was like, I thought she liked me; I can’t believe she ghosted me. And then, he’s a rocket scientist, so he thinks about it for a little bit, and he goes, you know what? I immediately assumed that the description of the situation was "she ghosted me," but that’s really attribution bias.

All I have is I texted her and I did not receive a text back. It could have been that she did text me back and I somehow didn’t get it. Or it could be that I sent a text and she didn’t get it. And so he was brave enough to text her again, and sure enough, when they met up again, they compared text threads and she actually never received his text.

So be clear about how you’ve jumped to a conclusion by ascribing a why based on what you think happened. My brother is an incurable introvert; he’s sure no one ever wants to talk to him. So, not getting a text message back immediately, he assumed he’d been ghosted. But he didn’t actually have data that supported that position.

It was based on what he thought and felt about the situation. So how can we be aware of when we might be falling for attribution bias? The first thing is that we have to watch for all those same things that we see in confirmation bias. So if any of your confirmation bias flags gets raised, right, same thing for attribution theory, because attribution bias is in many ways just a subcategory of our general confirmation bias, right?

We look for information that reinforces what we already think about people or things or programs. And so if a piece of information comes in, we’re gonna interpret it in a way that validates what we already think about it, rather than giving it the chance to actually present new information in a new light.

The next is: anytime your conclusion involves what other people think or feel, it’s almost guaranteed to be attribution bias, because we can’t really know what others are thinking or feeling very well. If you’re really, really close to somebody, you can sometimes get pretty good at understanding what they’re thinking or feeling.

But when we’re talking about donors, when we’re talking about participants in programs, even if they fill out a survey, that’s only a partial piece of information about how they might really think or feel. And certainly, if you haven’t specifically asked about a particular kind of thought or feeling, if you’re just basing what you think they’re thinking or feeling on an action they took, it’s attribution bias, almost guaranteed.

So pay attention to when you jump from "here’s the action that a person took" to "here’s the why; this is what they were thinking and feeling," because that is a great bridge to attribution bias. The next is paying attention to when you put a value judgment on something: they’re spending too much money, or they’re not trying hard enough to get volunteers.

That is probably an attribution, because you’ve unconsciously assigned some kind of benchmark that the people are supposed to reach. But what evidence do you have that it’s the right benchmark? If you do have it, right? They were supposed to get 150 volunteers and we only got 20 volunteers, and they only worked two hours instead of 10 hours on that project.

Great, they failed their benchmarks, so you could say they didn’t do enough. But most of the time we don’t have those kinds of clear benchmarks, and we’re putting a too-much-or-not-enough, a good-or-bad assumption on it based on how we feel rather than evidence that we have. The problem with this is that if we like the people or the programs, we’ll cut them some slack and say, oh, that was good enough.

If we don’t like them or we don’t like the program, or we don’t like what’s going on, we’ll always find them wanting.

The last thing you have to watch out for with attribution bias is when you find yourself sort of comforting yourself by making sense of what’s going on, where you explain the situation because you are feeling anxiety or discomfort about something being unknown, or about not understanding why something happened.

Now, we need to do this in order to survive, right? We talked about, in most of the situations through here, that there’s a relationship to us being able to handle a very chaotic and senseless world. In order to actually still make things happen, and control the parts of our lives that we can control, we have to make sense of a world that doesn’t make sense.

We need to do these things in order to maintain our relationships, right? Being willing to assume that our friend meant the best when a mistake happened allows us to continue a very important and valuable friendship in our life. The problem, of course, is that when we’re dealing with really complex or nuanced decisions, justifying why something happened because it makes you feel more comfortable becomes a red flag. And so we have to make sure that we pay attention to those things, in those decisions, that make us go: wait, I knew it.

What else could I expect? Right? Just like we had with confirmation bias, the things that make us feel comfortable about the order of things can also be misleading when it comes to making decisions that might be complicated or challenging. So what can we do? It’s nice to be aware of the fact that we wanna make ourselves feel comfortable by making sense of things in a way that fits what we already believe about the world.

But what can we actually do when we have to make more complex decisions, where we wanna be able to more openly evaluate the data that’s in front of us? The first is to get clear about what’s a fact and what’s a judgment, right? What actually are the facts of the situation, and what have you made interpretations about?

And if you start to listen to how other people talk, you’ll find that people will seamlessly go from a fact to a value judgment without realizing it. And this is anytime you have two people arguing and they’re both throwing facts at each other, and they’re not getting anywhere half the time.

It’s because the fact has been transmuted into a value statement, right? They’ve taken a core fact, but they’ve applied their lens of values onto it, and so now it’s slightly different than the original fact. So an example that you might face: you’re sitting around the table, and the budget person’s like, we’ve spent $150,000 on this program.

And the CEO goes, all right, well, we need to figure out why we’re spending too much on this program. That immediately has gone from a statement of fact to a value judgment, right? We talked about that; one of the flags is the "too much" or "not enough." But it’s very easy for a group sitting around the table to just move on and accept that assumption of, we are spending too much on this program, let’s fix it.

But the fact that was presented was an amount of money being spent on a program. So when you can get clear about what’s the fact and what’s the value judgment, then there’s a space in between where we say: what’s the right amount to spend on a program? What did we want? What are we capable of spending on a program?

What outcomes did we want? Are we getting all the outcomes that we want out of this program? Maybe we’re getting four times the outcomes from the program we spend $150,000 on compared to the one we spend $50,000 on, so actually it’s a very cost-effective way of doing this, right? By separating the fact from the value judgment, you’re able to insert new pieces of information or assessment along the journey to that value judgment.

I’m not saying you don’t get to the value judgment; you just wanna be clear when you’ve made the value judgment, and feel satisfied that you have enough evidence to justify that statement. When you’re summarizing or presenting information, to yourself or to a group, see if you can stop yourself before you get to the "because" or the "why."

And allow everyone the chance, or allow yourself, to just look at the information first. Almost always, your first pass at the "because" will be your attribution bias. It won’t actually come from the data that you see. So you need to give yourself space to really look at the numbers before you move on to the "because."

And then, when you do finally get to the statement of why we think it’s this way, you wanna come up with multiple explanations. Even if you’re really sure it’s one particular reason, go ahead and come up with more than one, and see if you can disprove them, right? The one you can’t disprove, you can move forward with as the most likely explanation of why.

Then lastly, you’ve gotta get comfortable with not knowing exactly why. You can make your best guess, but sometimes you just have to feel okay with saying, I don’t know why our consistent donor didn’t donate this year. I just don’t, and I’m gonna be okay with that. I could make up a reason, but maybe I’ll just accept that it happened, and if I get more information in the future that helps me better understand why,

I’ll update my feeling about it then. But I’m gonna just accept that, for the moment, I don’t know why.

So there were our four features that are the sort of dark side of the superpowers that we have as people. To review: we talked about pattern seeking, right? We see things that aren’t there, and that comes out of our amazing ability to find information buried in noise and chaos. We also have a tendency to jump to causality, right?

We assume influence from coincidence or just a secondary connection, like things happening at the same time. The upside of this is it means that we make things happen in the world that have never happened before. We have driven incredible scientific progress. We’re able to control so much of our own lives and make them what we want.

Because of this, because we figure out: when I do this, that happens. When I talk to donors this way, I’m able to double the amount of donations we get. Those sorts of abilities to find causal relationships make us very powerful. The problem is we extend it a little too far. We have a tendency to extend it in ways that confirm what we already believe, right?

Confirmation bias. We seek support for what we already believe. We look for information that validates and justifies the positions we already have: I have to believe it to see it. This is really important, because it allows us to go through the world without constantly questioning who we are and what we do and why we do it.

But when it comes time to finally update some of our views, we’re a little bit slow to do that, and we need to be more open to information that might contradict what we believe. And we talked about some ways we could do that. Lastly, as a sort of cousin of confirmation bias, we have attribution bias, where we assign a why, an explanation of feelings and thoughts, beyond the evidence of actions or outcomes that we actually have.

Really nice for maintaining our relationships; a little bit problematic when we’re dealing with things that are more objective, like a program that we’re trying to evaluate. So what can we do to help manage these situations, where we are pattern seeking or seeing causality that isn’t there?

The first is: triangulate your data, right? Try to find multiple examples, in different situations, from different contexts, that continue to uphold the pattern that you’re seeing. When you have that, it helps you say that this is a real, persistent pattern and not just randomness that looks like the rabbit on the moon. But just because the pattern is real, we shouldn’t assume that means it’s causal, right?

So when we wanna take the next step, from saying this is a real pattern out in the world, we can rely on it, it keeps happening, that doesn’t mean we wanna assume that there’s causality in it, that one of those pieces causes another one of those pieces in that pattern. And a way to get better about this is to experiment more, rather than just continuing, as we’ll talk about with confirmation bias, to find correlative evidence, things that we see happening together.

See if you can actually set up a situation like Archie Cochrane did, or like A/B testing on your website. And I’m not saying experiment on people in the hospital; I mean, can you randomize things that you do? So if you think a program might actually have a better impact on a certain group, can you accept applications from 50 of those kinds of people, and then have half of them get the old program and half of them get the new program?

Things like that allow you to test, in an experiment, did this actually make a difference? Can we actually see that the new write-up is more effective for younger kids than the old write-up, rather than just saying, well, when we switched a hundred percent, it seemed to go up? There could be many other things causing that, right?
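A minimal sketch of that kind of randomization, with made-up applicant IDs: shuffle the pool and split it in half, so that who ends up in which program is decided by chance rather than by anything that could also affect outcomes.

```python
import random

def randomize(applicants, seed=42):
    """Randomly split a pool of applicants into two equal-sized arms."""
    pool = list(applicants)
    random.Random(seed).shuffle(pool)  # fixed seed so the split is reproducible
    half = len(pool) // 2
    return {"old program": pool[:half], "new program": pool[half:]}

# 50 hypothetical applicant IDs, 25 per arm.
arms = randomize(range(50))
print(len(arms["old program"]), "in the old program")
print(len(arms["new program"]), "in the new program")
```

Because assignment is random, a later difference in outcomes between the two arms is much harder to explain away as self-selection.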

So if you really, truly need to identify causality, you have to upgrade the quality of your evidence from correlational to more experimental. Sometimes that is not possible, and I get that. But when it is, consider it. And when you do get evidence that says a program’s more effective, or a program didn’t reach a bar we were expecting, or our new intervention increases donations, or whatever it might be, we have to be careful that we’re not analyzing that data through our lens of confirmation bias.

And so, ask yourself: what data would change my mind, or change your mind, right? So if we wanna do an experiment to see whether our new program is more effective with a new handbook than the old handbook, what data would we have to see, given that we’re expecting it to be effective, that would make us think the new one didn’t make that big a difference?

How much of a difference are we expecting to see, and in what measures do we expect it? You know, if it’s reading literacy, are we expecting them to be able to read, say, level two-G books? Like, kindergartners just got into these levelings of books. Or are we happy if they only move up one level, to yellow, as an increase in literacy?

What would we expect to see to say that, yes, this made a marked difference in the literacy of participating children? You decide that beforehand. You’ve made it clear: if I expect this to work, and I see this data instead, it would change my mind. Then finally, for attribution bias, we wanna think about how we keep our language within our evidence, right?

Rather than adding "because," or "why," or value judgments (you know, this was too much, it was too low, this program was ineffective), let’s actually say what happened: the program worked on 10 of the 30 participants. That’s actually what happened. By saying it was ineffective, you’re putting a value judgment on it.

What was your measure for effectiveness? Maybe 10 out of 30 is great if you’re trying to do a really hard thing. So you wanna make sure that your language is aligned with the evidence you actually have, and that you’re not adding on levels of interpretation based on your values and how you think about things.
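One way to keep reporting language inside the evidence is to separate the factual summary from any verdict, and to refuse to say "effective" or "ineffective" unless a benchmark was agreed on beforehand. This is a sketch with made-up numbers, not a tool the episode prescribes.

```python
def summarize(successes, participants, benchmark=None):
    """State what happened; add a verdict only if a pre-agreed benchmark exists."""
    fact = f"The program worked for {successes} of {participants} participants."
    if benchmark is None:
        return fact + " (No pre-agreed benchmark, so no effectiveness verdict.)"
    verdict = "met" if successes / participants >= benchmark else "did not meet"
    return fact + f" It {verdict} the pre-agreed benchmark of {benchmark:.0%}."

print(summarize(10, 30))                  # fact only, no value judgment
print(summarize(10, 30, benchmark=0.25))  # 33% of participants vs. a 25% bar
```

The same 10-of-30 result reads as a success against a 25% benchmark and a failure against a 50% one, which is exactly why the benchmark has to be set before the data come in.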

All right. So, having summed all of that up, I do have a few resources for you. Nate and I were talking about podcasts before this. I do have a podcast, called Heart, Soul, and Data, that talks all about how community organizations and nonprofits can work with data, especially through the lens of how we do it as humans.

So, I actually have an episode about confirmation bias, as an example. And if you go to merakinos.com (and I apologize, it’s not "resources," take the "resources" out, it’s just "data while human"; I updated the link), what you will find is you can download all the slides.

You’ll see a link to the confirmation bias podcast episode, which you can listen to. And I’m happy to get any emails or questions, or anything that you might like to talk about or learn about. I’d be delighted to continue the conversation.

Now that we’ve spent about 20 minutes going through a lot of deep stuff, what sort of questions do people have? Comments, thoughts? Totally open to it. I am a data person, so I don’t mind people disagreeing with me either.

I have two questions for you guys, if no one has questions for me. The first is: was this presentation what you expected? Did you have an idea of what you thought you were gonna get, and did it meet or not meet that expectation? And my second question would be: do you see a place where you want to apply something that you’ve learned, either professionally or personally?

You go, oh, I have that same problem. I’m gonna use one of those new tools to approach it a little bit differently.

There is a question in the chat, Alex, which is: are there any resources for looking at data through a culturally responsive lens? An example might be how implicit bias may play into reviewing data. It’s a great question. So one of my favorite resources for that is a company called We All Count, run by a woman named Heather Krause.

And she has a whole data equity framework. The framework is publicly available on her website; if you go to weallcount.com, you’ll see the data equity framework, and she talks about its seven steps. And implicit bias is a big thing. What I will say is, again, implicit bias isn’t tied just to culturally responsive interpretation of data.

What I was trying to talk about here is that we all have to have some level of bias, right? Some level of interpretation of information that comes in, and filtering out of information that isn’t necessary, or we would not function. So we don’t wanna view the fact that we have biases as a sign that we are bad.

What we wanna do is make those implicit biases explicit, so that we can identify when they’re problematic and update or change them. So you’re never gonna not have biases. What we can do is get more effective biases, if that makes sense. And Heather Krause’s method is really great for making that implicitness explicit.

I also have another one, and forgive me, because I’ve done this with two different organizations. I’m 99% sure, Nate, that we did the data-driven values one. Yeah, that was the conference one, yep. So that’s another framework that I have, where, by identifying the values that guide your decisions, you can take those things that happen in the shadows, that happen behind closed doors or happen completely unconsciously, and surface them.

What are the values that we’re using to make this decision? Because no data can tell you what the right choice is. Data can help you understand what the impact of a given choice might be, but you’re evaluating that through the lens of being human, and you’re evaluating against a desire or a decision that is always entirely value-based. That is the only way you can make decisions.

So the trick, again, is not to get rid of those biases, but to make them explicit, so that you can understand when one of them may be harmful to particular people or in particular situations. I don’t know if we have a recording, because that was a live presentation, but on my podcast I do a summary of that values-based decision making.

It’s episode 48. So if you do find my podcast, you’ll see a "Values plus Data equals Win" episode, and that talks about that framework as well. Another question: what are some recommendations you have for presenting information to a group of stakeholders, a board of directors, that has historically been swayed by confirmation bias and attribution theory? Right.

So you have the nice way of doing it, and you have the mean way of doing it. That’s why I like the Archie Cochrane story: if you can kind of catch people in their own confirmation bias, you get them to admit, yes, I believe this because of this data, and you’re like, great, so you find that kind of data compelling to make your decision.

You think it because of this data. Well, in fact, here’s the exact same kind of data, but it goes in the opposite direction, because you guys have been interpreting this completely wrong. Right? I don’t recommend that, because most of the time it will make people so defensive and so upset that even if they find themselves having to update, they’re not gonna do it.

At least, not in a way that will get you a positive outcome. They may even, out of spite, not want to continue on in the direction that you want, because you experience cognitive dissonance when you have to update a view that you hold out of confirmation bias, right? There’s no way to go from an old point of view to a new point of view without traveling through that cognitive dissonance period.

Meaning you have to have two conflicting things in your head at the same time, and it’s incredibly disorienting. So there’s a better way, in my mind, than the kind of brutal Archie Cochrane way, which is where most of us start, and I’m guilty of this. My husband constantly points out that I like to frame the argument and take apart all of his

points of view, and I’m like, here’s why I’m right. He’s like, this isn’t fun. I don’t enjoy having this conversation with you. So instead, what I think is really helpful when you’ve got people who are sort of processing things through a certain confirmation bias lens, which we all do, right? It’s not to say, oh, I have one group who’s doing it.

No, no, no, we all do this; it just may be that in one group, confirmation bias is inhibiting forward progress. What works instead is to find the shared value you have, right? If you’re in front of your board, you all believe in something; you’re there to drive the mission of your organization, right? You’re there to help children learn how to read.

You’re there to help end hunger. You’re there to make sure that all unhoused people have a home. So start with that because that can become your shared confirmation, right? Like you are all there to do that. And then you can say, what do we think would be a good measure for success on this thing? Can we all agree what we want to have happen?

Is it that we wanna make sure that this program houses at least 80 people? Do we think that we need to see those people stay in their homes for at least four months? What is it, right? And this goes back to setting the benchmarks before you present the data. Because if they all agree, yes, if I saw 80 people in homes, I would back this program.

And if previously they’d been saying, it’s not worth it, it’s not worth it, it’s too expensive, whatever, and you suspected it was confirmation bias getting in the way of them being able to accept this program: by agreeing that you’re all trying to achieve the same thing, by agreeing on how you’re gonna measure whether the thing you’re gonna do achieves that, and then showing them the data that it in fact achieves that.

Now you’ve sort of walked this journey together, on the same side of the table, where we’re like, hey, we all wanted to get there, and the GPS says we’re getting there. And it’s a much more peaceable, collaborative way of getting there, rather than arguing: this program’s great. No, it’s not. Yes, it is. Right?

You’re not gonna get progress as readily that way.

Thank you so much for your time today, guys. I really, really appreciate it, and I do see a few people talking about how they wanna, you know, bring this to presentations and stuff. Awesome. And if you want me to come share a talk about this, or facilitate a conversation about it, or whatever. You can tell, I love talking about this.

So any chance I get, I would more than welcome it. Let me know how I can support you as you take this out into the world and continue to do amazing things in your organizations and in your personal lives.

So across episode 59 and this episode, episode 60, we covered four critical ways that our brains can function at incredibly high speeds and with incredible efficiency, but also where they can lead us to make some recurring mistakes in how we navigate the world and understand the information coming in to us.

We've reviewed pattern-seeking, causality, confirmation bias, and attribution. These, you will notice, show up in so many different places where we make decisions, and I'd love to hear in your comments, or you can contact [email protected]: what are your experiences with these biases and strengths?

How do you start to notice where these are skills that help you, and where are you noticing that they hurt you and lead you to make mistakes? Thank you so much for joining me today. Again, if there is anything I can help you with in understanding how these biases impact you at work or in your life, of course, I love doing training seminars, having open conversations, and hosting coffee chats virtually.

So please feel free to reach out to me if there’s anything I can do in that regard. And as always, I hope that you have a wonderful next step in your analytic journey.

If you would like to see the video recording of this presentation, or if you'd like to download the slides that go along with it, you can do that from our show notes at merakinos.com slash using data while human, all one word, run together.

You have been listening to Heart, Soul, and Data. This podcast is brought to you by Merakinos, an analytics education, consulting, and data services company devoted to helping nonprofits and social enterprises amplify their impact and drive change through data. You can learn more at merakinos.com. That's M-E-R-A-K-I-N-O-S dot com.

Merakinos

Merakinos provides analytics as a service, data strategy consulting, analytic project work, and data fluency training for individuals and organizations seeking to build and strengthen analytic capacity and data-driven decision-making. We can work with your organization to identify the key analytic insights you need from your data, develop and maintain actionable dashboards or reports, and deliver them across your organization monthly, weekly, or on your schedule. Or if you would like to develop internal capacity, we can provide training on key platforms like Power BI, Tableau, or Alteryx.
And if you aren’t sure where to start, our advisory services can help you design and implement effective analytic strategies to amplify your impact.


Try It Now:

Thinking of bringing Alexandra in to lead a seminar for your organization? Connect with her at Merakinos!
