Learn from the mistakes of others; you can't make them all yourself!

To celebrate my 50th episode of this podcast, I’m sharing four major mistakes I’ve made in analytics and life. It’s part of a resolution I made for 2023: to share one mistake every Monday on LinkedIn for the entire year. I HATE making mistakes, but I am committed to learning everything I can from them, and hopefully to helping others learn something too.

In this episode, I talk about how I’m not the best at slowing down to make sure I got all the little things right; how I’d rather default to a passive decision than actively make one with incomplete data; how I love being the ‘data superhero’ instead of building consistent data practices and team capacity; and how I’m often a terrible listener.


What You Can Do

I haven’t solved these things yet, but I’m getting better. I’d love to hear from you if any of my mistakes and lessons resonate with you, or if you have your own lesson to share! You can contact me or connect with me on LinkedIn. Thank you for sharing this journey with me!


Episode transcript (auto-generated):

Eleanor Roosevelt was once quoted as saying, learn from the mistakes of others, because you can’t live long enough to make them all yourself. So to celebrate my 50th episode of Heart, Soul and Data, today I’m going to share with you a few of my biggest analytics mistakes, what I am trying to learn from them, and hopefully something you can learn from them as well.

So we’ll get a little personal. You’ll hear about things that I have screwed up along the way, and we’ll think about how we can find some lessons in those screw-ups so that we can continue to improve.

Hello and welcome to Heart, Soul and Data, where we explore the human side of analytics to help amplify the impact of those out to change the world. With me, Alexandra Mannerings. So, I spent a lot of my career as one kind of health care analyst or another, and one of the biggest mistakes I ever made was royally screwing up an analysis of emergency department visits and the days those visits happen.

The idea was that we were wondering whether we would see jumps in emergency department visits out of office hours, meaning when primary care centers are closed. When your doctor’s office is closed and you have an ear infection, or whatever might happen, you can’t get care in a more appropriate setting, and so you go to the emergency department instead. And we wanted to see whether we could find that in our data: across all of the emergency department visits we had records of, did we see spikes out of hours, on the weekends, and overnight?

So I spent a bunch of time cleaning up the data, getting it ready, doing the analysis, making a graph. And sure enough, we saw proportionally twice as many visits happening on the weekends for these unnecessary visits, right, ear infections, things like that, that could be treated in different settings. And so we went, Eureka! Look, the data shows this.

We knew it was going to show that. And we went all over talking about it, shared it all over the place. And then I had that moment where I was looking at how I had calculated it, and I realized I had screwed up the denominator for the different time periods, meaning that my denominators were too small in these out-of-office-hours periods.

And that’s what had made the rates of visits look so much higher for these unnecessary ED visits, visits that don’t have to be treated in an emergency department. I remember my stomach just falling into my feet when I realized that I had made this mistake. And it’s a small mistake, easy to make, but it had radically changed the findings, and I hadn’t checked my work closely enough to catch it before we released it.
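
To make the mechanics concrete, here is a minimal sketch in Python, with entirely made-up numbers, of how an undersized denominator can inflate a rate comparison like this one:

```python
# Entirely hypothetical numbers illustrating how an undersized denominator
# can flip a rate comparison: ED visits per hour, in vs. out of office hours.

in_hours_visits = 500         # visits during office hours (made up)
out_of_hours_visits = 900     # visits outside office hours (made up)

in_hours_per_week = 40        # e.g., 9-5, Monday through Friday
out_hours_correct = 128       # the remaining 168 - 40 hours of the week
out_hours_too_small = 64      # the bug: only counting half the period

print(in_hours_visits / in_hours_per_week)        # 12.5 visits/hour
print(out_of_hours_visits / out_hours_correct)    # ~7.0 visits/hour (true rate)
print(out_of_hours_visits / out_hours_too_small)  # ~14.1 visits/hour (inflated)
```

With the correct 128-hour denominator, the out-of-hours rate (about 7 visits per hour) is actually lower than the in-hours rate; the undersized denominator doubles it to about 14 and flips the conclusion.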

And I did feel like I wanted to just throw up when I found it, because we had been talking to people about this, we’d been sharing these findings, and I had messed it up. And it turned out that when I fixed it, the findings were much less compelling: much, much smaller differences in these numbers of ED visits for minor injuries or illnesses.

And so our discussion about this, the findings that we had, just got to be much smaller than the original analysis and visualization suggested. One of the things that I really learned from this is that I’m very good at solving complex analytical challenges. I am not as good at being really detail focused, going through everything with that fine-tooth comb, and making sure that everything is just perfect.

And I think a lot of us as analysts might find this challenge, where we are one or the other, right? We’re either very, very detail focused, and we’ll get all of the little tiny pieces of an analysis right, but we might struggle to see how to put it together into a big picture that makes cohesive sense, that tells a story.

And then those of us who are storytellers find it difficult to spend the time and really focus on those very small parts, the row-by-row checking that you have to do when you’re doing analysis to make sure that every line of the data is correct, that each step of the process is absolutely right. And what I’ve noticed is that where I, and many others, are particularly bad about this is when the findings confirm something that we already believe.

This is called confirmation bias. We talk about confirmation bias and attribution theory in episode 13 with Dr. Rick Caymans, if you’d like to check that episode out. The idea is that we have preset notions about how the world works: we make a hypothesis, but we think one answer to that hypothesis is much more likely than the other.

And so with the emergency department question, we thought there was a good chance we would see this increase in emergency department usage outside of typical primary care office hours. So when the data showed exactly that, I was much less likely to spend extra time looking at it, whereas when you get some wildly unlikely answer, something that really surprises you, you go back and you triple-check it: really? Is that really the case?

And so one of the things that I learned is that when I’m doing an analysis, obviously you always want to do a reasonableness check, right? Does this answer make sense? But when you go, yeah, that answer does make sense, you actually need to then go back and check it again, because you’re more likely to let small things slide when your findings confirm what you already believe.

So that is a lesson that I’ve continued to try to put into play, where I don’t let my gut just say, yeah, that looks right, we’re good to move on. Instead I say, all right, have I done all of the things I need to do to make sure the foundation of this analysis is correct, knowing that this is something I’m not naturally good at?

I try to put systems in place, or collaborate with people who have different strengths from mine, to help make sure we get the highest quality analysis possible. So as an example: the analytics tool I use is Alteryx. We have an episode about Alteryx if you are interested in learning more.

That was actually our last episode, episode 49. In that tool, I can build tests and messages to myself, so I can say: hey, alert me if I lose records when I’m joining these two data sets and records don’t match, because that is something that, being me, I might miss as I glance through my analysis. But if I create an alert, it’ll say: hey, not all of these records joined to this other data set. Problem.

Go ahead and check that out. I also collaborate with people who are much more detail oriented than I am. I’ll do the main analysis work, and then I’ll have somebody who’s much more detail focused go through it and ask: do all of these numbers make sense? Do I see anything that doesn’t look right, or a mistake in a formula, or anything like that?
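
Outside of Alteryx, the same guardrail is easy to build in code. Here is a minimal sketch using Python and pandas (the tables and column names are hypothetical), where a left join that silently loses records fails loudly instead:

```python
import pandas as pd

# Hypothetical data: visit records and a facility lookup table.
visits = pd.DataFrame({"facility_id": [1, 2, 3, 4], "visit_count": [10, 5, 8, 2]})
facilities = pd.DataFrame({"facility_id": [1, 2, 3], "name": ["A", "B", "C"]})

# indicator=True adds a _merge column that says which rows found a match.
merged = visits.merge(facilities, on="facility_id", how="left", indicator=True)
unmatched = (merged["_merge"] == "left_only").sum()

# The "alert to myself": surface the problem instead of glancing past it.
assert unmatched == 0, f"{unmatched} visit record(s) did not match a facility"
```

With this toy data the check fires: the visit with facility_id 4 has no matching facility, so the assert raises instead of letting the gap slide past me.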

So it’s important to recognize what we’re good at and what we’re not good at, to create systems that can help support us in the things we know we’re just not all that great at, and to find the teams that make us even more effective. And as a reminder, we want to make sure we don’t fall into the trap of accepting findings that confirm what we already believe with a much lower standard of verification and validation.

It’s really tempting, right? We triple-check the ones that look really strange, and then we let the ones that just make sense to us slide. And oftentimes that works out, because they make sense to us because they’re accurate, right? They reflect what we know and think about the world, and most of the time what we think about the world is fairly accurate.

But sometimes we then perpetuate information that fits our worldview without the same level of critical thinking. And even if you’re not an analyst carrying out intense analytical calculations, we do this in our daily lives as well, right? We read something on social media, we see a tweet, we forward an email from a friend. We share information that fits what we already think.

And we go: yeah, see, I told you, I knew that politician was a dirtbag, or I knew that this program was a terrible idea, or whatever it is. You see a ‘fact’, and I’m putting that in air quotes, that validates your position, and you readily share it without taking the time to verify where that information came from.

What was the source? Did it get taken out of context? Is there more we need to learn about that piece of information that would help us contextualize it, or realize that it’s actually wrong, a mistake, or wildly out of context? So even if you’re not doing an analysis, take that time to pause and ask: where did this information come from?

Did it come from a reputable source? Is this a case where everyone’s just quoting some random person way back who said something? Yes, you can say The New York Times reported blank, but it may turn out, if you look at the New York Times article, that they’re citing a source that’s quite questionable. Taking the time to actually trace information back to its originating source is time consuming and not that much fun.

But if we actually are interested in promoting real information and real understanding of complex issues, sometimes we’ve got to take that time to do it. And I’ll be the first to admit it’s not fun, but it is an important component of being accurate and promoting more nuanced, more effective understandings of our world. The next mistake that I oftentimes make is this:

I’m a data person. I like to have information when I’m making decisions; it makes me feel comfortable that I’m making the best decision I can, the most informed decision, and confident that the decision I make is going to achieve the outcome that I really want. And guess what? In the real world, for most things, you never get that level of data.

You never get that level of accuracy or detail or completeness in your data where you really feel like, yes, I understand exactly what’s going to happen when I make this decision; it’s going to go completely according to plan. One of my favorite examples of this is trying to buy a house. When you are trying to buy a house, you have a set of criteria that define a good house for you. They might have to do with location, school districts, proximity to amenities you need like grocery stores or health care; they might have to do with being close to certain people.

They might have to do with the location of your jobs. There are all sorts of things you might need, and then, obviously, all the characteristics of the house itself: whether it’s on a cul-de-sac, its size, its layout, all those sorts of things. But you don’t get to see every possible house you could buy at the same time. A house will come on the market and you have to evaluate it.

And it’s unlikely that you’re going to find a house that actually has everything you want, sometimes because the things that you want are actually mutually exclusive, right? You might want privacy, so you might want to be a little farther away from things, but you also want quick access to grocery stores and schools. And those things aren’t easy to get in the same place, because privacy is driven by distance from things.

And being close to stuff means not being far away. So you have to optimize, right? You compromise on these things, and you figure out which ones you’re willing to give up a little on to get more of the things that matter to you. So when a house comes on the market, you do that evaluation and you see: does this house actually fit the things that we’re looking for?

How well does it accomplish the things that we want? And you have to decide, without knowing what houses might come on the market in a week or in a month, whether this house is a good one or not. It’s actually not possible to have all the data in one place, because by the time you’ve waited for another house to come on the market, that first one may have sold already.

And so you have to make these decisions with incomplete information, knowing that no matter how hard you try, you’ll never have complete information. And for me, that is a very, very difficult thing to do. Oftentimes my default when put in that situation is to not want to make a decision at all, because I feel like I don’t have enough information, and I’m uncomfortable trying to make that decision and don’t feel confident in it.

But what I’ve learned is that not making a decision is making a decision. When you try not to make a decision, you get forced into an outcome, right? It gets made for you. Except it didn’t really get made for you, because you chose not to make the decision. So in the case of the house example, if I say, oh, I don’t want to have to decide whether or not to buy this house, I’m not going to buy it, that means I have chosen not to buy it without actually deciding whether that was the right choice for me.

And so the lesson that I’ve tried to learn from making this mistake many times, where I just default to the inactive choice because I’m overwhelmed by trying to decide with incomplete information, is that you have to get comfortable in this chaos. You have to be comfortable not knowing fully. You have to be comfortable with inconclusive data.

And that’s a hard place to be. Whether we’re making decisions about programs that we want to expand or shut down, or about which donors to reach out to or not, being in that place of incomplete, inconclusive data is uncomfortable. And so really, all there is for us to do is find a way to be comfortable with that discomfort, or at least to still operate even if we don’t enjoy it, and figure out how to make the decision intentionally anyway.

I read a lot from and follow a wonderful data guru named Emily Oster, and she once wrote a newsletter about the fact that we oftentimes unconsciously hope that there is a secret option C. When you’re faced with two decisions, like buy this house or don’t buy this house, that aren’t easy to answer, or that are conflicted, with hard things and downsides to each of them, we sort of hold out and don’t make the decision in the hope that, magically, this third option C, one that’s just perfect and that we know everything about, will show up.

And I know that’s especially true of real dilemmas, things that are really hard either way, where you’re stuck between two terrible decisions. But I realize it’s the case, too, when we just don’t have enough information: we hope that somehow the information and data will fall from the sky and make everything clear.

And that just doesn’t happen. There’s no secret magic option three, and in a lot of these tricky situations there’s no analysis that, if you just ran it one more time, would make everything clear. So find a way to ask: what will be good enough? What information could I get that’s realistic to get, such that once I have it, I will at that point make a decision?

Now, part of what probably feeds into my discomfort with not having enough information is that I take very big pride in being a data hero. Luke Comiskey, who runs DataDrive, an amazing analytics company, introduced this idea of how we celebrate the data hero, right? The analysis breaks, the data doesn’t load, the executive needs this piece of information right now, and somebody jumps in and makes it happen.

And I’ve always taken huge pride in being that data hero, being able to whip up whatever data is needed. Whatever questions we’ve got, we go and find the answer in the data, no matter how hard or tricky or complicated it is.

But the problem with being the data hero, in addition to feeding my need to have the data to make decisions, is that we celebrate the heroic rescue rather than the boring building of infrastructure, support, and team development that makes us consistently able to deliver the findings and analysis we need without the drama, without the need for the hero.

So my desire to be that hero, to swoop in and solve the analytic problems, has really led me to make some serious mistakes: in developing my team members, and in failing to create lasting systems that can operate without somebody who swoops in and makes them work. And as I have developed my own company, I have tried to learn from that mistake of mine, that tendency to want to just come in and solve all the problems, and to ask: how do I make sure that not only do I deliver a very good analytic product, but that I also help transfer the skills, so that the users of that analytic product are going to be able to do more themselves? They’ll be able to enhance that analytic product if they need to change it, maybe without needing to reach out to me. Or the next time a similar question comes up, they’ll be prepared to take on more of it before they need assistance.

And not only that: how do I celebrate the stuff that’s more mundane, the stuff that doesn’t seem as exciting? It’s not as sexy to say, look at this consistency, as it is to say, look at this cool challenge I solved, I turned it around over the weekend. But instead the goal is to celebrate the consistency, that commitment to making sure we actually have everything lined up the way we need it, so that it runs on its own without these, you know, data emergencies that need to be solved.

How do we work towards that, even though it’s not as exciting, even though it’s not as much fun as feeling like you’ve solved your database administration? In fact, in episode 43 with Kristen Ning, we talk about how analytics maturity is not about complexity, it’s about consistency: how well do you do the right things most of the time? So that’s been something that I’ve made a lot of mistakes about, wanting to be that hero, and that I’ve been trying to work on: really creating that consistency of good work, so that things run without me and I can focus my human energy where it’s going to be most valuable, on the strategic pieces, on upskilling my team members, on thinking about the more critical questions of where we want to go and what data we therefore need to get us there, rather than spending all my energy whipping up a new analysis over the weekend because somebody needs it.

My last mistake comes from the fact that, as you may be able to tell if you’ve listened to some of my episodes, I really like to talk. My younger brother teases me that I learned to talk very early on and just never stopped. And part of the problem with this, especially combined with my penchant for data, my love of building data-based arguments and having all the facts at my command to build up the case for why what I think is the best thing, is that when I’m talking with somebody, I often am not really listening to what they say.

Someone may start to talk and pose a position or an opinion that they have, and if I disagree with it, I’m immediately starting to marshal my facts. I’m coming up with how I’m going to disagree with it, how I’m going to disprove the facts that they brought, and how I’m going to bring up my own facts.

And I stop actually listening to what they have to say. The problem with this is, one, I’m not going to convince anybody whom I haven’t truly heard and shown that I’ve heard. If I’m trying to change someone’s mind, I’m never going to convince them unless they feel like I really get where they’re coming from.

And when it comes to doing analytic work, if I’m so quick to jump to, oh, okay, you have this problem, I know how to solve it, I’m going to solve it this way, and I’ve already jumped to the solution before I’ve fully listened to the challenge, I’m probably going to miss the stuff that’s underneath that problem.

Right? When people have a data issue or a challenge, a question they need answered, oftentimes they’ll come with a surface question. But there’s going to be a deeper question or challenge underneath it that needs to be solved, the one that caused the first to come to the surface. And if you don’t take the time to really listen for what is underneath, you might solve that surface problem and then realize that you haven’t actually made any difference at all, because you didn’t address the problem beneath it.

For example, I was helping one organization that needed to merge several lists of health care facilities together, and there was no ID that linked them. They were having issues because, you know, in one list a facility might be ‘the Medical Center of Aurora’ and in the other it might be ‘the Med Center of Aurora’, and those weren’t joining, right?

Because data have to match exactly to join. And I immediately jumped to: oh, I could do fuzzy matching. We’re going to use an algorithm that knows that ‘med center’ and ‘medical center’ are similar, it’s going to match them, we’ll score everything and figure out the right score thresholds, and we’ll be able to merge these lists together.

And I jumped so quickly to the solution that I missed the bigger question: well, hang on, why are these systems set up in such a way that there isn’t an ID? Could it be that what we actually need to address is how our data are created and stored, instead of reaching immediately for this fancy algorithmic solution?
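
For what it’s worth, the fuzzy-matching instinct itself is easy to sketch. Here is a minimal illustration using Python’s standard-library difflib, not the specific tooling I used, and with made-up facility names:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Score two facility names from 0.0 (unrelated) to 1.0 (identical)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Made-up lists with no shared ID, only near-matching names.
list_a = ["Medical Center of Aurora", "Denver Health"]
list_b = ["Med Center of Aurora", "Denver Health Main Campus"]

for name in list_a:
    # Take the best-scoring candidate from the other list; a real pipeline
    # would also set a minimum score below which a human reviews the match.
    best = max(list_b, key=lambda candidate: similarity(name, candidate))
    print(f"{name!r} -> {best!r} (score {similarity(name, best):.2f})")
```

Even so, as I said, the better first question was why no shared ID existed at all.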

So take the time to really listen to people, whether it’s in relationships or friendships. I mean, certainly my husband would love it if I listened to him better, if I truly heard what he had to say instead of just staring at him while he talked and then interrupting him. When we listen, we can find out where our commonalities are. It strengthens those relationships and makes you more likely to help somebody learn something new if you’re trying to help them see a new opinion.

But you may also find that you learn something new. You were so sure of your opinion and of your facts, but if you actually listen, you may find out that there was information you didn’t have, that there was nuance to that position that maybe you want to develop a little more, or you might even completely reverse your position.

But also, in analytics specifically and not just in our relationships, listening will make sure that you’re solving the problem that’s going to have the biggest impact. So I would like to thank you for listening to me on this podcast and for joining me on this journey of learning and development. Thank you for taking the time to learn new things with me, especially if you’ve been here for all 50 episodes.

Thank you for hearing different perspectives from people who come from completely different backgrounds and got to analytics in so many different ways, and for being open to hearing new questions, maybe asking something you hadn’t thought to ask before. Analytics, like life, is a journey, not a destination. We’re here to work on incremental improvement, not wholesale overhaul.

It’s about blending the best of what we’re good at, and honoring the cost of those strengths and our sometimes-mistakes. And as I hope you’ve appreciated through this episode, analytics, just like life, is about stumbling, getting back up, and trying to learn why we tripped in the first place, so that maybe next time we’ll put our foot down in a better place.

So thank you for following along my potentially painful journey through the mistakes I’ve made. Hopefully there have been a few lessons you’ve learned, and, as you’ve probably heard, I’m still learning from these. As a continued commitment to my own improvement, I have also made a promise to myself to share a mistake every Monday on LinkedIn, along with a lesson I’ve learned, or to ask whether there’s a lesson I could learn, because I haven’t found it yet.

So if you’re interested in following along, you can find me, Alexandra Mannerings, on LinkedIn and let me know what you think about those mistakes. I’d love to hear if there’s any insight that is valuable for you. I certainly am learning more by being willing to admit my mistakes so that I can learn from them. And it’s hard for me.

I don’t like admitting I make mistakes, and so I am hoping that there will be a silver lining in sharing this and that you’ll learn something too. So thanks again for being on this journey with me. You have been listening to Heart, Soul and Data. This podcast is brought to you by Merakinos, an analytics, education, consulting, and data services company devoted to helping nonprofits and social enterprises amplify their impact through data.

I also recently made the mistake of letting my darling husband be in charge of repainting the dining room. I learned just how hard it is to scrub primer paint off a toddler!!


Try It Now:

Join my year-long journey of sharing one mistake a week on LinkedIn.
