Measuring the (Data) Culture of Medicine
with Dr. Anupam B. (Bapu) Jena, Co-author of Random Acts of Medicine
Dr. Anupam B. (Bapu) Jena
Co-author of Random Acts of Medicine
Anupam B. Jena, MD, PhD, is an economist and physician whose research involves health economics and policy including natural experiments in healthcare, as well as the economics of physician behavior and the physician workforce, malpractice, healthcare productivity, and medical innovation. He hosts the Freakonomics, MD podcast, which explores the “hidden side of healthcare.”
Satyen Sangani
Co-founder & CEO of Alation
As the Co-founder and CEO of Alation, Satyen lives his passion of empowering a curious and rational world by fundamentally improving the way data consumers, creators, and stewards find, understand, and trust data. Industry insiders call him a visionary entrepreneur. Those who meet him call him warm and down-to-earth. His kids call him “Dad.”
Producer: (00:00)
Hello and welcome to Data Radicals. In today's episode, Satyen sits down with Bapu Jena, an economist, physician, and professor at Harvard Medical School. He bridges his professions to explore the economics of healthcare productivity and medical innovation. Bapu is also a faculty research fellow at the National Bureau of Economic Research and practices medicine at Massachusetts General Hospital. In this episode, Bapu and Satyen discuss leveraging data in healthcare, applying AI in medicine, and measuring the productivity and innovation of doctors.
Producer: (00:31)
This podcast is brought to you by Alation. Successful companies make data-driven decisions at the right time, quickly, by combining the brilliance of their people with the power of their data. See why thousands of business and data leaders embrace Alation at alation.com.
Satyen Sangani: (00:56)
Anupam Bapu Jena is a physician, economist, researcher, and Harvard Medical School professor. His research spans health economics and policy, including physician behavior and workforce economics, medical malpractice, and innovation. He's the author of the recent book Random Acts of Medicine and the host of the Freakonomics, MD podcast, which delves into the hidden aspects of healthcare. Dr. Jena — Bapu — welcome to Data Radicals.
Bapu Jena: (01:21)
Thank you for having me.
Satyen Sangani: (01:24)
Since you have recently published the book, why don't we dive in and start there? Tell us about the book and what motivated you to write it.
Bapu Jena: (01:29)
The book draws on a lot of my research over the last 10 years or so. The title is Random Acts of Medicine, and it's about how chance occurrences, chance events, impact our health and our lives, and what we can learn from them. I tell the story of a guy I met recently, about a month ago now, who met his wife at the DMV, the Department of Motor Vehicles. He was in line for about two hours and met the person he ended up marrying, which is a totally random thing. You would not tell your son or daughter that this is actionable, that the way to meet your future soulmate is to go to the DMV. The same kind of random thing happens in health all the time. People are hit by cars, or by environmental disasters, or get cancer without any risk factors, and in those settings it's also often not actionable; we can't learn anything from it. But this book is about a bunch of random events that impact our lives and our health where we actually can learn something about what makes healthcare work, or not work.
Predictable randomness
Satyen Sangani: (02:29)
And so, this sounds a lot like, I don't know if you've seen that Gwyneth Paltrow movie [Sliding Doors]. There's one avenue of chance, and then there's another avenue of chance, and your life is totally different based upon these two totally idiosyncratic events. But if they're that idiosyncratic — and that sort of chaotic — then is what you're talking about really that random, or are these events random in any single instance but predictable across a population?
Bapu Jena: (02:55)
I think that they're the latter. I mean, they're random in the sense that any given person who's exposed to it, and I'll give you an example in a moment, but any given person who's exposed to some random event, it's random to them. But it is predictable, and a large part of the book is this idea that if we know what to look for, we can identify these sorts of occurrences all the time, and we can learn something from them. One of the chapters in the book is about marathons and mortality, and it recounts the story of my wife who ran this race years ago which started in one part of Boston, and then went past Mass General Hospital, which is where I work, and she asked me to watch her on the race route. I tried to park at the hospital to do that, but I couldn't get to the hospital because the roads were blocked that day, and they were blocked because of the race.
Bapu Jena: (03:38)
I mentioned this to her hours later, and she says to me, “What happened to everybody that needed to get to the hospital?” Fast forward several months, we had a paper in the New England Journal of Medicine that showed that people who happen to live along marathon routes have higher cardiac mortality during the dates of marathons because people can't get to the hospital. That's random. It is random whether you have a cardiac arrest on the day a marathon is being held near your home, but it tells us something about how delays in care and infrastructure setups can have impacts on our health.
Satyen Sangani: (04:09)
There's kind of a corollary to that, right? I think there were studies, or have been studies, and maybe you can correct me if I'm wrong, but there have been studies where people who live in rural areas farther from healthcare have much higher incidence of mortality or casualty than people who live close to hospitals. So that stands to reason. I mean, distance would be one of those things. During COVID, one of the things that struck us — my wife's also a physician — is that you really wouldn't want to get sick during the times where COVID was at a peak because the hospital system would also be at a huge surge. If you had something that might not be immediately actionable or totally emergent, or even if you did, maybe that wouldn't be a time to get care. Did you see this in your work? Like, how did you actually come up with the idea for this book? As you were practicing medicine, did you see these random acts actually affect how you provided care or your ability to provide care?
Bapu Jena: (04:58)
All the time. A lot of the chapters come from things that I've seen in the hospital or out of the hospital caring for patients, or things that either I or family friends or family members have experienced. And in your case in particular, we've actually looked at this question. It's hard to study, but it's the question of how busy a hospital is and what impact that has on your outcomes. Because if you fall, say you're running outside and you slip on some ice or hit the curb wrong, you don't do that knowing that on that particular day the hospital could be incredibly busy for whatever reason versus another day where it might be less busy. You are, by chance, exposed to hospitals that have varying levels of capacity on any given day, and that allows you to learn something about whether a hospital being really busy leads to worse outcomes. It seems intuitive that it might, but there are also situations where being busy might actually lead to better outcomes, if it means that there are certain things we sometimes do in the hospital that are harmful to patients, and when the hospital is really busy they might not do those things because they have to prioritize. And so it could be that some people actually do better when the hospital is busy.
When a busy hospital is a better hospital
Satyen Sangani: (06:07)
Oh, interesting. And they do better when the hospital is busy because...
Bapu Jena: (06:13)
Let me give you an example from the book and it'll kind of spell it out. So there's a chapter in the book that's titled “What Happens When All the Cardiologists Leave Town?” The basic finding is that during the dates of major cardiology conferences, when the number of cardiologists who are in a hospital might decrease because cardiologists are at these big international meetings like the American Heart Association meeting, you might think that care would get worse in the hospital that day or those days because the balance between the acuity and number of patients who are coming with cardiac problems and the staffing or skill or expertise of the cardiologists who remain behind might not be optimal compared to the rest of the year. You would think that outcomes get worse. What we find actually is that outcomes are quite a bit better. They're actually a lot better. What we show is that rates of certain procedures fall by about 30% on the days of those meetings.
Bapu Jena: (07:04)
What it means is that during the rest of the year, there are instances and there are people that we are intervening on where we think it's a black-and-white decision. We think that this person will benefit from this procedure. But what this sort of analysis tells us is that, you know what? Actually, for this person, the risks outweigh the benefits. There are a lot of things in medicine that are black and white; you know you have to do it. But there's an enormous area where it's actually quite gray and you don't know whether you should do something or not, but you kind of operate under the assumption that more is generally better. That, sometimes, is the case, but it's not always the case.
Satyen Sangani: (07:36)
In that example, as an economist, I think about incentives. It would strike me that the incentive structure in medicine, especially if you're procedure-based, is frankly to do more procedures. There's a financial reward. Did you extend the work to prove or disprove that, or do you have any hypotheses around whether or not it's actually true?
Bapu Jena: (07:56)
Two thoughts on that. One is we primarily look at teaching hospitals for two reasons. One is, in teaching hospitals, many of the people who are working there are the types who would go to these meetings in the first place. That's a place that you would want to look for this sort of effect if it exists. The second thing is, in many teaching hospitals, the way that doctors are paid is often different than outside of a teaching hospital. Outside of a teaching hospital, you might see a more fee-for-service environment where the incentives are such that if you do more, you get paid more. That does happen in teaching hospitals too, but it's much, much more attenuated. But then there's the general question of how much of what we see in medicine is because of financial incentives. My assessment is that we spend a lot of time talking about financial incentives, but they're not really as important as we think they are. And we talk about them because it makes sense, right?
Bapu Jena: (08:46)
It makes sense that if you pay someone more to do something, they will do more of it. And if you can't measure the quality of what's going on or the need for that procedure, you would expect to see more of it. It's just like when I went to the car repair shop the other day: I got a huge list of things that I had to do, and maybe I needed them, maybe I didn't. I have no idea one way or the other. But the shop does get paid for those services. In medicine, it's a little bit different, though, because doctors take an oath to practice in certain ways. Empirically, if you just look at places and environments where doctors are paid fee-for-service versus not, or when fees increase for certain procedures, you do see a little bit of an increase, but that's missing the forest for the trees. Most of the variation is not driven by the way that doctors are paid. It's driven mostly by differences in their practice styles. Some doctors are very invasive, others are not. Some doctors are risk-averse, others are not.
The bias toward action
Satyen Sangani: (09:38)
Given the data that you just mentioned, it strikes me that on average, there's a bias towards action, where action is then defined as a sort of procedure. In some ways, that makes sense because if you've got a hammer, then you use it. I would imagine that these physicians are trained in a particular way to use those skills. Is that how you would explain it, or do you think there's another explanation for it? If you look at it on the face of it, it's like, let's have fewer procedures, or at least let's figure out the criteria so that we can raise the bar for when a procedure is recommended.
Bapu Jena: (10:12)
That's a great point, and several thoughts there. One is that the analogy that we give in the book is the analogy of a soccer goalie who is deciding whether to stay in position, or go left, or go right when there's a penalty kick. If you look at the data, what they often do is go one way or the other, because they feel like they have to act. And sometimes the best thing is just actually to stay put, because the ball often goes right up the middle. Now that is sort of an endogenous decision. This is a chicken-and-egg problem, but there is a human tendency to always want to do something, in part because if someone is doing poorly in front of you, if you don't do anything, what would you have to say at the end if something goes wrong?
Bapu Jena: (10:48)
“Oh, I didn't think I should do anything,” and by the way, this person did poorly. That's not a good place to be in defensively. There's often a desire to try to do more rather than less. This is a place where data can be helpful, and we see guidelines that try to say what the areas are where we should do certain things versus not. But those guidelines are just very aggregate in nature. They're not specific to any given person. That's where I think the art of medicine sort of comes in. It's like, “All right, look, I've seen this problem happen before. Maybe I shouldn't do a procedure in this type of person.”
Data points tracking intensity of care
Satyen Sangani: (11:19)
This bias toward action is fairly interesting because you could see it being true, and to your point, like, if you're a mechanic, you might prescribe more services. If you're a lawyer, you might prescribe more contracts to be written because both... Forget about the economic incentives, although there could be mild influences there. It sounds like there's also this mild influence from just what you know and what you're therefore willing to recommend. Have you seen this pattern emerge in other parts of medicine and your work?
Bapu Jena: (11:47)
We've done a lot of work that looks at variation in how physicians practice. I'll give you two data points which are interesting to me. One is a data point with respect to the intensity of care. If you look at people who come to the hospital, the way that patients are assigned to doctors in the hospital is as good as random. If you come on a Monday, you get Tony. If you come on Tuesday, you get Lisa. If you come on Wednesday, you get Chris. And you come on Thursday, you get Christine. It's pretty random. What that does for us is it allows us to see two things. One is how much variation is there in the way that doctors practice that is a function of their practice style, not a function of the types of patients that they see.
Bapu Jena: (12:30)
Because some doctors will provide very invasive care or intense spending or intense care on patients, but that's because their patients are sicker and they need that. We've got to sidestep that problem somehow. The way we do that is by finding situations where people are as good as randomly assigned to different doctors. That tells us something about their practice styles. We see enormous variation, a 30% to 40% difference between the same types of doctors in the same hospital. An inpatient doctor in one hospital might have a colleague who spends 40% more on similar procedures. The other work that we've done, which is another data point, is in opioids. We see that in the same emergency department, again, people are essentially randomized to the emergency doctor who happens to see them. If you go and see one physician, the likelihood that you walk out with an opioid prescription is 5%.
Bapu Jena: (13:15)
If you see a different physician, it's 25%. So, a huge variation there. Now, what is driving that variation? That's hard to say. There is some work that suggests that things like risk aversion matter. In the case of spending, doctors might just be risk-averse. It's not that they get any benefit from spending more; they're just averse to missing something if they don't get a CT scan, if they don't get an MRI, if they don't get some additional labs, if they don't keep the person in the hospital an additional day. I'd call that risk aversion as a driver.
Work versus analyzing the work
Satyen Sangani: (13:48)
It makes a lot of sense. These are all pretty interesting phenomena because you have a unique perch in that you understand the medicine and you also understand the systems that economists are naturally inclined to study. How much of your time do you spend doing the work versus actually analyzing and thinking about these trends and phenomena? Are you doing clinical medicine a lot, or?
Bapu Jena: (14:10)
It's evolved over time. When I was a resident, a trainee, I was in the hospital all the time. We were working 80 hours a week for several years. In the first few years after residency training, I was probably spending about 25% of my time clinically, seeing patients. It's fallen since then, but I still see patients now. But I work only in the inpatient setting, so I don't see patients in an outpatient setting like a primary care doctor would. The types of people that I would see are people who are hospitalized for something like pneumonia or heart failure, other infections, or other general medical conditions that are acute enough that they require hospital care. Those are the types of people that I would see.
The art of medicine
Satyen Sangani: (14:46)
I would imagine that this work would allow you to also have a view on how medicine is practiced more globally, thinking about protocols and differentials for diagnoses. Where and how are you able to apply that work? Tell us a little bit more about that.
Bapu Jena: (15:02)
I think two areas. One is just a general observation that I've had more recently is that, in medicine, we focus a lot on things like cost of care, how expensive is it to get a certain medication, to see a doctor, get a procedure. We think about things like access. Do people have insurance that would allow them to do those things? Are the types of providers they need to see accessible to them, depending on where they live? Those are things we talk about. One thing that we don't spend a lot of time talking about, though, is the time that's required. Many visits these days, the average is like 17 to 20 minutes, and that might be fine if you've got a runny nose, but it's not fine if you've got a fever and weight loss for three months.
Bapu Jena: (15:39)
There are a lot of situations where minutes are not sufficient to, A, arrive at a diagnosis and, B, make someone feel comfortable that you're doing everything that needs to be done to understand what's going on and explain everything to them. I mean, that could take hours. But how often do we see a doctor and a patient spending hours together? A large part of the problem is that that's a function of our system. So, that's the first thing I've been thinking more about on the practice side. The other thing that I think a lot about, and it sort of dovetails in a lot of ways with the research, is what I might think of as the art of medicine. It relates to this idea of clinical guidelines and sort of cookbook strategies. So, imagine you have three physicians. The first is a doctor who has no idea what the clinical guidelines say about a particular condition. And so, they just sort of rely on their own experience. They rely on what they learned back in medical school and residency a long time ago, but they are not able to keep up to date on the most recent guidelines.
Bapu Jena: (16:36)
So, that doctor is gonna have certain outcomes. Then you've got another doctor who's the polar opposite, who is up to date on all of the clinical guidelines. I mean, they just know them really well and they stick to the guidelines. They never deviate. That's the second type of doctor. That doctor is gonna have some outcomes associated with the way that they practice. And then you've got a third doctor who, let's say, 70% of the time they follow the guidelines. They know the guidelines. So, 70% of the time, if you looked at their practice patterns, they would be consistent with the guidelines. But 30% of the time, they're not. As a researcher and as a clinician, I'm thinking to myself, “Which of those three doctors would I expect to have the best outcomes?” I think a lot of people in medicine would point to doctor No. 2, because doctor No. 2 follows the guidelines exclusively.
Bapu Jena: (17:26)
But my instinct would be that doctor No. 3 might actually be better, because doctor No. 3 knows the guidelines. It's not that they don't know the guidelines. They know them. And there are instances where they deviate from the guidelines with some intention. There's a reason why they did that. I've got to believe that what they're doing is using their training as a doctor, their experience, what they learn from their colleagues, what they learn from other places, supplementing that information with what's in the guidelines and saying, all right, do I need to be thinking about this third of patients differently? It's an empirical question that no one has looked at: which phenotype, the first doctor, the second doctor, or the third doctor, delivers better outcomes. But we as a profession are pushing ourselves towards doctor No. 2. I think what we want to do is push ourselves towards doctor No. 3. Make sure people know the guidelines and the evidence, but also give them some latitude to see how the art of medicine sort of unfolds.
The consumption of healthcare
Satyen Sangani: (18:19)
We talk a lot about this phenomenon when we think about this idea of data culture and enabling people with data. There's this sense that data can take you up to a certain point, but ultimately human judgment has to take over and decide whether or not the data is relevant or applicable in a particular context. And obviously the human body, and even the environment that people exist in, are so complex that you can't always capture the richness of reality in a protocol. Maybe let's switch to the other side of this equation and think about the people who are showing up at hospitals during marathons or trying to get care during COVID — or I guess you can't really decide when you get a heart attack — but who are trying to, you know, sort of get care at these moments.
Satyen Sangani: (19:04)
How does this affect the consumption of healthcare from your perspective? I mean, we're all consumers of healthcare. How do we think about what we should know or do in these instances?
Bapu Jena: (19:12)
That's a great question. That's one thing that people ask me a lot about with these research findings. Often the findings, for lack of better words, are sort of cute. It's interesting that people do poorly when there's a marathon, but what do you do about it? Or it's interesting that people do better during the dates of cardiology meetings, but what do you do about it? The solution is not to ban marathons. The solution is not to have cardiology meetings all year round. That's not the solution. I guess what I would say is, there are two things that we can take from it. One is sort of a scientific point, which is that a large part of medicine is designed to understand a couple of questions. The first question is how quickly do we need to act in any given situation? If your child is at home and it's late at night and they've got a fever and a headache, do you have to call the pediatrician immediately?
Bapu Jena: (20:01)
Do you wait an hour? Do you take them to the emergency room? What do you do? You can never get an answer to that sort of question in a data-driven way because you can't randomize people to say, oh, okay, here's a thousand people who have chest pain. Half of you go to the emergency room right now, half of you stick it out for an hour or two, listen to a podcast and then go to the emergency room. You'd never do that, right? The marathon is interesting because it sort of gives us that natural experiment where we can actually replicate that experiment for any number of medical conditions and say, here are the ones where minutes do matter and here are the ones where minutes do not matter and you could actually take your time, but get to that in a data-driven way. So that would be sort of the insight that you get from that kind of study.
Bapu Jena: (20:41)
The cardiology meeting study, what it tells me is that sometimes we do too much to people. What's the take-home for a patient or a family member? The take-home is that whenever someone is suggesting that you do something, whether it be to take a new medication or to receive a procedure, generally I think it's always good to have a discussion about what the risks are, what the benefits are, and am I the type of patient for whom this is, for the doctor, a no-brainer, like, “You gotta get this done” — or is this a question where the doctor might say, “You know what? I could kind of go either way on it.” If a doctor says to you that they could go either way on it, it's maybe something where you actually think to yourself, “Maybe I do go either way on it.
Bapu Jena: (21:24)
“Maybe I don't do it.” I think what my work has shown is that doctors, when they're given that discretion, do come to the right decision. In the cardiology meetings paper, we know that cardiovascular procedures are on average beneficial. We know that from clinical trials. So how could it be the case that if you don't do procedures, people do better? The only way it can be true is if the people who we are not doing the procedures on now were the ones at the margin who were going to be harmed; their benefit is outweighed by the risk. And guess what? The cardiologists, if push comes to shove, are figuring that out. How do we leverage that sort of innate learning and the expertise of the doctor? Maybe it's just by having patients ask questions about whether or not they need this particular thing to happen.
The summer birthday causation
Satyen Sangani: (22:11)
Makes a ton of sense. You have another example in your book with regard to children with summer birthdays experiencing higher health risks, such as ADHD diagnoses. Tell us about that example. Because that example strikes me as not falling within this idea of “minutes matter,” nor within this context of understanding whether or not there should be intervention. What's the learning from that particular case? And maybe also start by telling the audience what that case is.
Bapu Jena: (22:34)
The general finding is that every state has a cutoff for school entry. And so in my state, Massachusetts, if you are five years old by September 1, then you can enter kindergarten. If you turn five on September 3, then you have to wait a year to enter kindergarten. And the finding is that kids who have August birthdays are about 30% more likely to be diagnosed and treated medically for ADHD compared to kids with September birthdays. And the reason why is that the August-born kids are almost a year younger than their peers in the same kindergarten or first- or second-grade class. When the teacher observes some fidgetiness or hyperactivity in some of these kids, the perception is that this might be ADHD, not that this child is just a year younger relative to his or her peers.
Bapu Jena: (23:21)
That's the finding. Then the question is, all right, what do you do about it? I think one is that it speaks to the subjectivity of diagnosis, how to show that over-diagnosis might occur, and then how to quantify how often subjective factors might be at play. So if this were like a 2% effect, that's very different from a 30% effect, because it suggests that there is actually a lot of discretion going on in that diagnosis. And in areas where there's a lot of discretion, that's a data-driven place where you say, “Look, maybe we need to close out this discretion. Maybe this is too much discretion that's being applied to the diagnosis.” There are some, I think, tangible to-dos for parents and doctors. For example, if someone is discussing the diagnosis of ADHD and your child has an August birthday and you're in a state which has a September 1 cutoff, I think it would be very reasonable to say, recognizing that there's a lot of research that now shows this:
Bapu Jena: (24:12)
Is this something where we should pull the trigger, make a diagnosis, and start treatment now, or should we give it three to six months, let that relative age difference collapse a little bit, and see where this child ends up? The same thing could be true on the doctor's end. As the doctor is literally clicking a diagnosis of ADHD, the computer, the electronic health record, can say, “Are you aware that this child has an August birthday and research shows this?” That might just give them a moment of pause to pull back and say, “You know what? I am kind of marginal on this diagnosis right here.” Again, this is only for those cases where it's not black and white, but I think doctors have the ability to figure out which are the black-and-white cases and which are the ones that are a little bit more gray.
AI in medicine
Satyen Sangani: (24:55)
All of these instances strike me as examples where AI within the EMR, or at least in the context of the EMR, could be helpful. I have multiple friends, one of whom is working on a company to summarize the EMR, particularly the unstructured notes, and read back to the physician what's happened in these complicated cases. Another case is where folks are trying to use AI to guide practice and — the way they put it — practice at the top of their license. Is this something that you see a lot of hope and excitement in? Are you more pessimistic about AI in medical practice? How do you see this evolving and how much have you considered the impact?
Bapu Jena: (25:33)
It could be helpful in a lot of different ways. There are areas that are more pattern-recognition–based. Imaging, electrocardiograms, that sort of data, eye imaging: those are places where it's really pattern recognition that's so important. If the physician doesn't recognize the pattern, then AI will be helpful. There are also pattern-recognition problems where I think AI can be helpful in a way that we don't yet appreciate. What I mean by that is there are certain features that we look for in elements of data. For example, if we have 1,000 or 100,000 electrocardiograms, EKGs of the heart, there are certain features that indicate the likelihood of certain diseases. And we know those things and we know to look for them; whether we actually look for them or not is a different story. And that's where a machine could help.
Bapu Jena: (26:18)
But there are also features in the electrocardiograms that we do not currently know are indicative of some disease. And an algorithm could elucidate that. It could find out, look, okay, we compare a million people with the disease and many million people without the disease. We compare the electrocardiograms and we identify features of the electrocardiogram that humans previously had not even known were signatures, if you will. So I think that's another place where AI can be very useful. And then the other place where I think it can be helpful is in the diagnostic process, where you have a patient who comes to you, they have a set of symptoms that they describe, and they have a huge medical history that comes before that. That could include past hospitalizations, doctor's visits, prior medications that they've been on and done well with or failed, laboratory results, imaging, all this information, some of which a doctor would be able to look through, but much of which a doctor cannot look through.
Bapu Jena: (27:14)
I see a lot of opportunity for AI to say, “This is the set of symptoms that this person has, with all this information that we know about this person. Here's the differential diagnosis of conditions that you should consider.” The doctor can then look at that and say, “Here are things that I had considered, here are things that I had not considered, and here are things that I had considered but didn't really weigh high enough. But — oh wow — there's a data point right here that suggests that this should actually be higher than I had previously placed it in my mind.” One of the core things we do in medicine is diagnosis. Diagnosis relies on the information that you have available to make a decision. That means you have to have the knowledge to think about all the things that are possible as a differential diagnosis. That's a place where a machine, I think, could do a really good job of helping doctors.
Outcomes vs. innovations
Satyen Sangani: (28:04)
I'm sure you get pitched this all the time. Given your role as faculty at Harvard and your work bridging medicine and sort of population-level behavior, I would imagine that you get pitched on tons of technology, and you write a little bit about this idea of innovation in medicine. The thing that's interesting about innovation is that there are technologies at this point for about everything in the galaxy; there's a new data tool being born every day. I'm sure the same is true for physicians and the amount of technology that they get pitched. How do you correlate the outcomes of these technologies with the actual innovation itself? And how do you think about that work? Because I would imagine that people are always trying to sell something or get something new or come up with a new idea. What can you do to measure sort of what the outcomes of these things are?
Bapu Jena: (28:46)
It's very difficult. The standard way that we do this for medical technologies like drugs or devices is randomized trials, where we compare people who receive the technology versus not and look at an outcome that we care about. Often, that's difficult to do, so we use observational approaches to answer those questions. Often those are not that well done, so one thing I've advocated for is trying to use some of these more natural-experiment methods to establish causality when it comes to new interventions. But at the end of the day, I think that there has to be some demonstrable outcome benefit to any technology. One thing that is quite interesting is that we have put a premium on the innovativeness of the technology. So there could be a new molecule that attacks a pathway that has never been attacked before.
Bapu Jena: (29:36)
Well, if that molecule doesn't improve life expectancy or improve quality of life, then there's not a lot of value to me in that innovation, even though it's certainly innovative. I care more about whether or not it impacts patients' lives. The corollary to that is that you could have a medication which does not appear to be that “innovative” at all because it's just a reboot, in some respects, of other medications. But it's taken in a way that people are more likely to be adherent to, or you give it once every week or once every month. Those types of technologies are sometimes pooh-poohed, but they could be very valuable, because what ultimately matters is the outcome of whether or not a person gets better when they're on that medication, not how innovative it is. This is also a problem when it comes to data-driven interventions as well, because there's a lot of interest in AI and, I'll just call it, non-medical technologies or non-life-science technologies. The key there is you've got to demonstrate that there's some outcome benefit.
The politics of standards of care
Satyen Sangani: (30:35)
It's super interesting. I think that idea of innovation for innovation's sake takes over in many cases in terms of how people think, but then looking at population-based outcomes tends to be really hard to do. You end up looking at a procedural input versus looking at the end-state output. You've been into some, you know, I guess in this day and age, what feels like somewhat touchy territory, where you've also done research on whether or not political party affiliation affects standards of care, and you wrote an article there. You've also done some work to understand how affirmative action bans hurt or help health equity; in your case, you sort of conclude “hurt.” Tell us about some of this work, because even though I know people would like to believe that physicians and medical care are blind, if you will, in their application, it sounds like they're not, or at least in some cases are not.
Bapu Jena: (31:25)
What I try to do in the work is steer clear of some of the political undertones. So for example, I'll pick on the affirmative action one in particular. We had a study that looked at the impact of affirmative action bans on the representation of underrepresented minorities. Does it change? When a ban comes into place, you do see a reduction in underrepresented minorities in medical schools. Now directionally, that shouldn't be too surprising. I think the core question is, what's the magnitude of the effect? Is it small? Is it large? I think we find an effect that's pretty meaningful in its effect size. In that work we talk about, all right, well, why might you care about affirmative action bans? There are a lot of different reasons people may or may not care about them. They may care about them from an equity perspective.
Bapu Jena: (32:14)
I would say I have my own beliefs on that. But generally my own view is that that's for the world to decide, right? You give them what the relationships are, what the effects of various interventions are, and let them decide, let people decide whether or not they think it's important from a social perspective to adopt a certain policy. But the other point that we've made is, aside from any equity consideration, there is possibly also what I would call an efficiency consideration. Meaning there is pretty good work that suggests that Black patients may get better care or better outcomes if treated by Black doctors. There are all sorts of ways you could take that sort of finding and say, “Okay, why do we need to have Black doctors to treat Black patients? Aren't white doctors just as good?” Or the same thing could be asked about male and female doctors.
Bapu Jena: (33:04)
And all those things might be true. But at the end of the day, it is an empirical question, right? You can have an opinion about what should and should not happen or what things may or may not look like. But there are questions that you can actually answer with data. And people have looked at this issue. If you look at patients and you essentially quasi-randomize them to doctors who look more like them as opposed to more different from them, do we observe better outcomes? The answer is that we do observe better outcomes. So there might be an equity consideration that people talk about and I would agree with that. But there might also be sort of an efficiency consideration as well. Can we improve health outcomes by allocating resources in a different way? And that's sort of a different perspective, and that is a little bit more sort of politics or policy agnostic. Even in the controversial areas where I'll sometimes dabble, I do try to say, look, this is a diverse country. People have a lot of different views and I certainly don't try to let my views influence the kinds of implications that we find in our work. I wouldn't consider myself to be an advocate in that kind of way, though I do see why it's important.
The reception to Dr. Jena’s work
Satyen Sangani: (34:09)
Yeah, those second-order effects are hard to call. We call them externalities in economics, and they're difficult to predict in these complex systems. In all of these findings, you are trying to bring them back into the world of healthcare. How have people reacted to your work? On the face of it, it all seems quite obvious, quite straightforward; one would want to make change. On the flip side, at least in my experience, and I think in common experience, healthcare is remarkably difficult to change because the systems are so rooted in lots of incentives and history and background and blah, blah, blah. What have you found in terms of the reception?
Bapu Jena: (34:49)
It depends on the type of question that we are looking at. There are two types of feedback that I get. The positive feedback is that the work is very interesting: it's sort of creative, it's clever, it doesn't lack for creativity or cleverness. That part I think we've got down. The applicability, I think, is a place where there might be some debate. What I try to say is, “I don't consider myself to be building policy.” Like, if you wanna develop policy or evaluate policy, you evaluate policy. And there are ways to do that. I think of this work as almost more sort of basic science. It helps us understand how doctors think, how nurses think, how patients think, and how various factors in the healthcare system might affect outcomes in ways that we hadn't thought about before.
Bapu Jena: (35:30)
That's where I really see my role and what excites me about this kind of work. But I do that recognizing that often there's not a direct policy implication, because I'm not literally testing policy A versus policy B. Then there's another strand of work, which we kind of just talked about. I'll give a specific example. We've done a lot of work on pay disparities between men and women in medicine, and there's a huge literature in economics about the gender pay gap. One of the things that has struck me as being quite interesting about our field in medicine is that, unlike most other occupations, we actually have very detailed data on the things that might be inputs to someone's compensation. In medicine, for example, in the work that we've done, we know where a person trained. Did they go to a prestigious or less prestigious medical school?
Bapu Jena: (36:22)
What is their specialty? Where did they train in that specialty? How old are they? How many procedures do they do, or what is the clinical volume of patients that they see every year? What's their insurance mix? Do they have malpractice citations against them? Do they write scientific articles? If so, how many? Where are they published? Good journals, not-as-good journals. Do they run clinical trials? Have they got NIH funding? We've got an enormous set of factors that we can control for, which, in most other occupations, you wouldn't be able to do. Like, how would you figure out what a “data scientist” is doing? What is their productivity? How would you measure that? Here we actually have tools to measure the productivity of these workers, and we still find these large differences.
Bapu Jena: (37:11)
When we publish a study that says, holding 15 different factors constant, women get paid about 15% less than men in medicine, there are sort of two camps. One camp is like, okay, we know this exists and this is an important thing to show. Another camp is like, well, there are all sorts of other reasons why this could be true. Most of those we actually account for in the work itself. I approach that question not from a political perspective or an equity perspective, to say I believe men and women should be paid equally. I have feelings on that, but that's sort of not where I sit. Here's where I come in: this is the data, here's what you see, you should make of it what you will, but we shouldn't ascribe these differences to things that are not drivers.
The challenges of influencing change
Satyen Sangani: (37:46)
Even though you don't have a political motivation — and anybody who is in the realm of science would say, look, this is just cause and effect; I've got a dataset, I come to a conclusion; I've got a dataset, I come to an insight; I've controlled for these variables — the implication of the output of this work can be political, and so can the result, because if one were to try to close that gap, some men would get paid less and some women would get paid more. There's a splitting-up-the-pie element to it. I think you find that a lot in medicine. I feel like, particularly with insurance companies, and even in some cases with physicians, people are reticent to change. What have you found in terms of being able to break through those walls in your own experience? What's worked for you, what hasn't, and how do you face the challenges?
Bapu Jena: (38:34)
One thing that I'm sure you've thought about, and I've thought about a lot, comes up in the work where, for example, we show that female doctors get paid less. We had a paper with Ashish Jha, Yusuke Tsugawa, and a few others years ago which showed that female doctors in the hospital setting had better outcomes. The pushback that we got from that — there was a lot of positive feedback — but the negative feedback that we got was, “Could you have published this paper if you showed that men had better outcomes?” That's a reasonable question, to which I would say, “I don't know if you could've published that paper.” We would've had to check and see. It might've been publishable in an economics journal and perhaps not in a medical journal, where I think there are more ingrained beliefs about what a medical journal will support versus not.
Bapu Jena: (39:18)
But what I push back on is this: as I said, I totally get their point about whether or not this paper would have appeared had it found something different. But in addition to that, maybe you should try and outline 12 different other things that are incorrect about the finding itself. When someone says, “All right, it's politically convenient to show this, that's why it gets published. Would it have been published if there were a different result?” I totally buy that. That is an issue, but you have to also attack the question on its substance. If that's the only thing that you have to hang your hat on against the study, that to me is not a very strong position to be in. A much stronger position to be in is to say, yes, I think that is true, but also there are these other issues with the study which I think limit the way that it's been interpreted.
Satyen Sangani: (40:05)
That criticism feels like a pretty obvious deflection from having to discuss the realities of the finding. Let's say men have better outcomes than women, then why don't we go prove that and let's then decide what we wanna do about it, and where and when. That's true. Although, I don't know, having been a recipient of medical care, I'd be surprised if that were the case.
Bapu Jena: (40:26)
And it could be, in some cases, yes; in some cases, no. It's ultimately an empirical question. This is sort of a gestalt here, but I feel like medicine is probably more driven by beliefs about results and what we should see than I would like it to be. Economics has a lot of faults, but one of the things that I think it does pretty well is that it's intellectually very open to the types of findings that might come out of a research study, whether they're “controversial” or not. You could never publish a study that shows, using the same methods that current studies have, that coffee is not associated with X, Y, Z, or that peanuts or quinoa or red wine is not associated with X, Y, Z. It is a better finding, for lack of better words, to show that it is positively associated with some health outcome. And I think that's a problem that is not unique to medicine, but we see it a little bit more there than I would like to see.
How comfortable are clinicians with data?
Satyen Sangani: (41:19)
Do you find that when physicians get these datasets, when they get this sort of technological capability indicating that they might be biased, they are generally receptive to taking on these tools in their practice? What has been your experience in getting people, and particularly physicians, to actually use data more in their work?
Bapu Jena: (41:40)
Part of the problem is that there are a number of different issues about why these sorts of studies even exist in the first place. Economics has a lot of things that it needs to work on — so I don't wanna hold economics up as a gold standard of how science should be conducted — but one thing that it does pretty well is that it takes seriously the issue of cause and effect in most of the work that economists try to do. That is not the case in most medical studies outside of randomized controlled trials, which are essentially designed to establish cause and effect. But there's this whole swath of studies that we would call observational studies where there's no semblance of cause and effect that you could reasonably take from any of them. And yet they get published, and often in good places, and you kind of have to ask, well, why are they getting published?
Bapu Jena: (42:23)
Is it because the researchers don't know the methods, don't know why the particular questions they're asking or the approaches they're taking are problematic? I think that's probably some of it. They may not have the training. But it also probably falls somewhat on the journals, and the journal editors probably do have the training to be able to suss things out. But journals have a different set of incentives. They might respond to what the public is interested in knowing, which is the only rationale I could ever think of for why we see so many low-quality nutritional studies in even the most preeminent journals of medicine. It doesn't make sense why we'd see these pretty bad studies get published in really good places. The only thing I can think of is that either, A, the journal editors aren't aware of the problems, which I do not think is true; I think that they know the problems, but there is a demand that they're responding to, and that is an appetite for this kind of information.
Measuring healthcare quality
Satyen Sangani: (43:15)
Your work is really interesting to me because, to the extent that we're trying to build data cultures in institutions and organizations and people are trying to do that work, doctors are a particularly interesting population, because there are kind of two sides to the data culture coin. One of them is this idea of how do we get doctors, and people in general, to use more data in their work, and what are the barriers to data literacy? What are the barriers to using data as a habit? That strikes me as interesting ground, because even though you would think doctors are scientists by nature (at least in Western medicine, all doctors are fundamentally scientists), you still see that interesting behavioral pattern. On the flip side, the other thing that you've done a little bit of work on is measuring the productivity of a doctor. So the other part of building a data culture is, well, how do we build performance-based measurement? And measuring the performance of a doctor can be pretty challenging. What have you found as you've studied some of that work? Because I can imagine there are near-term outcomes and long-term outcomes, and there are outcomes you see and don't see. I mean, how do you think about that?
Bapu Jena: (44:15)
It's very difficult. The way I think about it is, first we have to figure out what we mean by quality. What is it that determines quality? There are a lot of frameworks for thinking about what quality care looks like. It should be safe, meaning you're not gonna get the wrong leg operated on. It should be effective, meaning that if you have something that's ailing you, you get better after the care is delivered. And there are other kinds of domains of quality that we care about. So first, how do we define quality? Then the next question is, well, how do we measure it? We often measure it by looking at the processes that go into it. So if someone comes in with pneumonia or an infection, do they get antibiotics? That's a process measure of quality. If they do, that's a check mark in the good direction.
Bapu Jena: (44:58)
If they don't, that's a ding. The other way to think about quality is to measure the actual outcome itself. Did the person die from pneumonia? Were they hospitalized for an extended period of time because the pneumonia was not adequately treated? Those sorts of things, as you know, are already complicated to do. And it's not only because of a data issue; now we do have data to be able to answer those types of questions. Did you get X, Y, Z? Did you live or die? How long were you hospitalized? Did you come in and out of the hospital? There are lots of different ways to measure those sorts of outcomes. One of the fundamental problems that we have to deal with is not really a data problem but more what I'd call an empirical problem: how do we know that the quality differences are a result of something that the doctor, the nurse, or the hospital did versus something that would have happened anyway because of the type of patient being treated?
Bapu Jena: (45:50)
If we observe that someone has bad outcomes from pneumonia, is it because the doctors failed to do certain things or because the patients failed to do certain things? Or is it because there are other risk factors that we couldn't adequately account for that explain that finding? That's where I think the rubber hits the road: trying to figure out what the things to measure are, and then how to measure them in a way that is causally attributable to the unit of observation that we care about, meaning the doctor, the hospital, the nurse, the therapist, whatever it is. That's where we require really good data, which is where data companies are doing a great job, but we also require some level of creativity. It's not just plug and chug: we see a patient seen by this provider, we compute their outcomes, and oh, there you go, that's the quality of that provider. There's got to be some attention and thought paid to how we know that what we observe is because of the provider versus any number of other factors. And there is some creativity that's required to answer that question.
Satyen Sangani: (46:50)
Where do these productivity measures and outcome measures come up? Where do you see them now being most applied and who's asking for it? Is it hospitals that are trying to measure the quality of their provider care? Is it the insurance companies that are trying to measure whether or not they should be believing or trusting what a provider tells them?
Bapu Jena: (47:06)
It is coming up everywhere. It's coming up in those two domains. It comes up from the federal government, from a regulatory perspective: CMS, the Centers for Medicare & Medicaid Services, is a big payer of care, and they want to know whether or not hospitals are delivering high-quality care. There's a lot of interest in measuring the quality of care. There's a lot of interest in paying for better quality care. It's not a crazy thing to think that if there's poor quality, you don't wanna pay for it, and if there's good quality, you wanna pay for it. I think the biggest problem is figuring out what we mean by good quality, and then how we accurately measure it so that it actually is a byproduct of the activity or the organization that we're trying to incentivize. That's where I think people who are critical of quality measurement sort of come in.
Bapu Jena: (47:49)
And then there's just sort of a more macro-level question, which is that we have been able to measure things a lot better in the last 20 years than ever before. Quality measurement as a field, as a discipline, has taken on a life in the last 20 years that is very different from 20 to 40 years ago. We can ask, well, what do we see as a result of the fact that we're measuring quality much more? Do we see quality getting better in a lot of substantive ways? I think quality experts might disagree on that. I think a fair number of quality experts would say, we've measured the hell out of quality, but I don't know that quality has actually improved so much.
What’s next
Satyen Sangani: (48:29)
I think it also depends, to your point, ultimately, on what your definition of quality is. You can define quality as patient satisfaction, long-term outcomes, short-term outcomes, the cost of the visit, or the cost of the care provided relative to the outcome. There are so many different ways to do it. What's interesting about your work is that it says all that's great, but at least you should try, or at least there's the ability to have that conversation. Even if the perfect measure doesn't exist, don't let the perfect be the enemy of the good, which I think is true, frankly, of medicine and true for most human endeavors and outcomes that are complicated and complex. You've written this amazing book. You're obviously treating patients, practicing medicine, doing research. What's next on the horizon for you? And what are you looking at in your work?
Bapu Jena: (49:17)
A lot more of the above. [laughter] That's sort of how I spend my time. I do research, and I'm lucky that I'm able to ask questions that I personally find interesting. So, the book: I mentioned a couple of the findings, the marathons and mortality, the cardiology meetings. What happens when cardiologists go out of town? I'm attracted to those questions because they're just quirky and clever, and I really want to know the answer to that question. Then we spend some time to try to unpack, well, why might this matter for healthcare beyond just the initial finding? But what really drives me is I'm just interested in this stuff. It's fun for me, and I'm lucky to be able to do it. So I'll continue to teach, continue to do research, see patients, and do podcasts.
Satyen Sangani: (49:58)
And do podcasts. Thank you a ton for coming on. You're one of the many people we're fortunate enough to call a Data Radical, and I think the curiosity that you mentioned is exactly what everybody who listens to this podcast is going for. I appreciate your time and look forward to having you back on at some point in the future.
Bapu Jena: (50:17)
Thank you. I appreciate it.
Satyen Sangani: (50:25)
When it comes to practicing the science of medicine, many consider data to be the be-all and end-all. But Bapu explained that he sees three types of physicians. Physician 1 is unfamiliar with the guidelines for a particular condition and relies solely on their experience. Physician 2 never deviates from the guidelines, and Physician 3 is a 70/30 split: they follow the guidelines 70% of the time and rely upon their experience for the other 30%. Bapu believes that the approach of Physician 3 can lead to a culture of data and insights. Those who know the guidelines and integrate their training, learnings from colleagues, and medical know-how deliver better patient outcomes. That's true of medicine and likely true for most knowledge-oriented professions in our world. Thanks for listening, and thanks to Bapu for joining today. I'm Satyen Sangani, CEO of Alation. Data Radicals, keep learning and sharing. Until next time.
Producer: (51:18)
This podcast is brought to you by Alation. Does data governance get a bad rap at your business? Today, Alation customers wield governance to drive business value, increase efficiencies, and reduce risk. Learn how to reposition data governance as a business enabler, not a roadblock. Download the white paper Data Governance Is Valuable: Moving to an Offensive Strategy.