[THEME MUSIC FADES]
The book passage I read at the top is from the epilogue, and I think it’s a truly fitting sentiment for the conclusion of this podcast series—because it calls back to the very beginning.
As I’ve mentioned before, Carey, Zak, and I wrote The AI Revolution in Medicine as a guide to help answer these big questions, particularly as they pertain to medicine. You know, we wrote the book to empower people to make a choice about AI’s development and use. Well, have they? Have we?
Perhaps we’ll need more time to tell. But over the course of this podcast series, I’ve had the honor of speaking with folks from across the healthcare ecosystem. And my takeaway? They’re all committed to shaping AI into a tool that can improve the industry for practitioners and patients alike.
In this final episode, I’m thrilled to welcome back my coauthors, Carey Goldberg and Dr. Zak Kohane. We’ll examine the insights from the second half of the season.
[TRANSITION MUSIC]
Carey, Zak—it’s really great to have you here again!
CAREY GOLDBERG: Hey, Peter!
ZAK KOHANE: Hi, Peter.
LEE: So this is the second roundtable. And just to recap, you know, we had several early episodes of the podcast where we talked to some doctors, some technology developers, some people who think about regulation and public policy, patient advocates, a venture capitalist who invests in, kind of, consumer and patient-facing medical ventures, and some bioethicists.
And I think we had a great conversation there. I think, you know, it felt mostly validating. A lot of the things that we predicted might happen happened, and then we learned a lot of new things. But now we have five more episodes, and the mix of kinds of people that we talk to here is different than the original.
And so I thought it would be great for us to have a conversation and recap what we think we heard from all of them. So let’s just start at the top.
So in this first episode in the second half of this podcast series, we talked to economists Azeem Azhar and Ethan Mollick. And I thought those conversations were really interesting. Maybe there were, kind of, two things, two main topics. One was just the broader impact on the economy, on the cost of healthcare, on overall workforce issues.
One of the things that I thought was really interesting was something that Ethan Mollick brought up. And maybe just to refresh our memories, let’s play this little clip from Ethan.
ETHAN MOLLICK: So we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it. … We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 and 40% performance improvements. … But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work, and the systems don’t have the ability to, kind of, learn or adapt as a result.
LEE: So let me start with you, Zak. Does that make sense to you? Are you seeing something similar?
KOHANE: I thought it was incredibly insightful because we discussed on our earlier podcast how a chief AI officer at one of the healthcare systems was highly regulating the use of AI but in her own practice, on her smartphone, was using all these AI technologies.
And so it’s insightful that on the one hand, she is increasing her personal productivity, …
LEE: Right.
KOHANE: … and perhaps she’s increasing the quality of her care. But it’s very hard for the healthcare system to actually realize any gains. It’s unlikely … let’s put it this way. It would be for her a defeat if they said, “Now you should see more patients.”
LEE: Yes. [LAUGHS]
KOHANE: Now, I’m not saying that won’t happen. It could happen. But, you know, gains of productivity are really at the individual level of the doctors. And that’s why they’re adopting it. That’s why the ambient dictation tools are so successful. But really turning it into things that matter in terms of productivity for healthcare, namely making sure that patients are getting healthy, requires that every piece of the puzzle works well together. You know, it’s well-tread ground to talk about how patients get very expensive procedures, like a cardiac transplant, and then go home, and they’re not put on blood thinners …
LEE: Right.
KOHANE: … and then they get a stroke. You know, the chain is as strong as the weakest link. And just having AI in one part of it is not going to do it. And so hospitals, I think, are doubly burdened by the fact that, (A) they tend to not like innovation because they are high-revenue, low-margin companies. But if they want it implemented effectively, they have to do it across the entire processes of healthcare, which are vast and not completely under their control.
LEE: Yeah. Yep. You know, that was Sara Murray, who’s the chief health AI officer at UC San Francisco.
And then, you know, Carey, remember, we were puzzled by Chris Longhurst’s finding in a controlled study that, you know, having an AI respond to patient emails didn’t seem to lead to any, I guess you would call it, productivity benefits. I remember we were both kind of puzzled by that. I wonder if that’s related to what Ethan is saying here.
GOLDBERG: I mean, possibly, but I think we’ve seen since then that there have been multiple studies showing that in fact using AI can be extremely effective or helpful, even, for example, for diagnosis.
And so I find just from the patient point of view, it kind of drives me crazy that you have individual physicians using AI because they know that it will improve the care that they’re offering. And yet you don’t have their institutions kind of stepping up and saying, “OK, these are the new norms.”
By the way, Ethan Mollick is a national treasure, right. Like, he is the classic example of someone who just stepped up at this moment …
LEE: Yeah.
GOLDBERG: … when we saw this extraordinary technological advance. And he’s not only stepping up for himself. He’s spreading the word to the masses that this is what these things can do.
And so it’s frustrating to see the institutions not stepping up and instead the individual doctors having to do it.
KOHANE: But he made another very interesting point, which was that the reason that he could be so informative to not only the public but practitioners of AI is these things would emerge out of the shop, and they would not be aged too long, like a fine wine, before they were just released to the public.
And so he was getting exposure to these models just weeks after some of the progenitors had first seen it. And therefore, because he’s actually a really creative person in terms of how he exercises models, he sees uses and problems very early on. But the point is institutions, think about how much they are disadvantaged. They’re not Ethan Mollick. They’re not the progenitors. So they’re even further behind. So it’s very hard. If you talk to most of the C-suite of hospitals, they’d be delighted to know as much about the impact as Ethan Mollick.
LEE: Yeah. By the way, you know, I picked out this quote because within Microsoft, and I suspect every other software company, we’re seeing something very similar, where individual programmers are 20 to 30% more productive just in the number of lines of code they write per day or the number of pull requests per week. Any way you measure it, it’s very consistent. And yet by the time you get to, say, a 25-person software engineering team, the productivity of that whole team isn’t 25% more productive.
Now, that is starting to change because we’re starting to figure out that, well, maybe we should reshape how the team operates. And there’s more of an orientation towards having, you know, smaller teams of full-stack developers. And then you start to see the gains. But if you just keep the team organized in the usual way, there seems to be a loss. So there’s something about what Ethan was saying that resonated very strongly with me.
GOLDBERG: But I would argue that it’s not just productivity we’re talking about. There’s a moral imperative to improve the care. And if you have tools that will do that, you should be using them or trying harder to.
LEE: Right. Yep.
KOHANE: I think, yes, first of all, absolutely you would. Unfortunately, most of the short-term productivity measures will not measure improvements in the quality of care because it takes a long time to die even with bad care.
And so that doesn’t show up right away. But I think what Peter just said actually came across in several of the podcasts, which is that it’s very tricky trying to shoehorn these things into making what we’re already doing more productive.
GOLDBERG: Yeah. Existing structures.
KOHANE: Yeah. And I know, Carey, that you’ve raised this issue many times. But it really calls into question, what should we be doing with our time with doctors? And they are a scarce resource. And what is the most efficient way to use them?
You know, I remember we [The New England Journal of Medicine AI] published a paper by someone who was able to use AI to increase the throughput of their emergency room by more appropriately placing the truly sick people in the triage queue for urgent care.
And so I think we’re going to have to think that way more broadly: we don’t have to look at every patient as an unknown with maybe a few pointers on diagnosis. We can have fairly extensive profiling.
And I know that colleagues in Clalit [Health Services] in Israel, for example, are using the overall trajectory of the patient and some considerations about utilities to actually figure out who to see next week.
LEE: Yeah, you know, what you said brings up another maybe connection to one thing that we see also in software development. And it also relates to what we were discussing earlier: the last thing a doctor wants is a tool that allows them to see yet more patients per day.
So in software development, there’s always this tension. Like, how many lines of code can you write per day? That’s one productivity measure.
But sometimes we’re taught, well, don’t write more lines of code per day, but make sure that your code is well structured. Take the time to document it. Make sure it’s fully commented. Take the time to talk to your fellow software engineering team members to make sure that it’s well coordinated. And in the long run, even if you’re writing half the number of lines of code per day, the software process will be far more efficient.
And so I’ve wondered whether there’s a similar thing where doctors could see 20% fewer patients in a day, but if they take the time and also had AI help to coordinate, maybe a patient’s journey might be half as long. And therefore, the health system would be able to see twice as many patients in a year’s period or something like that.
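Peter’s back-of-the-envelope can be made explicit with a toy calculation. This is an illustrative sketch only—the numbers are invented, not from the conversation—but it shows how seeing fewer patients per day can still raise annual throughput if coordination shortens each patient’s journey.

```python
# Illustrative arithmetic for the throughput trade-off described above:
# fewer visits per day can still mean more patients served per year if
# each patient's journey needs fewer visits. All numbers are invented.

def annual_patients(visits_per_day, working_days, visits_per_journey):
    """Patients whose full journeys the system can complete in a year."""
    return visits_per_day * working_days / visits_per_journey

baseline = annual_patients(20, 250, 4)      # 20 visits/day, 4-visit journeys
coordinated = annual_patients(16, 250, 2)   # 20% fewer visits, journeys halved

print(baseline, coordinated)
```

Under these made-up numbers the coordinated system completes 2,000 journeys a year versus 1,250 at baseline, even though daily visit capacity fell by 20 percent.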
KOHANE: So I think you’ve “nerd sniped” me [LAUGHTER]—which is all too easy—but I think there’s a central issue here. The stumbling block between the individual productivity Ethan’s telling us about and the larger productivity is the team’s productivity.
And there is actually a good analogy in computer science, and that’s Brooks’s “mythical man-month,” …
LEE: Yes, exactly.
KOHANE: … where he shows how you can have more and more resources, but when the coordination starts failing, because you have so many individuals on the team, you start falling apart. And so even if the individual doctors get that much better—yeah, they take better care of patients, do fewer stupid things.
But in terms of giving the “I get you into the emergency room, and I get you out of a hospital as fast as possible, as safely as possible, as effectively as possible,” that’s teamwork. And we don’t do it. And we’re not really optimizing our tools for that.
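Brooks’s point about coordination can be made concrete with a small sketch (my framing, not from the conversation): the number of pairwise communication channels in a team grows quadratically with headcount, so coordination cost outpaces added capacity.

```python
# Toy illustration of Brooks's "mythical man-month" observation:
# pairwise communication channels grow quadratically with team size,
# so adding members raises coordination cost faster than capacity.

def communication_channels(n: int) -> int:
    """Number of distinct pairwise channels in a team of n members."""
    return n * (n - 1) // 2

for team_size in (2, 5, 10, 25):
    print(team_size, communication_channels(team_size))
```

A 25-person team has 300 potential channels versus a pair’s one, which is one way to see why individual gains fail to compound at the team level.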
GOLDBERG: And just to throw in a little reality check, I’m not aware of any indication yet that AI is in any way shortening medical journeys or making physicians more efficient. Yet …
LEE: Right.
GOLDBERG: … at least. Yeah.
LEE: Yes. So I think, you know, with respect to our book, critiquing our book, you know, I think it’s fair to say we were fairly focused or maybe even fixated on the individual doctor or nurse or patient, and we didn’t really, at least I never had a time where I stepped back to think about the whole care coordination team or the whole health system.
KOHANE: And I think that’s right. It’s because, first of all, we weren’t thinking about it. It’s not what we’re taught in medical school. We’re not taught to talk about team communication excellence. And I think it’s absolutely essential.
There was an early effort by [Terry] Winograd. He was trying to capture what are the different kinds of actions related to pronouncements that you could expect and how AI could use that. And that was beginning to get at it.
But I actually think this is dark matter of human organizational technology that is not well understood. And our products don’t do it well. You know, we can talk about all the groupware things that are out there. But they all don’t quite get to that thing.
LEE: Right.
KOHANE: And I can imagine an AI serving as a team leader, a really active team leader, a real quarterback of, let’s say, a care team.
LEE: Well, in fact, you know, we have been trying to experiment with this. My colleague, Matt Lungren, who was also one of the interviewees early on, has been working with Stanford Medicine on a tumor board AI agent—something that would facilitate tumor board meetings.
And the early experiences are pretty interesting. Whether it relates to efficiency or productivity I think remains to be seen, but it does seem pretty interesting.
But let’s move on.
GOLDBERG: Well, actually, Peter, …
LEE: Oh, go ahead.
GOLDBERG: … if you’re willing to not quite move on yet …
LEE: [LAUGHS] All right.
GOLDBERG: … this kind of segues into one of, I think, the most provocative questions that arose in the course of these episodes and that I’d love to have you answer, which was, remember, it was a question at a gathering that you were at, and you were asked, “Well, you’re focusing a lot on potential AI effects on individual patient and physician experiences. But what about the revolution, right? What about, like, can you be more big-picture and envision how generative AI could actually, kind of, overturn or fix the broken system, right?”
I’m sure you’ve thought about that a lot. Like, what’s your answer?
LEE: You know, I think ultimately, it will have to. For it to really make a difference, I think that the normal processes, our normal concept of how healthcare is delivered—how new medical discoveries are made and brought into practice—I think those things are going to have to change a lot.
You know, one of the things I think about a lot right at the moment is, you know, we tend to think about, let’s say, medical diagnosis as a problem-solving exercise. And I think, at least at the Kaiser Permanente School of Medicine, the instruction really treats it as a kind of detective thing based on a lot of knowledge about biology and biomedicine and human condition, and so on.
But there’s another way to think about it, given AI, which is when you see a patient and you develop some data, maybe through a physical exam, labs, and so on, you can just simply ask, “You know, what did the 500 other people who are most similar to this experience, how were they diagnosed? How were they treated? What were their outcomes? What were their experiences?”
And that’s really a fundamentally different paradigm. And it just seems like at least the technical means will be there. And by the way, that also then relates to [the questions]: “And what was most efficacious cost-wise? What was most efficient in terms of the total length of the patient journey? How does this relate to my quality scores so I can get more money from Medicare and Medicaid?”
All of those things, I think, you know, we’re starting to confront.
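The “500 most similar patients” paradigm Peter describes amounts to a nearest-neighbor lookup over patient feature vectors. The sketch below is hypothetical—the features, records, and outcomes are invented for illustration and don’t come from any real system.

```python
# Hypothetical sketch of the "most similar patients" paradigm: encode
# each patient as a feature vector, retrieve the nearest neighbors,
# and inspect their recorded outcomes. All data here is invented.
import math

def similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(patient, cohort, k=3):
    """Return the k cohort records most similar to the index patient."""
    ranked = sorted(cohort,
                    key=lambda r: similarity(patient, r["features"]),
                    reverse=True)
    return ranked[:k]

# Invented records: features are (age, systolic BP, creatinine).
cohort = [
    {"features": [62, 150, 1.9], "outcome": "treated with diuretics"},
    {"features": [24, 118, 0.8], "outcome": "observation only"},
    {"features": [65, 155, 2.1], "outcome": "treated with diuretics"},
]
index_patient = [63, 148, 2.0]

for record in most_similar(index_patient, cohort, k=2):
    print(record["outcome"])
```

A production version would of course use thousands of features, privacy-preserving retrieval, and proper risk adjustment; the point is only that the paradigm is retrieval over outcomes rather than rule-based deduction.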
One of the other episodes that we’re going to talk about was my interview with two medical students. Actually, thinking of Morgan Cheatham as just a medical student or medical resident [LAUGHTER] is a little strange. But he is.
One of the things he talks about is the importance that he placed in his medical training about adopting AI. So, Zak, I assume you see this also with some students at Harvard Medical School. And the other medical student we interviewed, Daniel Chen, seemed to indicate this, too, where it seems like it’s the students who are bringing AI into the medical education ahead of the faculty. Does that resonate with you?
KOHANE: It absolutely resonates with me. There are students I run into who, honestly, my first thought when I’m talking to them is, why am I teaching you [LAUGHTER], and why are you not starting a big AI company, AI medicine company, now and really change healthcare instead of going through the rest of the rigmarole? And I think broadly, higher education has a problem there, which is we have not embraced, again, going back to Ethan, a lot of the tools that can be used. And it’s because we don’t know necessarily the right way to teach them. And so far, the only lasting heuristic seems to be: use them and use them often.
And so it’s an awkward thing, where the person who knows how to use the AI tools now in the first-year medical school can teach themselves better and faster than anybody else in their class who is just relying on the medical school curriculum.
LEE: Now, the reason I brought up Morgan now after our discussion with Ethan Mollick is Morgan also talked about AI collapsing medical specialties.
GOLDBERG: Yes.
LEE: And so let’s hear this snippet from him.
MORGAN CHEATHAM: AI collapses medical specialties onto themselves, right. You have the canonical example of the cardiologist, you know, arguing that we should diurese and maybe the nephrologist arguing that we should, you know, protect the kidneys. And how do two disciplines disagree on what is right for the patient when in theory, there is an objective best answer given that patient’s clinical status? … So I’m interested in this question of whether medical specialties themselves need to evolve. And if we look back in the history of medical technology, there are many times where a new technology forced a medical specialty to evolve.
LEE: So on the specific question about specialties, Zak, do you have a point of view? And let me admit, first of all, for us, all three of us, we didn’t have any clue about this in our book. I don’t think.
KOHANE: Not much. Not much of a clue.
So I’m reminded of a New Yorker cartoon where you see a bunch of surgeons around the patient, and someone says, “Is that a spleen?” And it says, “I don’t know. I slept during the spleen lecture,” [LAUGHTER] and … or “I didn’t take the spleen course.”
And yet when we measure things, we measure much more than we think we are measuring. So for example, we [NEJM AI] just published a paper where echocardiograms were being done. And it turns out those ultrasound waves just happen to also permeate the liver. And you can actually diagnose along the way, with AI, all the liver disease—treatable liver disease—that’s in those patients.
But if you’re a cardiologist, “Liver? You know, I slept through liver lecture.” [LAUGHTER] And so I do think that, (A) the natural, often guild/dollar-driven silos in medicine are less obvious to AI, despite the fact that they do exist in departments and often in chapters.
But Morgan’s absolutely right. I can tell you as an endocrinologist, if I have a child in the ICU, the endocrinologist, the nephrologist, and the neurosurgeon will argue about the right thing to do.
And so in my mind, the truly revolutionary thing to do is to go back to 1994 with Pete Szolovits, the Guardian Angel Project. What I think you need is a process. And the process is the quarterback. And the quarterback has only one job: take care of the patient.
And it should be thinking all the time about the patient. What’s the right thing? And can be as school-marmish or not about, “Zak, you’re eating this or that or exercise or sleep,” but also, “Hey, surgeons and endocrinologists, you’re talking about my host, Zak. This is the right way because this problem and this problem and our best evidence is this is the right way to get rid of the fluid. The other ways will kill him.”
And I think you need an authoritative quarterback that has the view of the others but then makes the calls.
LEE: Is that quarterback going to be AI or human?
KOHANE: Well, for the very lucky people, it’ll be a human augmented by AI, super concierge.
But I think we’re running out of doctors. And so realistically, it’s going to be an AI that will have to be certified in very different ways, along the lines Dave Blumenthal describes: essentially, trial by fire. Like putting residents into clinics, we’re going to be putting AIs into clinics.
But what’s worse, by the way, than the three doctors arguing about care in front of the patient is what happens so frequently: you then see them as an outpatient, and each one of them gives you a different set of decisions to make. Sometimes those actually interact pathologically, unhealthily, with each other. And only the very smart nurses or primary care physicians will actually notice that and call, quote, a “family meeting,” or bring everybody into the same room to align them.
LEE: Yeah, I think this idea of quarterback is really very, very topical right now because there’s so much intensity in the AI space around agents. And in fact, you know, the Microsoft AI team under Mustafa Suleyman and Dominic King, Harsha Nori, and team just recently posted a paper on something called sequential diagnosis, which is basically an AI quarterback that is supposed to smartly consult with other AI specialties. And interestingly, one of the AI agents is sort of the devil’s advocate that’s always criticizing and questioning things.
GOLDBERG: That’s interesting.
LEE: And at least on very, very hard, rare cases, it can develop some impressive results. There’s something to this that I think is emerging.
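The quarterback-and-specialists pattern can be sketched as a small orchestration loop. This is a sketch under stated assumptions, not the design of the Microsoft sequential-diagnosis paper: the agent names and stubbed logic here are invented, and a real system would call language models where the lambdas stand in.

```python
# Minimal sketch of a "quarterback" orchestrator consulting specialist
# agents plus a devil's-advocate critic. Agent logic is stubbed with
# lambdas; real systems would invoke LLMs at each step.
from typing import Callable

def quarterback(case: str,
                specialists: dict[str, Callable[[str], str]],
                critic: Callable[[str, str], str]) -> dict[str, str]:
    """Collect each specialist's opinion plus the critic's challenge."""
    report = {}
    for name, agent in specialists.items():
        opinion = agent(case)
        report[name] = f"{opinion} | critique: {critic(name, opinion)}"
    return report

# Stub agents standing in for model calls (hypothetical behavior).
specialists = {
    "cardiology": lambda case: "recommend diuresis",
    "nephrology": lambda case: "protect the kidneys",
}
critic = lambda name, opinion: f"what evidence supports '{opinion}'?"

for entry in quarterback("fluid overload case", specialists, critic).items():
    print(entry)
```

The design point is that the critic sees every opinion, so disagreement between specialties surfaces explicitly instead of being resolved by whoever speaks last.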
GOLDBERG: And, Peter, Morgan said something that blew me away even more, which was, well, why do we even need specialists if the reason for a specialist is because there’s so much medical knowledge that no single physician can know all of it, and therefore we create specialists, but that limitation does not exist for AI.
LEE: Yeah. Yeah.
GOLDBERG: And so there he was kind of undermining this whole elaborate structure that has grown up because of human limitations that may not ultimately need to be there.
LEE: Right. So now that gives me a good segue to get back to our economist and get to something that Azeem Azhar said. And so there’s a clip here from Azeem.
AZEEM AZHAR: We didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week. You know, the idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
LEE: And, you know, in the same conversation, he also talked about his own management of asthma and the fact that he’s been managing this for several decades and knows more than any other human being, no matter how well medically trained, could possibly know. And it’s also very highly personalized. And it’s not a big leap to imagine AI having that sort of lifelong understanding.
KOHANE: So in fact, I want to give credit back to our book since you insulted us. [LAUGHTER] You challenged us. You doubted us. We do have at the end of the book an AI that is helping this woman manage her way through life. It’s quarterbacking all these different services for the woman.
LEE: Yes.
KOHANE: So there.
LEE: Ah, you’re right. Yes. In fact, it’s very much, I think, along the lines of the vision that Azeem laid out in our conversation.
GOLDBERG: Yeah. It also reminded me of the piece Zak wrote about his mother at one point when she was managing congestive heart failure and she needed to watch her weight very carefully to see her fluid status. And absolutely, I see no reason whatsoever why that couldn’t be done with AI right now. Although back then, Zak, you were writing that it takes much more than an AI [LAUGHS] to manage such a thing, right?
KOHANE: You need an AI that you can trust. Now, my mother was born in 1927, and she’d learned through the school of hard knocks that you can’t trust too many people, maybe even not your son, MD, PhD [LAUGHTER].
But what I’ve been surprised [by] is, for example, how many people are willing to trust—and actually see effective use of—AI as mental health counselors.
GOLDBERG: Yeah.
KOHANE: So it may in fact be that there’s a generational thing going on, and at least there’ll be some very large subset of patients which will be completely comfortable in ways that my mother would have never tolerated.
LEE: Yeah. Now, I think we’re starting to veer into some of the core AI topics.
And so I think maybe one of the most fun conversations I had was in the episode with both Sébastien Bubeck, my former colleague at Microsoft Research, and now he’s at OpenAI, and Bill Gates. And there was so much that was, I thought, interesting there. And there was one point, I think that sort of touches tangentially on what we were just conversing about, that Sébastien said. So let’s hear this snippet.
SÉBASTIEN BUBECK: And one example that I really like, a study that recently appeared where … they were comparing doctors without and with ChatGPT. … So this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. … But then the kicker is that doctors with ChatGPT was 80%. Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. It should be, as Bill was saying, kind of running continuously in the background, sending you notifications.
LEE: So I thought Sébastien was saying something really profound, but I haven’t been able to quite decide or settle in my mind what it is. What do you make of what Seb just said?
KOHANE: I think it’s context. I think that it requires an enormous amount of energy, brain energy, to actually correctly provide the context that you want this thing to work on. And it’s only going to really feel like we’re in a different playing field when it’s listening all the time, and it just steps right in.
There is an advantage that, for example, a good programmer can have in prompting Cursor or any of these tools. But it takes effort. And I think being in the conversation all the time so that you understand the context in the widest possible way is incredibly important. And I think that’s what Seb is getting at, which is if we spoon-feed these machines, yes, 90%.
But then, talking to a human being who then has to interact and gets distracted from whatever flow they’re in and maybe even makes them feel like an early bicycle rider who all of a sudden realizes, “I’m balancing on two wheels—oh no!” And they fall over. You know, there’s that interaction which is negatively synergistic.
And so I do think it’s a very hard human-computer engineering problem. How do we make these two agents, human and computational, work in an ongoing way in the flow? I don’t think I’m seeing anything that’s particularly new. And the things that you’re beginning to hint about, Peter, in terms of agentic coordination, I think we’ll get to some of that.
LEE: Yeah. Carey, does this give you any pause? They’re puzzling results. I mean, the idea of doctors with AI—at least in this one test, and it’s just one test—doing worse than the AI alone is odd.
GOLDBERG: Yes. I would want to understand more about the actual conditions of that study.
From what Bill Gates said, I was most struck by the question of resource-poor environments—that even though this was absolutely one of the most promising, brightest perspectives that we highlighted in the book, we still don’t seem to be seeing a lot of use among the half of humanity that lacks decent access to healthcare.
I mean, there are access problems everywhere, including here in the United States. And it is one of the most potentially promising uses of AI. And I thought if anyone would know about it, he would with the work that the Gates Foundation does.
LEE: You know, I think both you and Bill, I felt, are really simpatico. You know, Bill expressed genuine surprise that more isn’t happening yet. And it really echoed, maybe even using some of the exact same words, things you’ve said. Two years on, you’ve repeatedly said you expected to have seen more out in the field by now. And I thought Bill was saying something very similar in our conversation.
GOLDBERG: Yeah.
LEE: You know, for me, I see it both ways. I see the world of medicine really moving fast in confronting the reality of AI in such a serious way. But at the same time, it’s also hard to escape the feeling that somehow, we should be seeing even more.
So it’s an odd thing, a little bit paradoxical.
GOLDBERG: Yeah. I think one thing that we hardly focused on at all in the book but that we are seeing is these companies rising up, stepping up to the challenge—Abridge and OpenEvidence—and what Morgan describes as a new stack, right.
So there is that on the flip side.
LEE: Now, I want to get back to this thing that Seb was saying. And, you know, I had to bring up the issue of sycophancy, which we discussed at our last roundtable also. But it was particularly … at the time that Seb, Bill, and I had our conversation, OpenAI had just gone through having to retract a fresh update of GPT-4o because it had become too sycophantic.
So I can’t escape the feeling that some of these human-computer interaction issues are related to this tension between you want AI to follow your directions and be faithful to you, but at the same time not agree with you so often that it becomes a fault.
KOHANE: I think it’s asking the AI to enter into a fundamental human conundrum, which is there are extreme versions of doublethink, and there’s everyday things, everyday asks of doublethink, which is how to be an effective citizen.
And even if you’re thinking, “Hmm. I’m thinking this. I’m just not going to say it because that would be rude or counterproductive.” Or some of the official doublethinks, where you’re actually told you must say this, even if you think something else. And I think we’re giving a very tough mission for these things: be nice to the user and be useful.
And in education, the two are not always one and the same. Sometimes you have to give a little tough love to educate someone, and doing that well is an art, and it’s also very difficult. And so, you know, I’m willing to believe that the latest frontier models that have made the news in the last month are very high-performing, but they’re also all highlighting that tension …
LEE: Yes.
KOHANE: … that tension between behaving like a good citizen and being helpful. And this gets back to what are the fundamental values that we hope these things are following.
It’s not, you know, “Are these things going to develop us into the paperclip factory?” It’s more of, “Which of our values are going to be elevated, and which one will be suppressed?”
LEE: Well, since I criticized our book before, let me pat ourselves on the back this time because, I think, pervasive throughout our book, we were touching on some of these issues.
In fact, we started the book, you know, with GPT-4 scolding me for wanting it to impersonate Zak. And there was the whole example of asking it to rewrite a poem in a certain way, and it kind of silently tried to slide by, you know, without me knowing, without following through on the whole thing.
And so that early version of GPT-4 was definitely not sycophantic at all. In fact, it was just as prone to call you an idiot if it thought you were wrong. [LAUGHTER]
KOHANE: I had some very testy conversations around my endocrine diagnosis with it. [LAUGHTER]
GOLDBERG: Yeah. Well then, Peter, I would ask you, I mean last time I asked you about, well, hallucinations, aren’t those solvable? And this time I would ask you, well, sycophancy, isn’t that kind of like a dial you can turn? Like, is that not solvable?
LEE: You know, I think there are several interlocking problems. But if we assume superintelligence, even with superintelligence, medicine is such an inexact science that there will always be situations that are guesses that take into account other factors of a person’s life, other value judgments, exactly as Zak had pointed out in our previous roundtable conversation.
And so I think there’s always going to be an opening for either differences of opinion or agreeing with you too much. And there are dangers in both cases. And I think they’ll always be present. I don’t know that, at least in something as inexact as medical science, I don’t know that it’ll ever be completely eliminated.
KOHANE: And it’s interesting because I was trying to think what’s the right balance, but there are patients who want to be told this is what you do. Whereas there’s other patients who want to go through every detail of the reasoning.
And it’s not a matter of education. It’s really a temperamental, personality issue. And so we’re going to have to, I think, develop personalities …
LEE: Yeah.
KOHANE: … that are most effective for those different kinds of individuals. And so I think that is going to be the real frontier. Having human values and behaving in ways that are recognizable and yet effective for certain groups of patients.
LEE: Yeah.
KOHANE: And lots of deep questions, including how paternalistic do we want to be?
LEE: All right, so we’re getting into medical science and hallucination. So that gives me a great segue to the conversations in the episode on biomedical research. And one of the people that I interviewed was Noubar Afeyan from Moderna and Flagship Pioneering. So let’s listen to this snippet.
NOUBAR AFEYAN: We, some hundred or so times a year, ask “what if” questions that lead us to totally weird places of thought. We then try to iterate, iterate, iterate to come up with something that’s testable. Then we go into a lab, and we test it. So in that world, right, sitting there going, like, “How do I know this transformer is going to work?” The answer is, “For what?” Like, it’s going to work to make something up … well, guess what? We knew early on with LLMs that hallucination was a feature, not a bug for what we wanted to do.
LEE: [LAUGHS] So I think that really touches on just the fact that there’s so many unknowns and such lack of precision and exactness in our understanding of human biology and of medicine. Carey, what do you think?
GOLDBERG: I mean, I just have this emotional reaction, which is that I love the idea of AI marching into biomedical science and everything from getting to the virtual cell eventually to, Zak, I think it was a colleague of yours who recently published about … it was a new medication that had been sort of discovered by AI, and it was actually testing out up to the phase II level or something, right?
KOHANE: Oh, this is Marinka’s work.
GOLDBERG: Yeah, Marinka, Marinka Zitnik. And … yeah. So, I mean, I think it avoids a lot of the, sort of, dilemmas that are involved with safety and so on with AI coming into medicine. And it’s just the discovery process, which we all want to advance as quickly as possible. And it seems like it actually has a great deal of potential that’s already starting to be realized.
LEE: Oh, absolutely.
KOHANE: I love this topic. First of all, I think Bill and Seb had interesting things to say on that very topic, rationales I had not really considered for why, in fact, things might progress faster in the discovery space than in the clinical delivery space, just because we don’t know in clinical medicine what we’re trying to maximize precisely. Whereas for a drug effect, we do know what we’re trying to maximize.
LEE: Well, in fact, I happened to save that snippet from Bill Gates saying that. So let’s cue that up.
BILL GATES: I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, “I’m an organic chemist,” or “I run various types of assays.” I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.
LEE: So, Zak, isn’t that Bill saying exactly what you’re saying?
KOHANE: That is my point. I have to say that this is another great bet, that either we’re all going to be surprised or a large group of people will be surprised or disappointed.
There’s still a lot of people in the sort of medicinal chemist, trialist space who are still extremely skeptical that this is going to work. And we haven’t quite shown them yet that it is. Why have we not shown them? Because we haven’t gone all the way to a phase III study, which would show that the drug behaves as expected, is effective, and basically doesn’t hurt people. That turns out to require a lot of knowledge. I actually think we’re getting there, but I understand the skepticism.
LEE: Carey, what are your thoughts?
GOLDBERG: Yeah. I mean, there will be no way around going through full-on clinical trials for anything to ever reach the market. But at the same time, you know, it’s clearly very promising. And just to throw out something for the pure fun of it, Peter, I saw … one of my favorite tweets recently was somebody saying, you know, isn’t it funny how computer science is actually becoming a lot more like biology in that it’s just becoming empirical.
It’s like you just throw stuff at the AI and see what it does. [LAUGHTER] And I was like, oh, yeah, that’s what Peter was doing when we wrote the book. I mean, he understood as many innards as anybody can. But at the same time, it was a totally empirical exercise in seeing what this thing would do when you threw things at it.
LEE: Right.
GOLDBERG: So it’s the new biology.
LEE: Well, yeah. So I think we talked in our book about accelerating, you know, biomedical knowledge and medical science. And that actually seems to be happening. And I really had fun talking to Daphne Koller about some of the accomplishments that she’s made. And so here’s a little snippet from Daphne.
DAPHNE KOLLER: This will impact not only the early stages of which hypotheses we interrogate, which molecules we move forward, but also hopefully at the end of the day, which molecule we prescribe to which patient. And I think there’s been obviously so much narrative over the years about precision medicine, personalized medicine, and very little of that has come to fruition, with the exception of, you know, certain islands in oncology, primarily on genetically driven cancers.
LEE: So, Zak, when I was listening to that, I was reminded of one of the very first examples that you had where, you know, you had a very rare case of a patient, and you’re having to narrow down some pretty complex and very rare genetic conditions. This thing that Daphne says, that seems to be the logical conclusion that everyone who’s thinking hard about AI and biology is coming to. Does it seem more real now two years on?
KOHANE: It absolutely seems more real. Here’s some sad facts. If you are at a cancer center, you will get targeted therapies if you qualify for it. Outside cancer centers, you won’t. And it’s not that the therapies aren’t available. It’s just that you won’t have people thinking about it in that way. And especially if you have some of the rare and more aggressive cancers, if you’re outside one of those cancer centers, you’re at a significant disadvantage for survival for that reason. And so anything that provides just the “simple,” in quotes, dogged investigation of the targeted therapies for patients, it’s a home run.
So my former graduate student, Atul Butte, died recently at UCSF, where he was both a professor and the leader of the Bakar Institute, and he was a Zuckerberg Chan Professor of Pediatrics.
He was diagnosed with a rare tumor two years ago. His wife is a PhD biologist, and when he was first diagnosed, she sent me the diagnosis and the mutations. And I don’t know if you know this, Peter, but this was still when we were writing the book and people didn’t know about GPT-4.
I put those mutations and the diagnosis into GPT-4. And I said, “I’d like to help treat my friend. What’s the right treatment?” And GPT-4, to paraphrase, said, “Before we start talking about treatment, are you sure this is the right diagnosis? Those mutations are not characteristic for that tumor.” And he had been misdiagnosed. And then they changed the diagnosis, the therapy, and some personnel.
So I don’t have to hallucinate this. It’s already happened, and we’re going to need this. And so I think targeted therapy for cancers is the most obvious use. And if God forbid one of you has a family member who has cancer, it’s moral malpractice not to look at the genetics and run it past GPT-4 and say, “What are the available therapies?”
LEE: Yeah.
KOHANE: I really deeply believe that.
LEE: Carey, I think one thing you’ve always said is that you’re surprised that we don’t hear more stories along these lines. And I think you threw a quote from Mustafa Suleyman back at me. Do you want to share that?
GOLDBERG: Yes. Recently, I believe it was a Big Technology interview, and the reporter asked Mustafa Suleyman, “So you guys are seeing 50 million medical queries a day [to Copilot and Bing]. You know, how’s that going?” And I think I am a bit surprised that we’re not seeing more stories of all types. Both here’s how it helped me and also here was maybe, you know, a suggestion that was not optimal.
LEE: Yeah. I do think in our book, we did predict both positive and negative outcomes of this. And it is odd. Atul was very open with his story. And of course, he was such a prominent leader in the world of medicine.
But I think I share your surprise, Carey. I expected by now that a lot more public stories would be out. Maybe there is someone writing a book collecting these things, I don’t know.
KOHANE: Maybe someone called Carey Goldberg should write that book. [LAUGHTER]
GOLDBERG: Write a book, maybe. I mean, we have Patients Use AI, which is a wonderful blog by Dave deBronkart, the patient advocate.
But I wonder if it’s also something structural, like who would be or what would be the institution that would be gathering these stories? I don’t know.
LEE: Right.
KOHANE: And that’s the problem. You see, this goes back to the same problem that [Ethan] Mollick was talking about. Individual doctors are using them. The hospital as a whole is not doing that. So it’s not judging, as part of its quality metrics, how well the AI is performing and what new has happened. And the other audience, namely the patients, have no mechanism. There is no mechanism to go to the Better Business Bureau and say, “They screwed up,” or “This was great.”
LEE: So now I want to get a little more futuristic. And this gets into whether AI is really going to get almost to the ab initio understanding of human biology. And so Eric Topol, who is one of the guests, spoke to this a bit. So let’s hear this.
LEE: So you talk about a virtual cell. Is that achievable within 10 years, or is that still too far out?
ERIC TOPOL: No, I think within 10 years for sure. You know, the group that got assembled, that Steve Quake pulled together, I think has 42 authors in a paper in Cell. The fact that he could get these 42 experts in life science and some in computer science to come together and all agree that not only is this a worthy goal, but it’s actually going to be realized, that was impressive.
LEE: You know, I have to say Eric’s optimism took me aback. Just speaking as a techie, I think I started off being optimistic: as soon as we can figure out molecular dynamics, biology can be solved. And then you start to learn more about biochemistry, about the human cell, and then you realize, oh, my God, this is just so vast and unknowable. And now you have Eric Topol saying, “Well, in less than 10 years.”
KOHANE: So what’s delightful about this period is that those of us who are cautious were so incredibly wrong about AI two years ago. [LAUGHTER] That’s a true joy … I mean, absolute joy. It’s great to have your futurism made much more positive.
But I think that we’re going from, you know, for example, AlphaFold has had tremendous impact. But remember, that was built on years of acquisition of crystallography data that was annotated. And of course, the annotation process becomes less relevant as you go down the pipe, but it started from that.
LEE: Yes.
KOHANE: And there’s lots of parts of the cell. So when people talk about virtual cells—I don’t mean to get too technical—mostly they’re talking about perturbation of gene expression. They’re not talking about, “Oh, this is how the lysosome and the centrosome interact, and notice how the Golgi bodies bump into each other.”
There’s a whole bunch of other levels of abstraction we know nothing about. This is a complex factory. And right now, we’re sort of the level from code into loading code into memory. We’re not talking about how the rest of the robots work in that cell, and how the rest of those robots work in the cell turns out to be pretty important to functioning.
So I’d love to be wrong again. And in 10 years, oh yeah, not only, you know, our first in-human study will be you, Dr. Zak. We’re going to put the drug in you because we fully simulated you. That’d be great.
LEE: Yes.
KOHANE: And, by the way, just to give people their due, there probably was a lot of animal research that could be done in silico and that for various political reasons we’re now seeing happen. That’s a good thing. But I think that sometimes it takes a lot of hubris to get us where we need to get, but my horizon is not the same as his.
LEE: So I guess I have to take this time to brag. Just recently, our AI for Science team published in Science a biological emulator that does pretty long-timespan, very precise, and very efficient molecular dynamics, biomolecular dynamics emulation. We call it emulation because it’s not simulating every single time step but giving you the final conformations.
KOHANE: That’s an amazing result.
LEE: Yeah.
KOHANE: But … that is an amazing result. And you’re doing it in some very important interactions. But there’s so much more to do.
LEE: I know, and it’s single molecules; it’s not even two molecules. There’s so much more to go for here. But on the other hand, Eric is right, you know, 42 experts writing for Cell, you know, that’s not a small matter.
KOHANE: So I think sometimes you really need to drink your own hallucinogens to actually succeed. Because remember, when the Human Genome Project was launched, we didn’t know how to sequence at scale.
We said maybe we would get there. And then in order to get the right funding and excitement and, I think, focus, we predicted that by early 2000s we’d be transforming medicine. Has not happened yet. Things have happened, but at a much slower pace. And we’re 25 years out. In fact, we’re 35 years out from the launch.
But again, things are getting faster and faster. Maybe the singularity is going to make a whole bunch of things easier. And GPT-6 will just say, “Zak, you are such a pessimist. Let me show you how it’s done.”
GOLDBERG: Yeah.
It really is pessimism versus optimism. I mean, biology is such a bitch, right. [LAUGHTER] Can we actually get there?
At the same time, everyone was surprised and blown away by the, you know, the quantum leap of GPT-4. Who knows when enough data gets in there if we might not have a similar leap.
LEE: Yeah. All right.
So let’s get back to healthcare delivery. Besides Morgan Cheatham, we talked to [a] more junior medical student who’s at the Kaiser Permanente School of Medicine, Daniel Chen. And, you know, I asked him about this question of patients who come in armed [LAUGHS] with a lot of their own information. Let’s hear what he said about this.
DANIEL CHEN: But for those that come in with a list, I sometimes sit down with them, and we’ll have a discussion, honestly. … “I don’t think you have meningitis because, you know, you’re not having a fever. Some of the physical exam maneuvers we did were also negative. So I don’t think you have anything to worry about that,” you know. So I think it’s having that very candid conversation with the patient that helps build that initial trust.
LEE: So, Zak, as far as I can tell, Daniel and Morgan are figuring this out on their own as medical students. I don’t think this is part of the curriculum. Does it need to be?
KOHANE: It’s missing the bigger point. The incentives and economic forces are such that even if you were Daniel, and things have not changed in terms of incentives, and it’s 2030, he still has to see this many patients in an hour.
And sitting down, going over that with a patient, let’s say some might need more … in fact, I think computer scientists are enriched for this sort of neurotic “explain [to] me why this works,” when often the answer is, “I have no idea; empirically it does.”
And patients in some sense deserve that conversation, and we’re taught about joint decision making, but in practice, there’s a lot of skills that are deployed to actually deflect so that you can get through the appointment and see enough patients per hour.
And that’s why I think that one of the central … another task for AI is how to engage with patients to actually explain to them why their doctor is doing what he’s doing and perhaps ask the one or two questions that you should be asking the doctor in order to reassure you that they’re doing the right thing.
LEE: Yeah.
KOHANE: I just … right now, we are going to have less doctor time, not more doctor time.
And so I’ve always been struck by the divide between medicine that we’re taught as it should be practiced as a gentle person’s vocation or sport as opposed to assembly line, heads down “you’ve got to see those patients by the end of the day” because, otherwise, you haven’t seen all the patients at the end of the day.
LEE: Yeah. Carey, I’ve been dying to ask you this, and I have not asked you this before. When you go see a doctor, are you coming in armed with ChatGPT information?
GOLDBERG: I haven’t needed to yet, but I certainly would. And also my reaction to the medical student description was, I think we need to distinguish between the last 20 years, when patients would come in armed with Google, and what they’re coming in with now because at least the experiences that I’ve witnessed, it is miles better to have gone back and forth with GPT-4 than with, you know, dredging what you can from Google. And so I think we should make that distinction.
And also, the other thing that most interested me was this question for medical students of whether they should not use AI for a while so that they can learn …
LEE: Yes.
GOLDBERG: … how to think and similarly maybe don’t use the automated scribes for a while so they can learn how to do a note. And at what point should they then start being able to use AI? And I suspect it’s fairly early on that, in fact, they’re going to be using it so consistently that there’s not that much they need to learn before they start using the tools.
LEE: These two students were incredibly impressive. And so I have wondered, you know, if we got a skewed view of things. I mean, Morgan is, of course, a very, very impressive person. And Daniel was handpicked by the dean of the medical school to be a subject of this interview.
KOHANE: You know, we filter our students, by and large, I mean, there’s exceptions, but students in medical school are so starry eyed. And they are really … they got into medical school—I mean, some of them may have faked it—but a lot of them because they really wanted to do good.
LEE: Right.
KOHANE: And they really wanted to help. And so this is very constant with them. And it’s only when they’re in the machine, past medical school, that they realize, oh my God, this is a very, very different story.
And I can tell you, because I teach a course in computational-enabled medicine, so I get a lot of these nerd medical students, and I’m telling them, “You’re going to experience this. And you’re going to say, ‘I’m not going to be able to change medicine until I get enough cred 10, 15 years from now, whereas I could start my own company and immediately change medicine.’”
And increasingly I’m getting calls in like residency and saying, “Zak, help me. How do I get out of this?”
GOLDBERG: Wow.
KOHANE: And so I think there’s a real disillusionment between what we’re asking of people coming to medical school—we’re looking for a phenotype—and then we’re disappointing them massively, not everywhere, but massively.
And for me, it’s very sad because among our best and brightest, and then because of economics and expectations and the nature of the beast, they’re not getting to enjoy the most precious part of being a doctor, which is that real human connection, and longitudinality, you know, the connection between the same doctor visit after visit, is more and more of a luxury.
LEE: Well, maybe this gets us to the last episode, you know, where I talk to a former, you know, state director of public health, Umair Shah, and with Gianrico Farrugia, who’s the CEO of Mayo Clinic. And I think if there’s one theme that I took away from those conversations is that we’re not thinking broadly enough nor big enough.
And so here’s a little exchange with Umair Shah, who was the former head of public health in the State of Washington and, prior to that, in Harris County, Texas. We had a conversation about what techies tend to focus on when they’re thinking about AI and medicine.
UMAIR SHAH: I think one of the real challenges is that when even tech companies, and you can name all of them, when they look at what they’re doing in the AI space, they gravitate towards healthcare delivery.
LEE: Yes. And in fact, it’s not even delivery. I think techies—I did this, too—tend to gravitate specifically to diagnosis.
LEE: I have been definitely guilty. I think Umair, of course, was speaking as a former frustrated public health official in just thinking about all the other things that are important to maintain a healthy population.
Is there some lesson that we should take away? I think our book also focused a lot on things like diagnosis.
KOHANE: Yeah. Well, first of all, I think we just have to have humility. And I think it’s a really important ingredient. I found myself staring at the increase in lifespan in human beings over the last two centuries and looking for bumps that were attributable.
I’m in medical school. I’ve already made this major commitment. What are the bumps that are attributable to medicine? And there was one bump that was due to vaccines, a small bump. Another small bump that was due to antibiotics. And the rest of it is nutrition, sanitation, yeah, nutrition and sanitation.
And so I think doctors can be incredibly valuable, but not all the time. And we’re spending now one-sixth of our GDP on it. The majority of it is not effectively prolonging life. And so the humility has to be the right medicine at the right time.
But that runs, (A) against a bunch of business models. It runs against the primacy of doctors in healthcare. It was one thing when there were no textbooks and no PubMed; you know, the doctor was probably the repository of all the knowledge that we had. But I think your guests were right. We have to think more broadly in the public health way. How do we make knowledge pervasive, like sanitation?
GOLDBERG: Although I would add that since what we’re talking about is AI, it’s harder to see if … and if what you’re talking about is public health, I mean, it was certainly very important to have good data during the pandemic, for example.
But most of the ways to improve public health, like getting people to stop smoking and eat better and sleep better and exercise more, are not things that AI can help with that much. Whereas diagnosis or trying to improve treatment are places that it could tackle.
And in fact, Peter, I wanted to put you—oh, wait, Zak’s going to say something—but, Peter, I wanted to put you on the spot.
LEE: Yeah.
GOLDBERG: I mean, if you had a medical issue now, and you went to a physician, would you be OK with them not using generative AI?
LEE: I think if it’s a complex or a mysterious case, I would want them to use generative AI. I would want that second opinion on things. And I would personally be using it. If for no other reason than just to understand what the chart is saying.
I don’t see, you know, how or why one wouldn’t do that now.
KOHANE: It’s such a cheap second opinion, and people are making mistakes. And even if there are mistakes on the part of AI, if there’s a collision, discrepancy, that’s worth having a discussion. And again, this is something that we used to do more of when we had more time with the patients; we’d have clinic conferences.
LEE: Yeah.
KOHANE: And we don’t have that now. So I do think that there is a role for AI. But I think again, it’s much more of a continual presence, being part of a continued conversation rather than an oracle.
And I think that’s when you’ll start seeing, when the AI is truly a colleague, and saying, “You know, Zak, that’s the second time you made that mistake. You know, that’s not obesity. That’s the effect of your drugs that you’re giving her. You better back off of it.” And that’s what we need to see happen.
LEE: Well, and for the business of healthcare, that also relates directly to quality scores, which translates into money for healthcare providers.
So the last person that we interviewed was Gianrico Farrugia. And, you know, I was sort of wondering, I was expecting to get a story from a CEO saying, “Oh, my God, this has been so disruptive, incredibly important, meaningful, but wow, what a headache.”
At least Gianrico didn’t expose any of that. Here’s one of the snippets to give you a sense.
GIANRICO FARRUGIA: When generative AI came, for us, it’s like, I wouldn’t say we told you so, but it’s like, ah, there you go. Here’s another tool. This is what we’ve been talking about. Now we can do it even better. Now we can move even faster. Now we can do more for our patients. It truly never was disruptive. It truly immediately became enabling, which is strange, right, because something as disruptive as that instantly became enabling at Mayo Clinic.
LEE: So I tried pretty hard in that interview to get Gianrico to admit that there was a period of headache and disruption here. And he never, ever gave me that. And so I take him at his word.
Zak, maybe I should ask you, what about Harvard and the whole Harvard medical ecosystem?
KOHANE: I would be surprised if there are system-wide measurable gains in health quality right now from AI. And I do have to say that Mayo is one of the most marvelous organizations in terms of team behavior. So if there’s someone who’s gotten the team part of it right, they’ve come the closest, which relates to our prior conversation. They have the quarterback idea …
LEE: Yes.
KOHANE: … pretty well down compared to others.
Nonetheless, I take him at his word that it hasn’t disrupted them. But I have yet to see the evidence that there’s been a quantum leap in quality or efficacy. And I do believe that it’s possible to have a quantum leap in efficacy in the right system.
So if they haven’t been disrupted, I would venture that they’ve absorbed it, but they haven’t used it to its fullest potential. And the way I could be proven wrong is if next year the metrics show that over the last year, they’ve had, you know, decreased readmissions, decreased complications, decreased errors and all that. And if so, God bless them. And we should all be more like Mayo.
LEE: So I thought a little bit about two other quotes from the interviews that sort of maybe would send us off with some more inspirational kind of view of the future. And so there’s one from Bill Gates and one from Gianrico Farrugia. So what I’d like to do is to play both of those and then maybe we can have our last comments.
BILL GATES: You know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.
LEE: And now Gianrico.
GIANRICO FARRUGIA: And we seemed to be on a linear path, which is, let’s try and reduce administrative burden. Let’s try and truly be a companion to a physician or other provider. … And then in the next step, we keep going until we get to, now we can call it agentic AI, whatever we want to talk about. And my view was, no, let’s start with that aim, the last aim … because the others will come automatically if you’re working on that harder problem. Because, one, to get to that harder problem, you’ll find all the other solutions.
LEE: All right. I think these are both kind of calls to be more assertive about this and more forward leaning. I think two years into the GPT-4 era, those are pretty significant and pretty optimistic calls to action. So maybe just to give you both one last word. What would be one hope that you would have for the world of healthcare and medicine two years from now?
KOHANE: I would hope for businesses that whoever actually owns them at some holding company level, regardless of who owns them, are truly patient-focused companies, companies where the whole AI is about improving your care, and it’s only trying to maximize your care and it doesn’t care about resource limitations.
And as I was listening to Bill, the problem with what he was saying about saving dollars for governments is that for many things, we have some very expensive things that work. And if the AI says, “This is the best thing,” it’s going to break your bank. And instead, because of resource limitations, we do some human-based fancy footwork to get out of it.
That’s a hard game to play, and I leave it to the politicians and the public health officials who have to do those trades of utilities.
In my role as doctor and patient, I’d like to see very informed, authoritative agents acting only on our behalf so that when we go and we seek to have our maladies addressed, the only issue is, what’s the best and right thing for me now? And I think that is both technically realizable. And even in our weird system, there are business plans that will work that can achieve that. That’s my hope for two years from now.
LEE: Yeah, fantastic. Carey.
GOLDBERG: Yeah. I second that so enthusiastically. And I think, you know, we have this very glass half full/glass half empty phenomenon two years after the book came out.
And it’s certainly very nice to see, you know, new approaches to administrative complexity and to prior authorization and all kinds of ways to make physicians’ lives easier. But really what we all care about is our own health and that we would like to be able to optimize the use of this truly glorious technological achievement to be able to live longer and better lives. And I think what Zak just described is the most logical way to do that.
[TRANSITION MUSIC]
LEE: Yeah, I think for me, two years from now, I would like to see all of this digital data that’s been so painful, such a burden on every doctor and nurse to record, actually amount to something meaningful in the care of patients. And I think it’s possible.
KOHANE: Amen.
GOLDBERG: Yeah.
LEE: All right, so it’s been quite a journey. We were joking before that we’re still on speaking terms after having written a book. [LAUGHS]
And then, um, I think listeners might enjoy knowing that we debated amongst ourselves what to do about a second edition, which seemed too painful to me, and so I suggested the podcast, which seemed too painful to the two of you [LAUGHTER]. And in the end, I don’t know what would have been easier, writing a book or doing this podcast series, but I do think that we learned a lot.
Now, the last bit of business here. To pull off this podcast without the three of us having to write a book again, I leaned on the production team in Microsoft Research and the Microsoft Research Podcast. And I thought it would be good to give an explicit acknowledgment to all the people who’ve contributed to this.
So it’s a long list of names. I’m going to read through them all. And then I suggest that we all give them a round of applause [LAUGHTER]. And so here we go.
There’s Neeltje Berger, Tetiana Bukhinska, David Celis Garcia, Matt Corwine, Jeremy Crawford, Kristina Dodge, Chris Duryee, Ben Ericson, Kate Forster, Katy Halliday, Alyssa Hughes, Jake Knapp, Weishung Liu, Matt McGinley, Jeremy Mashburn, Amanda Melfi, Wil Morrill, Joe Plummer, Brenda Potts, Lindsay Shanahan, Sarah Sobolewski, David Sullivan, Stephen Sullivan, Amber Tingle, Caitlyn Treanor, Craig Tuschhoff, Sarah Wang, and Katie Zoller.
Really a great team effort, and they made it super easy for us.
GOLDBERG: Thank you. Thank you. Thank you.
KOHANE: Thank you. Thank you.
GOLDBERG: Thank you.
[THEME MUSIC]
LEE: A big thank you again to all of our guests for the work they do and the time and expertise they shared with us.
And, last but not least, to our listeners, thank you for joining us. We hope you enjoyed it and learned as much as we did. If you want to go back and catch up on any episodes you may have missed or to listen to any again, you can visit aka.ms/AIrevolutionPodcast.
Until next time.
[MUSIC FADES]