Great interview, thank you.
I have to comment as a doctor, because your letter references us several times, and because I naturally feel defensive when this AI stuff starts breathing down our backs. As of right now, in primary care, the juggling of biological, psychological, and social components of someone’s health is just too human of a task. When I coordinate the care of someone with 20+ problems, there are all sorts of levers being pulled in terms of priorities, hierarchical decisions, diagnostic possibilities and treatment recommendations... all delivered with a modicum of charisma and compassion. I think the estimate that 99% of what we do is not diagnostic is a gross underestimation... but it’s true that perhaps 80% of what we do is creative or algorithmic thinking within the boxes of patients’ established diagnoses.
ChatGPT really does suck at citing real sources. It's one of its biggest Achilles' heels, and it undermines its reliability and trustworthiness. And trust is at the very foundation of the doctor-patient relationship!
Nonetheless, here are a few questions I have typed in live during the past two weeks, while seeing patients, just to get some quick ideas before double-checking their veracity:
“is hemochromatosis carrier state associated with an increased risk of pancreatic cancer? Estimate the increased risk in percentage / relative risk.”
“please compare and contrast Interstim procedure versus sling procedure for the treatment of urinary incontinence and overactive bladder.”
“can D-mannose cause candida infection in the esophagus?”
“what could cause swelling of the fingernail beds and toenail beds, with tenderness, associated with blood clots and pulmonary embolus?”
“What are some possible diagnoses for a postpartum woman, presenting with Purpura, petechiae, swollen toes, erythematous skin on the toes, thrombocytopenia and history of antiphospholipid antibody syndrome Who is experiencing generalized abdominal pain?”
“How do statin drugs affect macular degeneration? Include at least 5 medical journal article references”
Most of these queries led to helpful ideas and back-and-forth discussion, except for the last one asking for references, which produced a confabulation of sources that were not real. I think I'm in a very small minority of primary docs experimenting with this, so I'm not a typical case. But I thought you would like to see some of the trench work 😊
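For anyone curious what this kind of quick-ideas query looks like outside the chat window, here is a minimal sketch of sending one of the prompts above programmatically. It assumes the openai Python package (the v1-style client) and an API key in the environment; the model name is only an example, and this is an illustration rather than a description of Ryan's actual workflow. Whatever the tooling, the lesson from the last query holds: any citations the model returns still have to be verified against a real source.

```python
# A minimal sketch (assumed: openai>=1.0 Python client, OPENAI_API_KEY set in the environment).
# It sends one of the queries above and prints the reply; any references in the answer
# still need to be checked against PubMed or another real source before being trusted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "Is hemochromatosis carrier state associated with an increased risk of "
    "pancreatic cancer? Estimate the increased risk in percentage / relative risk."
)

response = client.chat.completions.create(
    model="gpt-4",  # example model name; substitute whatever is available
    messages=[
        {
            "role": "system",
            "content": "You are assisting a primary care physician. "
                       "Flag uncertainty and do not invent citations.",
        },
        {"role": "user", "content": question},
    ],
    temperature=0.2,  # keep the brainstorming relatively conservative
)

print(response.choices[0].message.content)
```

The system message is only a nudge; it does not prevent fabricated references, which is why the double-checking step described above remains the real safeguard.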
This is so cool, Ryan, thanks for sharing! ...and I often find myself referencing or at least thinking of healthcare for a few reasons. First, it's interesting. Second, it's one of the few uniformly respected professions. Third, absolutely everyone interacts with it. Fourth, I think it's the biggest sector in our economy, and growing. ...Plus I think in some ways it's representative to people of a very complex domain that has to combine cutting-edge technology with human judgment every second of the day. So, yeah, I just find it interesting! ...Of course I have no idea how ChatGPT or other AI will impact healthcare, but do you think that idea of allowing doctors to shift even more to strategy is realistic? (Although, as you note, that's a large degree of what's going on already.) A little while ago I saw a presentation about some medical imaging AI, and the speaker was touting how sensitive it was, and one of my thoughts was that it might lead to a lot of over-treatment that might not make sense in patients' lives, so in some cases we'd really need doctors helping people understand when intense treatment doesn't make sense. And I know doctors already do that, but I wonder if it becomes a much, much more prevalent need. Just thinking out loud.
More importantly, thank you so much for sharing these queries!! I've really valued your comments, and it's so neat to see some queries from the trenches. Would love to keep hearing about your experimentation.
Hi David, thanks for this thoughtful reply as always! I discovered your writing when you joined Substack, and it continues to resonate with me as a generalist physician, and, like you said, there's a lot of fertile ground to consider in the realm of healthcare!
I think all of us fear becoming obsolete professionally, or even worse as Yuval Noah Harris writes, joining a future class of “useless” humans.
In the time horizon that I hope to continue practicing, which is probably the next 15 to 20 years (since my Substack is currently generating a generous amount of beer money 😏), I don't see Drs. being replaced by this artificial intelligence, but rather using it as a tool to wield: being able to ask the right questions, and being able to filter, present, and counsel about the results. Fortunately, the human brain is just an incredible machine, and human intelligence is so difficult to recreate, with its emotional, rational, and creative inputs and outputs.
By the time most doctors are replaced, society will have had some major restructuring, along the lines of WALL-E, I'm guessing 🤖
Autocorrect: Yuval Noah Harari!
Good lord. That’s the kind of dumb algorithm stuff that makes me frustrated (and secretly feel superior to the machines!)
haha...I think Yuval Noah Harris sounds like a great pro wrestler name;) As usual, your points seem eminently sensible to me. ...Your comments also made me think a bit about track coaches. (Just given my own track background, my brain often looks for analogies there.) For many events, you don't even need to spend much time on the internet to be able to get and understand the training regimens of high performers at all levels. There really aren't many, or perhaps any, secrets in terms of specific training tactics. And yet, I think the role of the coach, who has a strategic view of how those tactics can be brought to bear with each unique individual, is so crucial that almost no one succeeds without one. I guess I'm being redundant to my earlier point...I have more concerns than I think Cal does about the impact (and speed of rollout) of ChatGPT, but part of me feels like even if, say, diagnosis were eventually completely automated, does the doctor just get to spend more time being strategic with each patient? Yeah, I'm being redundant. Anyway, I appreciate this exchange.
Wow, this was fun. I read Deep Work last year, so it was really exciting to see you interview Newport. I hadn't seen his New Yorker article until you linked to it, and I'm really glad you did. I was struck by (1) how amazingly smart humans have been to figure out the problem solving and ingenuity behind LLMs and ChatGPT and (2) how terrific Newport is at explaining it. I especially love his analogies, like the Plinko one you pulled (shoutout to Kepler, our analogical king).
You noted that one of your favorite approaches is going through the historical development of an idea. I'd love to read another example of this. Do any come to mind, either written by you or by someone else? Also, if you have the time/energy to explain your master list, I'm all ears. And I'll be sure to check out the other Newport books you recommended. Thanks again!
Ps - just curious: what was the prompt you used to get the cover illustration?
Neural networks in general come from analogical thinking!
In terms of that approach, honestly I usually have to cobble together sources to get it. But Isaac Asimov did this in a bunch of his nonfiction, like The Search for the Elements and A Short History of Chemistry. A book I read recently called A Journey of the Mind did some of this. I have a sort of textbook called Listen about music that does this, and there's an element of it in this really cool book called Art: The Whole Story. ...I'm not doing a great job. I'll try to come up with some others!
Master thought list is still coming;)
The prompt was: "An illustration for an article that looks inside the mind of ChatGPT, use many layers to represent the neural network" ....I tried a number of other things before this. Have you tried Midjourney? Happy to take tips!
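As an aside for anyone who would rather script this than use the chat interface: a prompt like that can also be sent to an image endpoint programmatically. The sketch below assumes the openai Python package and its DALL·E 3 image endpoint (Midjourney, as far as I know, runs through Discord rather than an API like this); the model name and size are just example parameters.

```python
# Minimal sketch (assumed: openai>=1.0 Python client, OPENAI_API_KEY in the environment).
# Sends the cover-illustration prompt to an image model and prints a temporary URL
# for the generated image.
from openai import OpenAI

client = OpenAI()

prompt = (
    "An illustration for an article that looks inside the mind of ChatGPT, "
    "use many layers to represent the neural network"
)

result = client.images.generate(
    model="dall-e-3",   # example model name
    prompt=prompt,
    n=1,                # DALL-E 3 generates one image per request
    size="1024x1024",   # example size; other sizes exist
)

print(result.data[0].url)
```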
Very cool. I'll check those out. I find that a piece of information sticks so much better when I can put it in context, and my brain seems to like chronological contexts as anchors. I just spent some time in Florence looking at Renaissance art, and I found that I could appreciate it so much more in the museums that positioned it in chronological order where I could compare it to the art that had come right before. Human progress leaps off the page that way. So I'm especially excited to look into Art: The Whole Story.
I haven't tried Midjourney before, but I'll have some fun with it now.
Before I even saw this comment, I was about to come back and say that I see this in museum exhibits sometimes, and it's almost unbelievable how a lightbulb can come on once I get a sort of anchoring in how ideas were in conversation with one another. What flowed to and from what, and why. On a personal level, the biggest perk of reporting Range for me was that doing some of the background reporting on art and music to write about Van Gogh or Venice or jazz gave me an anchor point that totally changed my experience of a museum or concert, because I have some reference for the progression of ideas. Reading about 17th and 18th century Venice was fascinating...it'd be like: "... the harpsichord doesn't really have dynamics, but then the piano was created and led people to try X, Y, and Z." Just sort of getting some understanding of how new creations or inventions are responding to old ones, and what people then do with them. You know what I'm saying....I think if I had to develop a principle for some kind of curriculum, it might try to incorporate teaching the progression of ideas, because that frame has also helped me see how different disciplines influence and speak to one another. I should probably figure out a more eloquent version of this idea, huh?
I could not agree more. With regards to Renaissance art, I was so much more impressed by it when I could see the development from the Gothic art that came right before it. (My hottest take that I have no right to give: Gothic artists just weren't very good at painting.) This also happened for me at the City of Oslo museum in Norway. This might sound strange, but they had an exhibit that was the history of the development of kitchens. It sounded very boring, but they said it was so popular that what had started as a temporary exhibit had been made a permanent one. And once I was in the room, it clicked. They had six kitchens lined up in a row, one each from medieval times, the 1600s, the 1700s, and the 1800s, plus two more recent ones. Now I saw what a big advance it was to have a stove (and not have to cook in a fire pit on the ground) or an airshaft (that gave a way for smoke to get out). It was incredible. It also reminds me of this tweet with the development of light technology: https://twitter.com/waitbutwhy/status/1586444528814329856?lang=ca
So, yeah, I totally agree. I'm teaching math this year to a track of kids with a record of low math performance, and I find that they are so much more interested in it when I can help them see each new day's math concept as a new struggle that mathematicians had to figure out. They use their prior knowledge, and it makes every day a fun game to see if they can figure out what the greatest minds of mathematics have done. That way, my reveal of that day's concept becomes a fun aha moment (not quite an Aha-Erlebnis, but still satisfying ;).
By the way, I think you should totally come up with a more eloquent version of it, and I'll call it Epstein's Law. (I think I asked you once before what would be Epstein's Law, so maybe this one will have to be "Epstein's Other Law.")
I tend to agree with Cal. The technology is extraordinary and can be very helpful, but like he says: in the end, it’s just spouting off solutions to complex mathematical equations based on your input.
This will help people with their mundane tasks like sending a message to someone, reminding you of stuff, etc. I think Google is sort of onto this idea with their assistant making reservations for restaurants. Hopefully this will free us from a lot of annoying tasks we tend to spend half our day on in the modern work environment. That is my hope at least, but I tend to be more optimistic when it comes to people predicting "the end of *blank* as we know it" every few years.
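To make the point above about "spouting off solutions to complex mathematical equations" a little more concrete, here is a toy sketch of the final step a language model actually performs: turning a vector of scores (logits) over candidate next words into a probability distribution and sampling one. The vocabulary and numbers are invented purely for illustration; real models do this over tens of thousands of tokens, once for every token they generate.

```python
# Toy illustration of next-token sampling: the "vocabulary" and scores are invented,
# but the math (softmax over logits, then sampling) is the step a real model repeats
# for every token it emits.
import math
import random

vocabulary = ["the", "doctor", "patient", "diagnosis", "banana"]
logits = [2.1, 1.3, 0.9, 0.4, -3.0]  # made-up scores the network assigned to each word

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature sharpens the distribution."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits, temperature=0.8)
next_word = random.choices(vocabulary, weights=probs, k=1)[0]

for word, p in zip(vocabulary, probs):
    print(f"{word:10s} {p:.3f}")
print("sampled next word:", next_word)
```

The impressive part of a real model lies in how the logits are produced by a network trained on enormous amounts of text, but the mechanical step shown here really is just arithmetic over the input.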
A small point: I think the econ-test-passing exercise he's referring to is from Bryan Caplan (a fellow GMU economist to Cowen). Caplan wrote about it here for those interested in the details:
https://open.substack.com/pub/betonit/p/gpt-4-takes-a-new-midterm-and-gets?utm_source=share&utm_medium=android
Regarding bank teller employment, the BLS still has US employment at over 300k, though that's down from 600k in 2010 (according to Vox, quoting AEI): https://www.bls.gov/oes/current/oes433071.htm
Automation has indeed historically allowed workers to do more value-added (that is, conceptual) activities, but now automation is coming for conceptual and diagnostic activities as well, and we have no precedent for that. Maybe we can all do the deep work. I hope so!
Thanks for sharing this interview.
The comment section on this newsletter continues to delight me. Thanks so much for sharing this link. I've read some of Caplan's work, but hadn't seen this. ...Regarding automation generally, I agree we're dealing with something without precedent. I think I'm definitely more concerned than Cal is (although if I were forced to bet, I'd certainly take his opinion over mine), but I have some big pockets of optimism. I'm curious to see what happens in the short term with graphic designers, for instance. The graphic I created above took a few minutes total, but that sort of illustration would've been a fairly expensive custom job not that long ago. I'd be curious to hear your thoughts!
Just love these interview postings. I now have homework to do reading the links to articles and books! Thank you for the post!
Steve, so glad to hear that! I've been doing more and more Q&As as this newsletter goes on, so good to know if people like them...especially as they tend to be quite a bit longer than my non-Q&A posts.
Thanks for bringing the chill contrarian side of the argument. It's good to have my views challenged.
I feel like I would have sided with the "don't worry about it" group if most of the whistle-blowers were lay people like myself, but that doesn't seem to be the case. The infamous petition calling for a pause on AI research was littered with AI researchers. One of my favorite interviews on the topic is Lex Fridman's interview of Max Tegmark. Max is vulnerable and a joy to listen to, but more specifically, he and Lex do a good job of talking about what meaningful work looks like and how certain aspects are already being replaced.
They make a point I disagree with somewhat, though. They say that when technology replaced physical jobs, they were jobs that people didn't want to do, but knowledge jobs are jobs that people find more meaningful or enjoyable. I think of The Grapes of Wrath and the Joad family losing their family farm with the advent and proliferation of big agriculture. Maybe society has already taken a hit because we are doing less physical work than we should(?) be doing. I guess sitting in front of a laptop and other people all day doing psychiatry creates some envy in me.
To home in on the medical work, I would say that the figuring involved with diagnosing and prescribing is actually one of the pieces of psychiatry that I find most enticing. I understand that the case management, listening, and reassuring are essential to good health, but they are not what I like doing every day. I know that makes me sound a little bit like a monster. But I do think the large majority of diagnosing and prescribing could be farmed out to an LLM like GPT-4, now or in the near future. I don't think it will be, because no one lobbies like healthcare lobbies, but that is what it is.
As usual, excellent work and great guest (I've read Deep Work and Digital Minimalism). Thanks for taking the time to post on Substack and communicate with the little guys!
Hi Paul. First of all, interacting with people who leave comments has been one of the perks of having this newsletter. Just going by internet comment sections in general, I didn't anticipate how thoughtful and interesting this one would turn out to be.
As far as the "don't worry about it" group, I should say that I'm more concerned than Cal seems to be. (If I were forced to place bets, I'd certainly trust his judgment over mine...but still I'm concerned.) I think you're very right about physical jobs. I don't really think the dichotomy of knowledge work vs. physical work equates to meaningful vs. meaningless work at all. And I think some people who do knowledge work find more meaning in hobbies that have a major physical component. Maybe we should all be reading some Robert Pirsig. Personally, I think if I were a pro athlete I'd need some less-physical hobbies, and as a writer I find that I need physical hobbies. (I have so many of my ideas while running that sometimes I worry if I won't be able to write if I get injured!) I'm just thinking out loud, as usual...
I don't think your point about diagnosing and prescribing makes you sound like a monster even one bit. ...This is tangential, at best, but you just reminded me of it: when I was an undergrad I worked in a lab studying urban air pollution. At some point I realized that the principal investigator wasn't necessarily fired up about air pollution specifically, but rather he really liked solving puzzles. (Like: where are these kids getting all this manganese exposure from?) And urban air pollution happened to offer puzzles for which he could get grants. As far as I'm concerned, the fact that his powerful brain could be engaged with puzzles that happened to benefit others was a win for everyone.
David, I have been pondering this topic, and I'm a mix of excitement and fear. My excitement stems from the potential benefits of large language models (LLMs) such as GPT for knowledge workers. These models can reduce the time spent on back-office work such as searching for documents, understanding policies and regulations, and drafting emails. This can give knowledge workers more time to focus on high-value tasks like creative thinking and producing unique content. However, I have two concerns. First, content creation has become effortless with these models, which can lead to an increase in content on platforms and a decrease in attention from content consumers. Will consumers really adopt a new set of heuristics to filter the content they consume, and what will be considered "trust centers," given that anybody can produce content effortlessly? Second, if knowledge workers start accepting GPT responses as satisficing or "good enough," there is a risk that their roles may be replaced. The best approach would be to use LLMs to increase productivity and leverage the available time to create unique and valuable content.
Rajesh, this is a great comment. Everything you said makes sense to me. And I think that question you raise about a "new set of heuristics" is a really important one. That's an issue I find quite concerning, just given the pace of the rollout. Really appreciate you sharing these thoughts.
I have tried it. There is a kind of cleverness, but to be intelligent something has to return novel and disruptive ideas that are based on data and can be argued. It's not there yet. But even so, the woke are so monotonously on message that ChatGPT can pass for woke, if not intelligent.
Hi Corwin, I haven't queried ChatGPT about social issues at all. I'd be curious to hear what you asked it that led to that conclusion, if you're willing to share.
Note the “typically”: “No, biologically speaking, men cannot get pregnant because they do not have a uterus or the necessary reproductive organs to carry a pregnancy. Only individuals with a uterus and functional reproductive organs, typically females, are capable of pregnancy.”
Great conversation, moving away from hype to focus on the basics. In my opinion, the output generated by large language models should be regarded as a preliminary draft. Creating content from scratch can be difficult and time-consuming, leading people to switch between tasks. I agree with Cal's point that generative AI can be beneficial in reducing context switching. With its ability to generate a first draft effortlessly, the process can become much more efficient and manageable.
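Purely as a hypothetical illustration of that first-draft workflow (again assuming the openai Python package and an API key in the environment; the function name, prompt wording, and model are illustrative inventions, not anything Rajesh or Cal described), a drafting helper might look like the sketch below, with the output treated as raw material to edit rather than something to send as-is.

```python
# Sketch of a "first draft" helper (assumed: openai>=1.0 Python client, OPENAI_API_KEY set).
# The function name, prompt wording, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_email(bullet_points: list[str], tone: str = "friendly and concise") -> str:
    """Return a draft email built from bullet points; a human edits it before sending."""
    prompt = (
        f"Write a {tone} email covering these points:\n"
        + "\n".join(f"- {point}" for point in bullet_points)
        + "\nMark it clearly as a draft."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_email(["reschedule Tuesday's meeting to Thursday", "attach the Q2 report"]))
```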
Rajesh, thanks so much for this comment. I hadn't thought about the reduction of context switching much, but with you and Cal having mentioned it, it's definitely on my mind now. I would say, I'm probably a bit more fearful than Cal is about the repercussions for work, in terms of replacing people faster than they can adapt. Curious to hear where on that spectrum you fall, if you've thought about it much. And I'm not even exactly sure where I fall, but my sense is a little more on the concerned side than Cal (who obviously knows much more than me!).