Computer scientist Cal Newport lifts up the GPT hood, and talks with me about whether it will replace and/or nuke you
Great interview, thank you.
I have to comment as a doctor, because your letter references us several times, and because I naturally feel defensive when this AI stuff starts breathing down our backs. As of right now, in primary care, the juggling of biological, psychological, and social components of someone’s health is just too human of a task. When I coordinate the care of someone with 20+ problems, there are all sorts of levers being pulled in terms of priorities, hierarchical decisions, diagnostic possibilities and treatment recommendations... all delivered with a modicum of charisma and compassion. I think the estimate that 99% of what we do is not diagnostic is a gross underestimation... but it’s true that perhaps 80% of what we do is creative or algorithmic thinking within the boxes of patients’ established diagnoses.
ChatGPT does really suck at citing real sources. It’s one of the biggest Achilles heels, and undermines its reliability and trustworthiness. And trust is at the very foundation of the doctor patient relationship!
Nonetheless, here are a few questions I typed in live over the past 2 weeks, while seeing patients, just to get some quick ideas before double-checking their veracity:
“is hemochromatosis carrier state associated with an increased risk of pancreatic cancer? Estimate the increased risk in percentage / relative risk.”
“please compare and contrast Interstim procedure versus sling procedure for the treatment of urinary incontinence and overactive bladder.”
“can D-mannose cause candida infection in the esophagus?”
“what could cause swelling of the fingernail beds and toenail beds, with tenderness, associated with blood clots and pulmonary embolus?”
“What are some possible diagnoses for a postpartum woman, presenting with Purpura, petechiae, swollen toes, erythematous skin on the toes, thrombocytopenia and history of antiphospholipid antibody syndrome Who is experiencing generalized abdominal pain?”
“How do statin drugs affect macular degeneration? Include at least 5 medical journal article references”
Most of these queries led to helpful ideas and discussions back and forth, except for the last one asking for references, which returned a confabulation of sources that were not real. I think I’m in a very small minority of primary docs experimenting with this, so I’m not a typical case. But thought you would like to see some of the trench work😊
Wow, this was fun. I read Deep Work last year, so it was really exciting to see you interview Newport. I hadn't seen his New Yorker article until you linked to it, and I'm really glad you did. I was struck by (1) how amazingly smart humans have been to figure out the problem solving and ingenuity behind LLMs and ChatGPT and (2) how terrific Newport is at explaining it. I especially love his analogies, like the Plinko one you pulled (shoutout to Kepler, our analogical king).
You noted that one of your favorite approaches is going through the historical development of an idea. I'd love to read another example of this. Do any come to mind, either written by you or someone else? Also, if you have the time/energy to explain your master list, I'm all ears. And I'll be sure to check out the other Newport books you recommended. Thanks again!
Ps - just curious: what was the prompt you used to get the cover illustration?
I tend to agree with Cal. The technology is extraordinary and can be very helpful, but like he says: in the end, it’s just spouting off solutions to complex mathematical equations based on your input.
This will help people with their mundane tasks like sending a message to someone, reminding them of stuff, etc. I think Google is sort of onto this idea with their assistant making reservations for restaurants. Hopefully this will free us from a lot of annoying tasks we tend to spend half our day on in the modern work environment. That is my hope at least, but I tend to be more optimistic when it comes to people predicting “the end of *blank* as we know it” every few years.
A small point: I think the econ-test-passing exercise he's referring to is from Bryan Caplan (a fellow GMU economist of Cowen's). Caplan wrote about it here for those interested in the details:
Regarding bank teller employment, the BLS still has US employment over 300k, though down from 600k in 2010 (according to Vox quoting AEI): https://www.bls.gov/oes/current/oes433071.htm
Automation has indeed historically allowed workers to move to higher value-added (that is, conceptual) activities, but now automation is coming for conceptual and diagnostic activities as well, and we have no precedent for that. Maybe we can all do the deep work; I hope so!
Thanks for sharing this interview.
Just love these interview postings. I now have homework to do, reading the linked articles and books! Thank you for the post!
Thanks for bringing the chill contrarian side of the argument. It's good to have my views challenged.
I feel like I would have sided with the "don't worry about it" group if most of the whistle-blowers were lay-people like myself, but that doesn't seem to be the case. The infamous petition calling for a pause on AI research was littered with AI researchers. One of my favorite interviews on the topic is Lex Fridman's interview of Max Tegmark. Max is vulnerable and a joy to listen to, but specifically, he and Lex do a good job of talking about what meaningful work looks like and how certain aspects are already being replaced.
They make a point I disagree with somewhat, though. They say when technology replaced physical jobs, they were jobs that people didn't want to do, but knowledge jobs are jobs that people find more meaningful or enjoyable. I think of Grapes of Wrath and the Joad family losing their family farm with the advent and proliferation of big agriculture. Maybe society has already taken a hit because we are doing less physical work than we should(?) be doing. I guess sitting in front of a laptop and other people all day doing psychiatry creates some envy in me.
To home in on the medical work, I would say that the figuring involved with diagnosing and prescribing is actually one of the pieces of psychiatry that I find most enticing. I understand that the case management, listening, and reassuring are essential to good health, but they are not what I like doing every day. I know that makes me sound a little bit like a monster. But I do think the large majority of diagnosing and prescribing could be farmed out to an LLM like GPT-4 now or in the near future. I don't think it will be, because no one lobbies like healthcare lobbies, but that is what it is.
As usual, excellent work and great guest (I've read Deep Work and Digital Minimalism). Thanks for taking the time to post on Substack and communicate with the little guys!
David, I have been pondering this topic, and I'm a mix of excitement and fear. My excitement stems from the potential benefits of large language models (LLMs) such as GPT for knowledge workers. These models can reduce the time spent on back-office work such as searching for documents, understanding policies and regulations, and drafting emails. This can give knowledge workers more time to focus on high-value tasks like creative thinking and producing unique content.

However, I have two concerns. First, content creation has become effortless with these models, which can lead to an increase in content on platforms and a decrease in attention from content consumers. Will consumers really adopt a new set of heuristics to filter the content they consume, and what will be considered "trust centers" given that anybody can produce content effortlessly? Second, if knowledge workers start accepting GPT responses as satisficing or "good enough," there is a risk that their roles may be replaced. The best approach would be to use LLMs to increase productivity and leverage the available time to create unique and valuable content.
I have tried it. There is a kind of cleverness, but to be intelligent something has to return novel and disruptive ideas that are based on data and can be argued. It’s not there yet. But even so, the woke are so monotonously on message that ChatGPT can pass for woke if not intelligent.
Great conversation, moving away from hype to focus on the basics. In my opinion, the output generated by large language models should be regarded as a preliminary draft. Creating content from scratch can be difficult and time-consuming, leading to people switching between tasks. I agree with Cal's point that generative AI can be beneficial in reducing context switching. With its ability to generate a first draft effortlessly, the process can become much more efficient and manageable.