ChatGPT: Friend Or Foe?

By

Leonard Zwelling

https://chat.openai.com/chat

I asked ChatGPT, the interactive artificial intelligence program (see link above, try it yourself) that does what you ask it to do, to write a 750-word blog about ChatGPT. I asked it twice. Both times it stopped before finishing because the text it had elected to write ran longer than 750 words, and it does what you ask it to do. Exactly. Worse, I would not have put my name on either version. Boring!

A lot of people in education are very worried that students will employ ChatGPT to write their term papers. No-duh! Of course, they will. My tenth-grade Citizens’ Education teacher, Mrs. Guido, always said before she handed out a test, “It’s not that I think you’ll cheat. I know damned well you will.”

What is a teacher going to do to assess her students’ grasp of the material in a course?

I have an idea. Give students their tests in the classroom (enough with this Zoom nonsense) instead of letting them write essays at home, and proctor those tests. It is not as if ChatGPT is the first tool used to cheat. Crib sheets have a long and storied history, and in the age of the internet, students have been conversing with one another about assignments all along.

Both in high school and in college, I filled blue books during final exams with lots of words that were all my own, for better or worse. What’s wrong with blue books? Of course, that assumes students can still write with a pen and no keyboard.

Artificial intelligence is also likely to change the face of medicine. The electronic medical record should ensure that any physician, even a new one, can gather all the appropriate data, ask all the appropriate questions, do every part of the physical exam (yes, there are still doctors who do one BEFORE ordering a total body MRI), and get all the right lab tests. Those data will be fed to the AI doctor in the house, and out will come a list of likely diagnoses and how to rule them in or out. As an old physician, I would not feel threatened by AI if I were still caring for patients. Why? Because AI is likely to generate a more complete assessment of any patient and do it faster. That gives the doctor time to be the doctor and actually interact with the patient, since a technician can load the data into the AI brain and retrieve the printout.

What may well happen is that a different kind of person will gravitate toward medicine. AI can out-diagnose House, MD. It will hear the hoof beats and list the horses and the zebras. It will also tell you how to sort the horses from the zebras and what to do once you know which animal a patient’s disease is. But that will be no substitute for true patient care, which requires talking, listening, touching, and empathy. Those things AI cannot do. Only people can, and it will be people who are good at those things who will be your doctors of tomorrow.

Teachers and leaders of American medicine are going to have to contend with the reality of machine learning, machine teaching, and machine diagnosing. Instruction will instead have to focus on understanding concepts in the classroom, along with abstract, logical thinking and interpersonal interaction.

Doctors will have to hone their people skills as machines read slides, analyze blood, and even assist in surgery.

ChatGPT is a powerful tool, but it is no substitute for human creativity. It may well replace the ace diagnostician in medicine, but it will never replace bedside manner. As someone who is a patient more frequently now than ever, I really appreciate smart doctors who use smart machines to care for me, but who can look me in the eye and deliver the news, bad or good.

AI does not make us less human. It means we will have to be more human to be effective because the data-driven stuff can be done for us. Personally, I think it’s great!
