What does "a mile wide and an inch deep" mean?

I believe I speak for a great many of us who perpetually feel that "what we know is known to everyone else" (@Daniel Roth), who know a little about many things but not much about any one thing. As a result, we feel we do not really have much to contribute to the world. After all, who would want to listen to a dilettante, if you will?

So, it came as quite a shock to hear Daniel Roth start his miniseries on 'what to write about on LinkedIn' with exactly that turn of phrase. While brands and industries would have a number of core ideas to write about and expound on, what should be the writing strategy for someone who knows many things, but not everything about any one thing? Or, more likely, are we sitting in the Blind quadrant of the Johari Window: knowing things unconsciously, but blind to what we know?

Chess. Cycling. Reading. Writing. Photography. Technology. Social Media. The Command Terminal. Linux. Philosophy. God. The Origins of the Universe. Movies. Ayn Rand. Aesop. Middles. Rum. Happenstance. MBTI. Tarot. You get the picture.

Connecting the Dots. I guess that's where all the little pools of knowledge here and there lead, if you allow them to. Knowing something about a lot of things gives you an insight into how things generally work, giving you the ability to connect.

Is that just being an apologist for the generalist over the specialist? Is there space for generalists in this world, at the professional level?

I have been watching job postings for some time. I see that the requirements for those in the initial years of professional life are specific in nature: do this course, get that certification. As the experience level sought increases, the specializations become more generic.

The second point is this: in a world where technology is morphing at such a rapid pace, where the progress of the last hundred years will be covered in the next ten, where the Singularity (per Ray Kurzweil) could occur one fine day and Artificial Intelligence take over the world and the Universe in quick succession, is it really possible to know everything, or most things, about one thing? Probably yes. But even so, what is the shelf life and relevance of such knowledge?

Constant learning, therefore, could be the key. Constant learning, in areas that really interest you. The 10,000-hour rule exists for mastery in a field. Maybe it has to be amended to read 2,500 hours for working knowledge in each of four different areas.

One major burnout-inducing task all physicians take on is documentation. We have to write progress notes whenever we see a patient in the office or the hospital. In the not-too-distant past, this involved pen and paper. It was quick and relatively painless. To say we wrote concisely is putting it mildly. There were many cryptic abbreviations, but once you got past those, it didn’t take long to get up to speed with the important aspects of the patient’s care.

Today, there are precious few hospitals or offices that use paper for documentation. The electronic health record (EHR) is omnipresent, and with it, we've seen short notes become long notes (i.e., note bloat). It's easy to pull discrete data such as lab results, imaging reports, and even other doctors' notes into the daily progress note. To add to the mayhem, regulators and payers have spent the past few decades demanding more information before agreeing to accept bills, so clinicians have become accustomed to adding the kitchen sink to the note, just in case.

Whether these notes are written in chicken scratch on a piece of paper or typed into a fancy EHR, Dr. Lin posits that the process of generating the progress note is important. It’s often the time that the physician uses to reconsider the history, physical exam findings, lab and imaging results, and other data points before committing to a plan of care. There’s a valid concern that if we hand off the writing of most of the progress note to an AI, we’ll miss out on that essential time to ponder.

The concept that ChatGPT or something like it could listen in to the office visit or hospital rounds and write our notes for us is not some futuristic fantasy. It’s happening now, but we’ve not seen widespread adoption because the tools are not yet ready for primetime. We’re getting there, but it’s taking time. However, with the phenomenal improvement in generative AI in the last six months, physicians are now asking why they can’t have this tool right now.

What about automation complacency? This has been seen in non-healthcare industries for years. Airplane pilots must actively work to keep up their skills so that if/when an emergency happens, they’ll have the ability to take over tasks that are largely handled by computers and their algorithms. It’s easy to think that “I don’t need to know how to do that because the system handles that.” And it’s true … until it isn’t.

We see automation complacency when physicians complain that the EHR’s clinical decision support (CDS) module didn’t remind them to order that lab or not order that expensive medicine. While it’s true that it would be great if CDS were always spot on, that’s not how it works. The computer only “knows” what we tell it, and further, only what we tell it so it can “understand.” If the tobacco history isn’t documented in the tobacco history box, then our fancy tools won’t work as designed.
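To make that concrete, here is a minimal sketch in Python of why a rule engine is blind to anything outside its structured fields. The record structure, field names, and screening rule are all hypothetical, invented for illustration, not taken from any real CDS product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    # Hypothetical structured fields: the only data the CDS rule can "see".
    age: int
    tobacco_use: Optional[bool]  # None if never entered in the tobacco-history box
    free_text_note: str          # narrative text the rule engine ignores entirely

def lung_screening_reminder(p: PatientRecord) -> Optional[str]:
    """Fire a (simplified, hypothetical) screening reminder only when
    the structured data supports it."""
    if p.tobacco_use is True and p.age >= 50:
        return "Consider low-dose CT lung cancer screening."
    return None  # no structured tobacco history -> no reminder

record = PatientRecord(
    age=62,
    tobacco_use=None,  # smoking history never documented in the structured box
    free_text_note="Patient reports smoking a pack a day for 30 years.",
)
print(lung_screening_reminder(record))  # None: the rule never reads the narrative
```

The narrative note plainly says the patient smokes, but the rule inspects only the structured field, so the reminder never fires. That is the gap physicians run into when they expect the CDS to "know" what was only written in free text.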

How do we fight against the tendency to let the computer do the thinking? What steps can we take to make it easy for physicians to use the technology in the most responsible way? I think the answers involve educating physicians on how these AIs work while simultaneously configuring the technology to stay in its lane.

Doctors do not need to become software developers or data scientists, but they do need to learn about the limits of the technology. Many years ago, I spent some time with medical records coders. These folks are not clinicians but have lots of training in medical terminology and the rules for how we assign billing codes to office visits and hospital stays. They would ask me to look at a doctor’s note and then answer questions that they had. For instance, did this surgeon perform an excisional or incisional biopsy? Or do I think the doctor meant anemia from blood loss or iron deficiency?

As I interacted with these coders, I began to forget that they didn't have any actual clinical experience. They never took care of patients; they just read the documentation. Yet they could still go toe-to-toe in the world of "doctor speak." I was reminded that their medical knowledge was a mile wide but only an inch deep when I occasionally engaged them in discussion as if they were a doctor or a nurse. That's when I remembered that, in fact, they weren't clinicians. They could throw the big words around (tachycardia, anyone?) but typically couldn't talk about the physiology or anatomy behind them.

This is what I think physicians need to understand about AI. Right now, its "knowledge" is wide but not deep. The words are there, but the underlying reasoning and rationale aren't, at least not yet. Hence, it's imperative that doctors leverage the tool without overestimating its abilities. Of note, tests now comparing GPT-3.5 to GPT-4 are showing dramatic improvements in "understanding," so perhaps by the time physician training catches up, I'll have to seriously reconsider this position!

We should also consider not allowing an AI to autogenerate the assessment and plan part of the note. Or at the very least, we should ensure that doctors spend much of their time in the note on the assessment and plan. That’s where the value is, and to have an AI create it without significant oversight and editing by the clinician would be a mistake.
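One way to picture that limit, as a sketch rather than any real EHR configuration: a hypothetical Python drafting pipeline in which the AI (stood in for by `ai_draft_fn`, an assumed name) may draft the history and exam sections, while the assessment and plan is deliberately left blank for the clinician:

```python
# Hypothetical section gating: which parts of the note an AI may draft.
# None of these names come from a real EHR; this is an illustration only.
ALLOWED_AI_SECTIONS = ("subjective", "objective")  # AI may draft these
PROTECTED_SECTIONS = ("assessment_and_plan",)      # clinician-only

def draft_note(transcript: str, ai_draft_fn) -> dict:
    """Build a note draft, leaving protected sections empty on purpose."""
    note = {s: ai_draft_fn(transcript, section=s) for s in ALLOWED_AI_SECTIONS}
    for s in PROTECTED_SECTIONS:
        note[s] = ""  # forces the clinician to author the assessment and plan
    return note

# A stub standing in for whatever generative service the EHR would call.
stub = lambda transcript, section: f"[AI draft of the {section} section]"
print(draft_note("visit transcript goes here", stub))
```

The point of the sketch is the guardrail, not the code: the assessment and plan remains a required, human-authored field no matter how good the drafts above it become.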

Dr. Lin and his colleagues are right to worry about physicians becoming overly reliant on technology, whether AI or something else. Yet I think that through education, and by setting limits in the EHR, we can find a sweet spot in how we use these tools to take care of those entrusted to us.