
AI Communication

Patient communication has always been a struggle for healthcare providers, although the patient typically does most of the struggling. Unless you’re a top-tier medical professional with decades of experience, it’s likely that, following a visit to a doctor’s office, you’ve spent at least some time staring at your care summary or discharge papers trying to parse precisely what all of the specialized terms and phrases mean. It seemed so easy when the nurse was summing everything up – here’s your problem, do this, don’t do that – but without the guidance of trained and qualified professionals, everything can get complicated quickly. A call back to the doctor’s office means a lengthy wait on hold; turning to Google may yield unreliable results or community-generated answers. And for many patients there’s no one to turn to at all: no one in their lives who can explain the situation in plain, clear language, especially if the documents aren’t written in a language they’re fluent in.

Enter generative AI. While large language models (LLMs) like GPT-4, delivered through platforms such as Microsoft Azure, have a variety of uses in healthcare, one of the most promising areas for innovation is patient communication. Several industries already use generative AI to create concise summaries of jargon-heavy information, but its use is more limited in the medical field, given the stakes involved. A recent study explores how this technology, in its current state, can serve patients across reading levels and language skills, and it offers a glimpse of a healthier relationship between doctors and patients built on more accessible communication. It also opens a new avenue for language service providers (LSPs) to involve themselves in refining generative AI for professional use in healthcare.


Conducted at NYU Langone Health, the study took 50 patient discharge summaries generated by its healthcare system over a single month and fed them into an LLM hosted on Microsoft Azure. The researchers used a prompt prepared by AI experts and tech-savvy physicians to minimize possible errors. The generated summaries rendered the technical jargon in a form that a patient with a sixth- or seventh-grade reading level could understand, and their quality was assessed both through empirical methods (standardized readability testing) and from more subjective perspectives, such as evaluations by practicing residents. The results were intriguing: when the model worked, it worked well, transforming the source document into a less complex one at the target comprehension level. The researchers also noticed a potential warning sign: the generated summaries were strikingly short compared to the originals, which ran over a thousand words, suggesting that content might be getting lost in the compression.
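To make the setup concrete, here is a minimal Python sketch of the kind of pipeline the study describes: a discharge summary is sent to an Azure-hosted LLM with a simplification prompt, and the output is scored with a standard readability formula. The endpoint, deployment name, and prompt wording below are illustrative assumptions of ours, not the study’s actual configuration.

```python
# Minimal sketch (assumptions noted): simplify a discharge summary with an
# Azure-hosted LLM, then estimate the result's reading level.
import textstat                 # readability metrics (pip install textstat)
from openai import AzureOpenAI  # Azure OpenAI client (pip install openai)

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

# Hypothetical prompt; the study's actual prompt was prepared by AI experts
# and physicians and is not reproduced here.
SIMPLIFY_PROMPT = (
    "Rewrite the following hospital discharge summary at a sixth-grade "
    "reading level. Keep every medication, dosage, and follow-up "
    "instruction, and do not add information that is not in the source."
)

def simplify(discharge_text: str) -> str:
    """Ask the model for a plain-language version of one discharge summary."""
    response = client.chat.completions.create(
        model="YOUR-GPT-DEPLOYMENT",  # placeholder deployment name
        messages=[
            {"role": "system", "content": SIMPLIFY_PROMPT},
            {"role": "user", "content": discharge_text},
        ],
        temperature=0.2,  # keep output conservative to discourage embellishment
    )
    return response.choices[0].message.content

def reading_grade(text: str) -> float:
    """Approximate U.S. school-grade reading level (Flesch-Kincaid)."""
    return textstat.flesch_kincaid_grade(text)

# Usage: simplified = simplify(original_sheet); print(reading_grade(simplified))
```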

Yet, for all their promise, AI-simplified patient discharge sheets may not be headed for the hospital printer anytime soon. A sizable number of the simplified discharge sheets studied contained inaccurately phrased, incomplete, or plainly wrong information, some of it genuinely dangerous to the patient. This can be attributed to several issues, the first of which is the omission of essential data – in trying to simplify the document, the model removed relevant details for the sake of brevity or clarity. Second, the “hallucination” phenomenon inherent to all generative AI platforms in their current state affected roughly 10% of the summaries, with the model generating incorrect or irrelevant information, whether through poor training or a poorly phrased prompt. Finally, there’s the matter of bias. Although the study didn’t observe any in the summaries generated for the test group, it remains a pervasive problem with generative AI, and it’s perhaps better to follow the precautionary principle here, given the stakes involved. The study lays most of the blame on the prompt – generative AI models, after all, are tools, and the tiniest phrasing issues can have the same effect that an imprecise cut on a table’s leg has on its stability: the table may still stand, but it might not hold up when weight is added.
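Failure modes like silent omission also point to guardrails that can be wired in programmatically before a summary ever reaches a patient. The sketch below is our own illustration, not part of the study: a crude check that clinician-flagged essentials (drug names, dosages, follow-up dates) actually survive simplification, routing anything suspicious to a human reviewer.

```python
# Illustrative omission check (not from the study): flag summaries that are
# missing terms a clinician marked as essential.
def missing_essentials(summary: str, essential_terms: list[str]) -> list[str]:
    """Return the essential terms that do not appear in the simplified summary."""
    lowered = summary.lower()
    return [term for term in essential_terms if term.lower() not in lowered]

# Usage: anything returned here should send the summary back for human review.
# gaps = missing_essentials(simplified, ["metformin", "500 mg", "follow up March 14"])
```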


However, the study’s limitations are what matter most for our purposes – the researchers openly acknowledged that all documents, both the source material and the AI-simplified versions, were written in English. Given the relative infancy of generative AI and LLMs, as well as the small sample of discharge sheets used, it’s understandable that the researchers would want to keep their already weighty task manageable. But while this is a good start, it suggests that LSPs like CSOFT have an even more significant role to play in the future of healthcare translation and localization than one might have anticipated. In addition to providing quality control through proofreading and transcreation, LSPs can also be involved in the experimental phase, offering support and expertise to researchers looking to expand their sample sizes to encompass more than one language. There may come a day when generative AI is so advanced that it can rock and sock with the heaviest hitters of language, but we’re still far from anyone delivering a knockout blow – that’s why it’s so important to have a qualified LSP like CSOFT in your corner.
