In our ongoing exploration of linguistic AI, a persistent question for both machine translation and more experimental fields like natural language generation (NLG) is how lifelike the creations of a neural network can be: would any living person actually communicate the way these machines do? But as algorithms like GPT-3 become indiscernible from us in tone and grammar, the question on the horizon is shifting from how convincingly a model can imitate us to how far it can be trusted to tell us things that are true. Few fields carry higher stakes in this respect than medical research, where relying on potentially inconsistent models for insights poses real-world risks. Yet that risk is matched by an immense demand for AI that can generate realistic medical data in the volumes needed to drive discovery, a goal that has become a front-and-center focus for leading technology firms as the pandemic makes outcomes an immediate priority. Through it all, one astonishing finding to emerge from that effort is just how powerful a tool for understanding factual reality language can be on its own, when analyzed through massive probabilistic computation.
This week, shedding light on some of the complexities of how that process is advancing, CSOFT joined DIA Global 2021’s Innovation Theater session Massive Deep Learning Language Models and the Application to Life Sciences, hosted by scientists from Microsoft currently working with the GPT-3 algorithm. Prefacing the innovations that enable GPT-3 to thrive, Principal Data Scientist Mario Inchiosa highlighted transfer learning as a key development: methods applied to large pre-trained neural networks to optimize them for specific domains through active learning (i.e., training guided by human experts). While the experts in this case are working with mathematical representations of language, such as relative sentence position and semantic embeddings, the parallels to machine translation and post-editing are clear in one baseline respect: machines require human input to ensure quality. Likewise, the distribution of human and machine resources is a major consideration in Microsoft’s work, as it is for LSPs. Where the connections grow more abstract and astonishing, however, is in the fact that GPT-3 is already so insightful with language alone that it can often infer and recount pathologies with a level of detail that is both possible and plausible, all without trained domain knowledge.
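The active-learning idea described above can be illustrated with a toy sketch: a model trained on a small labeled seed set repeatedly queries a human expert to label the example it is least certain about, then retrains. This is a minimal, hypothetical illustration in pure Python (a one-feature logistic regression with a simulated expert), not Microsoft's actual pipeline; all names and data here are invented for the example.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    """Fit a one-feature logistic regression by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def most_uncertain(pool, w, b):
    """Select the unlabeled point whose prediction is closest to 0.5."""
    return min(pool, key=lambda x: abs(sigmoid(w * x + b) - 0.5))

def expert_label(x):
    # Stand-in for the human domain expert: true class boundary at x = 0.
    return 1 if x > 0 else 0

random.seed(0)
labeled = [(-2.0, 0), (2.0, 1)]                 # tiny seed set
pool = [random.uniform(-3, 3) for _ in range(50)]  # unlabeled pool

for _ in range(5):                              # five active-learning rounds
    w, b = train(labeled)
    x = most_uncertain(pool, w, b)              # model asks about this point
    pool.remove(x)
    labeled.append((x, expert_label(x)))        # expert annotates the query

w, b = train(labeled)
```

The key design choice is the query strategy: by spending scarce expert time only on the most ambiguous cases, the model concentrates human input where it changes the decision boundary most, which is the same economics of human-machine effort that LSPs weigh in post-editing.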
In a specific scenario introduced by Senior Data and Applied Scientist Robert Horton, Microsoft used the Synthea platform to generate artificial structured facts about medical scenarios – the bare bones of a complete health record entry. Using just these facts and its knowledge of the English language, Microsoft’s general GPT-3 model was able to output a surprising number of plausible, and in many cases entirely convincing, summaries of scenarios that would have produced those facts. Its errors, meanwhile, tend to reflect an overabundance of insight rather than a lack of it: having seen the many ways things can work, the model occasionally makes conjectures that turn out to be wrong, yet are interesting in how close they come to working. From both a life sciences localization perspective and a general machine translation perspective, the ways in which a purely linguistic AI can drive valid simulations of real-world medical scenarios are fascinating. They also validate the crucial role human subject matter experts must continue to play in driving these models’ development and supporting broader innovation.
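The workflow described above amounts to serializing structured record fields into a text prompt that a general language model can continue as a narrative. The sketch below shows the rough shape of such a prompt builder; the field names and values are illustrative inventions, not actual Synthea output, and the prompt wording is an assumption rather than the one Microsoft used.

```python
def facts_to_prompt(record):
    """Render structured fields as bulleted facts followed by an
    instruction, so a completion model can continue with a narrative."""
    lines = [f"- {field}: {value}" for field, value in record.items()]
    return (
        "Patient facts:\n"
        + "\n".join(lines)
        + "\n\nWrite a brief clinical summary consistent with these facts:\n"
    )

# Hypothetical Synthea-style structured entry (fields are illustrative).
record = {
    "age": 64,
    "sex": "F",
    "condition": "Type 2 diabetes mellitus",
    "medication": "metformin 500 mg",
    "encounter": "routine follow-up",
}

prompt = facts_to_prompt(record)
# In the scenario described, a prompt like this would be sent to a
# text-completion model, which continues it with a free-form summary.
```

Because the model sees only the facts and must invent everything connecting them, the quality of the output rests entirely on what it has learned about how such facts co-occur in real medical language, which is exactly where both its plausible summaries and its overconfident conjectures come from.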
CSOFT works closely with technology providers and companies across the life sciences to support the rollout of innovative solutions in new markets through high-quality translation services, delivered by in-country subject matter experts and linguists in over 250 languages. To learn more about our services, please visit csoftintl.com.