
As we covered in our previous look at smart technology, AI and big data have a remarkable capacity to escape the biases of human thinking and generate far-reaching insights. Nevertheless, if recent concerns at Google about the diversity of its AI programs are any indication, it is rapidly becoming apparent that AI is not as impervious to the shortcomings of human judgement as we would like to think. On the contrary, AI seems remarkably prone to absorbing the biases and perspectives of its creators, to name just one of many potentially problematic influences that can collectively shape its “character”. Beyond Silicon Valley, little is known about the ways that increasingly autonomous technologies may already be drawing flawed inferences from real-world data that in turn reinforce prevailing biases against specific groups of people and demographics.

It is difficult to overstate how much is at stake in ensuring AI’s fairness. In the first place, common perceptions of AI as a “neutral” form of intelligence suggest that people are likely to treat it as an authority for settling disagreements of opinion. Even where no dispute is involved, Googling something outlandish is essentially today’s way of consulting normality for a second opinion. Although real-world controversies over AI’s authority have yet to materialize, concern is growing that falling afoul of smart technology’s judgement could effectively amount to disenfranchisement in an increasingly digitized society.

The question of AI’s fairness is often framed within a traditional view of consumer rights, which falls short of recognizing the revolutionary impact AI will have on the entire landscape that consumers navigate. While in theory people are free to reject a product that disagrees with them, it is difficult to argue persuasively that opting out of a service as ubiquitous as Google’s is a realistic choice. Moreover, even nonusers participate in these systems as data points. AI’s inner workings are notoriously difficult to grasp, and almost entirely opaque to the vast majority of people who use the services it helps power. In such a vacuum, our eventual reckoning with these questions portends the kind of political controversy that can itself resolve into unfairness toward large groups of people.


What is clear from Google’s efforts to get ahead of biases in its software is the need for technology providers to tackle these concerns proactively, while AI is still in its formative stages. If those with the technical expertise to shape AI’s course cannot effectively address the human side of their responsibility, it is unlikely that others without that expertise will be able to do so later. Humanizing AI will be a challenge not just of technology but of language, bringing unprecedented opportunities to leverage the power of communications.

Google machine-learning researcher Maya Gupta puts the dilemma this way: “Let’s say I want to be equally accurate at identifying a Boston accent and a Texas accent, but I have a speech recognizer that’s a little better at the Texas one. Should I penalize the people with a Texas accent by making the recognition just as bad as it is for Boston, to be fair?” Ensuring AI’s diversity and inclusivity is a matter not simply of charting people’s differing backgrounds, languages, cultures, and nationalities, but of understanding and accommodating them, and of teaching autonomous systems to account for the significance of these differences. Lessons from localization and the challenges of bringing products and services to people across borders will likely prove instrumental in cultivating a social presence for AI that is as positive as it is ubiquitous.
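To make Gupta’s tradeoff concrete, here is a minimal Python sketch that scores a hypothetical speech recognizer separately for two accent groups and measures the gap between them. The toy data, the per_group_accuracy helper, and the idea of treating the accuracy gap as a fairness signal are illustrative assumptions for this post, not a description of Google’s actual methods.

```python
# Illustrative sketch of the fairness tradeoff Gupta describes.
# All data and helper names here are hypothetical, invented for this post.

def per_group_accuracy(predictions, labels, groups):
    """Compute recognition accuracy separately for each accent group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Made-up evaluation results: the recognizer does better on Texas accents.
predictions = ["howdy", "hello", "hi", "hi", "hello", "hi"]
labels      = ["howdy", "hello", "hi", "hi", "hi",    "hi"]
groups      = ["texas", "texas", "texas", "boston", "boston", "boston"]

acc = per_group_accuracy(predictions, labels, groups)
print(acc)  # {'texas': 1.0, 'boston': 0.666...}

# One crude "fairness" target is a small accuracy gap between groups;
# the question Gupta raises is whether closing that gap by degrading
# the better-served group is ever the right call.
gap = max(acc.values()) - min(acc.values())
print(f"accuracy gap: {gap:.2f}")
```

Even this toy example shows the shape of the question: the gap can be closed either by improving the weaker group’s recognition or by degrading the stronger group’s, and only one of those options actually serves users.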

CSOFT International works closely with leading AI technology providers to support the rollout of their products and services in new languages and markets, specifically through AI translations. Our linguistic testing for apps and platforms is a crucial step in ensuring the quality of users’ interactions with digital technologies. We invite you to learn more about our work with enterprise AI providers and how robust linguistic services can help deliver better products for a better world.
