AI: Experts Voice Concerns Over Its Ethical Design And Focus On Profits, Social Control
Artificial intelligence can “understand” and shape much of what happens in people’s lives.
AI apps like Amazon Alexa, Apple Siri, and Google Assistant answer questions and converse with the people who call out their names. Navigation apps help people drive from one location to another. Models can also detect fraudulent credit card use and help diagnose cancer.
Still, experts and advocates have voiced concerns about the long-term impact and implications of AI applications as more and more of daily life is automated.
Those concerns led Pew Research Center and Elon University’s Imagining the Internet Center to ask experts where they think efforts at creating ethical artificial intelligence will stand in the year 2030.
The question posed was: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” About 600 technology innovators, developers, business and policy leaders, researchers, and activists responded.
Some 68% said ethical principles focused primarily on the public good will not be employed in most AI systems by 2030. The remainder believed they will.
The rest of the study focused on written responses in which respondents explained their worries and hopes.
The biggest worry is that major “developers and [those deploying] AI are focused on profit-seeking and social control, and there is no consensus about what ethical AI would look like.” Ethical AI is difficult to define and control, respondents said, and “global competition, especially between China and the U.S., will matter more to the development of AI than any ethical issues,” according to the report.
As for hopes, many respondents pointed to the progress being made as AI spreads and demonstrates its value, arguing that societies have always found ways to mitigate the problems that accompany technological evolution.
Respondents also wrestled with the meaning of “beneficence, nonmaleficence, autonomy and justice” when it comes to tech systems.
Some said the question is not whether AI systems alone produce questionable ethical outcomes, but whether AI systems are less biased than the current human systems, with their well-documented biases.
Danah Boyd, founder and president of the Data & Society Research Institute and principal researcher at Microsoft, wrote: “We misunderstand ethics when we think of it as a binary, when we think that things can be ethical or unethical.”
Boyd wrote that most data-driven systems, especially AI, entrench existing structural inequities by using biased training data to build their models. The key, she argued, is to continually identify and combat these biases, which may require the digital equivalent of reparations.
“While most large corporations are willing to talk about fairness and eliminating biases, most are not willing to entertain the idea that they have a responsibility for data justice,” she wrote.
Experts noted that there is a lot at stake. They are concerned that AI systems will be used in ways that affect people’s livelihoods and well-being: jobs, families, and access to essentials such as housing and credit.
One respondent noted: “Rabelais used to say, ‘Science without conscience is the ruin of the soul.’”