5 Ways To Ensure AI Unlocks Its Full Potential To Serve Humanity

By Will Byrne

May 10, 2018
 

We’re in a defining moment for the intelligent age. Artificial intelligence will be the great general-purpose technological advance of our time. Born of advances in computing power and an explosion of data, AI will meaningfully impact every major industry, expanding the global economy by trillions of dollars. More importantly, AI has the potential to drive great leaps forward in solving our civilization’s greatest challenges, from climate change to public health to armed conflict to poverty.

 

And yet the field finds itself in a precarious position. Issues of bias, civil rights, workforce displacement, and even the power and perception of Silicon Valley’s largest tech firms–all on display in Facebook’s recent public scolding in Washington–threaten AI’s ability to thrive. Without steps to address these issues, the budding field could spend years held back from its transformative potential, functioning as a mere business-efficiency tool instead of a catalyst for solving some of the world’s most intractable problems.

Here are five steps we can take now to ensure AI achieves the heights it should, and serves humanity in the process.


Build AI to amplify, not replace

Machines are already driving trucks, flipping burgers, and beating world chess champions. As automation gets more powerful, some jobs will become obsolete, and wages in certain industries will likely drop. Preparing for these changes through education and retraining, along with investments and innovations in the social safety net, will be paramount.

But AI’s greatest impact in industry will come when it is paired with human capabilities, not when it replaces them. According to a report by Danny Chui of the McKinsey Global Institute, there are very few occupations that AI will replace outright, at least in the next decade.

The reason lies in the complementary natures of silicon and human intelligence. While AI can already perform some tasks better than humans, it lacks common sense and intuition, and it won’t be gaining either of those anytime soon. As Ernest Davis, a scientist at NYU, recently put it: “While AI can trounce the best chess or Jeopardy player … it’s still light years behind the average 7-year-old in terms of vision, language, and intuition on how the physical world works.”

AI’s ability to input, associate, and recall information transcends what people can do. But a person’s ability to use this information to reason, evaluate, and strategize far exceeds the capabilities of any machine. These two powers combine to make for a potent force.

 

Still, many companies with an eye for fast profits will focus their AI investments on replacing workers, hoping to automate away labor costs. This isn’t just bad for the workforce; it’s bad business strategy, because it misses AI’s real potential.

Dr. Radhika Dirks, CEO of XLabs, a technology company building “moonshot” projects in AI, is bullish on the point. The biggest opportunity in AI is “not machines that think like us or do what we do, but machines that think in ways we cannot conceive and do what we cannot.” While collaborating with non-human intelligence is hard for postmodern people like us to grasp, she says, our forebears would have understood it just fine. “They worked with companions that could see more, think different, and sense via unknown modes. They knew that paying attention to a dog or a horse in the wild could mean the difference between life and death.” Ultimately, she says, “companies that do not seek to amplify their human intelligence with diverse automated intelligences will be left behind.”

So what does this “amplification” look like now? In breast cancer screening, PathAI has developed a computer vision model that has improved biopsy accuracy from 85% to 99.5%. This has translated to 68,000 to 130,000 more women receiving accurate diagnoses per year.

These systems aren’t doing this by replacing doctors, but by arming them with better information, faster. The average pathologist sees 50 slides per patient, each containing hundreds of thousands of cells. While assessment of every cell is a near-impossible, time-intensive task for a human, AI can detect abnormalities and present them for review in minutes. Pathologists then apply their experience, judgment, and holistic knowledge of the patient to assess each case.
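To make the division of labor concrete, here is a minimal, hypothetical sketch in Python (not PathAI’s actual system) of the screening pattern described above: a model scores every cell, and only the likely abnormalities are queued for the pathologist’s judgment. The threshold and data are illustrative assumptions.

```python
# Hypothetical sketch of the human-in-the-loop pattern: a model pre-screens
# every cell and only surfaces likely abnormalities, leaving the final call
# to the pathologist.
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    slide_id: int
    abnormality_score: float  # assumed output of a trained vision model

def flag_for_review(cells: List[Cell], threshold: float = 0.8) -> List[Cell]:
    # Everything below the threshold is screened out; everything above it
    # is queued for a human expert rather than acted on automatically.
    return [c for c in cells if c.abnormality_score >= threshold]

# Toy data standing in for model output on two slides.
cells = [Cell(1, 0.95), Cell(1, 0.12), Cell(2, 0.88), Cell(2, 0.05)]
for cell in flag_for_review(cells):
    print(f"Slide {cell.slide_id}: cell flagged for pathologist review")
```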

But what about front-line workers? It’s intuitive that doctors won’t be replaced by machines, but those in customer service or handling products on the manufacturing floor seem more exposed. Even in this part of the economy’s value chain, AI can be most impactful when complementing people’s natural abilities, not supplanting them.

Dennis Mortensen, an AI entrepreneur, points to today’s salesperson. The most repetitive components of sales–researching prospects, entering data on “leads,” and sending rote follow-up emails–will soon be executed by AI. In Mortensen’s view, AI isn’t just a productivity booster; it’s a liberator, allowing the salesperson to spend their time on the creative, emotional, and empathic components of the job. For this reason, he describes AI as capable of making us “more human.”

 

AI can even help companies more effectively retrain their workforce, a crucial process as tech advances transform the skills needed at work. Walmart, UPS, and the elevator company ThyssenKrupp are a few companies already tapping AI to “upskill” their employees. Elevator technicians are using intelligent augmented-reality headsets–which project schematics and tutorials over the real machinery in front of them–to learn how to service new models while in the field.


Make AI explainable

For any new technology, especially one transforming life as quickly as AI, winning the public’s trust is crucial for liftoff.

This is a major challenge in the field. Many AI systems–in particular “deep learning” models loosely modeled on the neural networks of the human brain–are now so complex that even their designers cannot explain how they arrive at decisions. Compounding this, tech companies like Facebook treat their algorithms as trade secrets and guard them jealously. These factors have given rise to the term “black-box algorithms.”

Opaque AI seems innocuous when recommending what you should watch on Netflix. But AI is now also powering decisions where lives and livelihoods are on the line, across criminal sentencing, loan worthiness, hiring, firing, medical diagnoses, and autonomous transport. If an algorithm denies your family a critical loan, you’re going to want to understand why. This raises a range of questions: How can someone dispute an outcome when there isn’t justification available? How can we fix problems that arise in algorithms if we don’t understand their origin? And who’s accountable when something goes wrong?

One of the world’s largest funders of technology research is taking the problem very seriously: the U.S. Department of Defense. The DOD is investing heavily in explainable AI (XAI) through DARPA, the same R&D agency that birthed the predecessor to the internet. The aim is to produce “glass box” AI models that are interpretable in real time by “humans in the loop,” raising flags when an AI’s performance might be untrustworthy.

Startups are working on it, too. Factmata pairs AI with human insights to help media companies identify and stop the spread of misinformation and hate speech online. Its algorithms are fully laid out for its community, and users are invited to inform the design and correct mistakes as they arise. Dhruv Gulati, Factmata’s CEO, says that “only a totally accountable, explainable algorithm and methodology for validation, which anyone can interrogate and criticize,” is suitable for a company whose purpose is advancing trust online.

 
 

The public sector, wielding the twin tools of vast purchasing power and regulation, is also nudging the field toward greater transparency. After ProPublica asserted that racial bias was lurking in algorithms used in the criminal justice system, New York City passed a first-of-its-kind law to scrutinize the black-box algorithms used across its public services. Similar initiatives are underway in the U.K.

The European Union’s General Data Protection Regulation, which goes into effect later this month, is well known in tech and policy circles for its “Right to Be Forgotten” provision, which lets citizens demand that firms delete the personal data they collect and store on them. Less known is that it also includes a “Right to Explanation,” granting EU citizens the right not to be subject to consequential decisions made solely by automated systems. Firms in violation of this rule could face fines of up to 4% of their annual revenue. While it only applies in the EU, many of the world’s biggest tech firms run the same systems across all their markets, sending the most influential companies scrambling to comply. A recent survey of affected firms showed that fully one-third of them rely on black-box systems. It will be an interesting year.


Diversify AI’s creators

While machine-driven decisions may evoke accuracy, all AI is built upon data fed to it by humans, and AI is only as effective as the data set it is trained on. This data comes from our society, and our society, unfortunately, is rife with bias. Without rigorous review of the training data, algorithms can encode or even compound the prejudices in our culture.

Take the hiring process. If a hiring manager runs a search for “software engineer” on a professional networking platform, she is likely to see a first page of primarily Caucasian men, as they comprise the majority of the field. If she views the profile of one of these candidates, the algorithms underlying the platform interpret this as a signal that this type of candidate is “most relevant” to her, sending those with different backgrounds to the back of the line. As a result, while the hiring manager has no ill intent, and qualified candidates of various backgrounds exist, she may scroll through dozens of profiles without seeing one of them.

Cases like this make it crucial that AI’s designers represent the diversity of perspectives needed to identify and root out bias–diversity across class, gender, and race, as well as expertise.
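To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python (not any real platform’s code) of how a ranker that treats profile views as relevance signals amplifies whatever imbalance already exists in the candidate pool. The group labels, boost size, page size, and session count are all illustrative assumptions.

```python
# Hypothetical candidate pool mirroring the field's current makeup: 70 from
# the majority group "A", 30 from group "B", all equally qualified.
candidates = [{"group": "A", "score": 1.0} for _ in range(70)] + \
             [{"group": "B", "score": 1.0} for _ in range(30)]

def first_page(pool, page_size=10):
    # Rank by learned "relevance"; Python's sort is stable, so ties keep
    # the pool's existing (majority-first) ordering.
    return sorted(pool, key=lambda c: c["score"], reverse=True)[:page_size]

def record_views(pool, viewed, boost=0.1):
    # Each profile view is taken as a signal that candidates "like this one"
    # are relevant, so everyone in the same group gets a score boost.
    for v in viewed:
        for c in pool:
            if c["group"] == v["group"]:
                c["score"] += boost

for _ in range(20):  # twenty recruiter sessions
    record_views(candidates, first_page(candidates))

page = first_page(candidates)
share_b = sum(c["group"] == "B" for c in page) / len(page)
print(f"Group B share of the first page after 20 sessions: {share_b:.0%}")
# Prints 0%: with identical qualifications and no ill intent anywhere,
# the minority group has been pushed off the first page entirely.
```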

But beating bias is only one reason for diversity in AI. It’s also about pointing the field at the full spectrum of problems people face.

When Timnit Gebru, an AI researcher and founder of Black in AI, attended her first AI conference, five of 1,000 attendees were people of color. She described this as “terrifying” because, as she put it, the field “needs people who just have this social sense of how things are.” Between the lines here is an obvious point: White male engineers from privileged socioeconomic backgrounds aren’t likely to build AI to solve problems they’ve never experienced.

Some of the leading minds in AI have joined Gebru in moving the dial. One example: Dr. Fei-Fei Li, a renowned scientist who runs Stanford’s AI Lab, recently launched AI4ALL to bring women and people of color into the field. Her reasoning is fittingly scientific: Research has shown time and again that diverse teams create the most creative solutions to problems. The work to be done is clear: As of 2015, about 70% of computer science majors at Stanford were male.

 

Designing AI to be inclusive isn’t just a high-minded aspiration; it’s a business imperative. When AI is built with underserved groups in mind, it creates both new value and entrepreneurial opportunity.

Take women and money management. Women now control 39% of investable assets in the U.S., yet there is no industry with which women are more dissatisfied than financial services. The historically male-dominated field frames financial choices through the lens of competition, power, and wealth, but women tend to frame investment around longer-term outcomes, caring for loved ones, and security. One startup, Joy, is using AI to bridge this gap, engaging women on the values that undergird their financial goals, then leveraging this information to inform personalized coaching.

Another example comes from healthcare in the developing world. In Rwanda, where the medical system is short on specialists and operational capacity, Babylon Health employed AI to match patients with doctors abroad for remote digital consultations. It also introduced an artificially intelligent triage system to improve urgent-care outcomes. Within the first six months of rollout, over 10% of the Rwandan population had signed up.


Shield AI from the big-tech industrial complex

Another obstacle to AI’s potential is the growing monopoly of the world’s largest tech companies on the burgeoning field’s talent. As Ryan Kottenstette outlined recently in TechCrunch, Apple, Alphabet (Google), Facebook, Amazon, and Microsoft represent just 5% of U.S. GDP, “yet they are rapidly buying up AI companies and directing them to focus on R&D, rather than building AI applications for specific, non-tech industry problems.”

This means the brightest minds working on the greatest general-purpose technology of our time are operating in service of four or five business models, often incremental applications–like ad targeting–that aren’t clearly advancing our society. While some of these firms have their own research units focused on larger-scale problems, it’s a troubling sign.

Compounding this challenge is that AI has characteristics of a “winner takes all” technology. Because AI is powered by data, those companies with the biggest data troves and most users are best positioned to win the arms race. For this reason, Yoshua Bengio, one of the pioneers of deep learning, raised eyebrows recently by advocating for the breakup of big tech, despite having worked in one such firm for much of his career.

Alternatives are rising. OpenAI, a nonprofit with an explicit focus on the “safe development of AI,” has landed some of the most respected scientists in the field, along with billions of dollars from the likes of Elon Musk and LinkedIn’s Reid Hoffman. MIT recently announced a 10-year, $240 million investment in AI research (in partnership with IBM).

 

Point AI at “moonshots” that solve humanity’s biggest problems

Dirks calls AI a “super-accentuating technology,” one that not only spawns whole new fields of human inquiry and industry, but improves upon itself with new discoveries. In other words, each application of AI can make the underlying technology even more powerful.

But realizing this promise is possible only if we stop pointing a transformational technology at derivative business models and incremental problems.

AI’s superpower is processing variables that are too numerous and mutable for a human mind to handle, and identifying patterns to inform action. It’s time we harness this power toward our greatest challenges, not just because it’s right, but because it will capture the imagination of a new generation of entrepreneurs, and elevate the tech to new heights.

Where there is data around a major challenge, AI can unlock solutions, and examples of early promise abound.

AI is in its infancy. We’re at the start of an unprecedented wave of change in our society, economy, and personal lives. But risks have already emerged that could send AI in the wrong direction: bias, lack of explainability, replacement-minded industry, and tech giants that are confining the field’s ingenuity to parochial problems. For the good of society, the economy, and our species, let’s ensure AI is tackling challenges as big as the technology is powerful.


Will Byrne is an Ashoka Fellow, Forbes Under 30 entrepreneur, and strategist in emerging digital technology. He was founding CEO of Groundswell. 

Fast Company
