Executives from leading companies share how to achieve responsible AI


By Kolawole Samuel Adebayo

AI continues to dominate the headlines, with new uses popping up in everything from education to healthcare. As the technology continues to proliferate—often faster than government regulations can keep up—it’s increasingly vital that companies embrace the principles of responsible AI; failure to do so can hurt a company’s reputation, expose the organization to costly legal liability, and damage employee morale.

Fortunately for executives, responsible AI—defined by MIT Sloan Management Review as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact”—often means good business. According to a recent study by Accenture, a “small cadre” of companies proactively pursuing responsible AI policies is also generating 50% more revenue growth than its peers, while outperforming on customer experience and on environmental, social, and governance metrics. In other words, adherence to ethical AI principles is usually a win-win for a company’s brand reputation and its profit-and-loss results.

Fast Company spoke with several industry leaders about the importance of responsible AI. Their insights aren’t meant to downplay the many exciting growth opportunities offered by carefully designed and deployed AI solutions; rather, they all sought to call attention to the importance of clearly defined and rigorously implemented internal governance policies. After all, the long-term benefits of AI are profound, both for business and for society at large.

Who’s responsible?

As a leading provider of data-storage platforms for AI solutions, VAST Data enables some highly advanced machine learning systems. As a result, Subramanian Kartik, the company’s VP of systems engineering, has a bird’s-eye view of the industry’s current landscape, including how companies in various sectors are addressing (or failing to address) the responsible AI issues specific to their business models. At the heart of the challenge, Kartik says, lies the issue of explainability—whether humans can decipher how AI solutions arrive at the decisions they make, some of which have profound real-world impacts.

“The emerging field of explainable AI—also called XAI—is the backdrop to responsible AI,” says Kartik, “because at some point you need to be able to explain to a nontechnical human being why an AI system behaves the way it does.” Especially in the case of deep-learning systems that contain many opaque layers of neural networks—sometimes described as a “black box”—determining how an AI reaches its conclusions can be immensely complicated. “You cannot readily understand how a deep-learning model works internally and easily manage it responsibly because it self-tunes. It learns on its own,” Kartik adds.
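Kartik’s point is easier to see with a concrete example. One widely used post-hoc explanation technique, permutation importance, measures how much a model’s accuracy drops when each input feature is scrambled. The sketch below is only an illustration of that general idea, using synthetic data and a generic scikit-learn model rather than anything VAST Data or its customers actually deploy.

```python
# Illustrative sketch of permutation importance, a common post-hoc
# explainability technique. The data and model are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real decision system's inputs.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the larger the drop, the more the model's decisions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: accuracy drop when shuffled = {drop:.3f}")
```

Techniques like this don’t open the black box itself, but they give a nontechnical stakeholder a defensible answer to the question of which inputs drove a given decision.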

But sci-fi fans can breathe a sigh of relief: Kartik thinks the black-box explainability problem will be solved by science; he isn’t concerned about any Skynet-like doomsday scenario.

Taking the bias out of data

“AI bias” is perhaps the greatest barrier to achieving responsible AI. Boon Logic, which provides tools for developing machine learning solutions, helps solve the problems of biased data and lack of explainability with its proprietary Boon Nano algorithm. In the words of Grant Goris, the company’s CEO, the algorithm “starts with a blank slate and finds its own ‘truth.’”

“Given that so much bias is introduced by humans labeling the data, our approach is inherently much less likely to contain bias—unless the unlabeled training data itself contains bias,” says Goris.

Using unsupervised machine learning algorithms, Boon’s system trains on data collected, without human labeling, directly from sources such as industrial machines, cameras, and internet traffic counters. In this way, according to Goris, the data is organized in an unbiased fashion and ultimately presented to a human for interpretation and analysis.
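To make the idea concrete, the sketch below clusters raw, unlabeled sensor readings and flags points that sit far from any learned cluster. It uses ordinary k-means from scikit-learn on invented data; Boon’s proprietary Boon Nano algorithm works differently, so treat this only as a sketch of the general unsupervised, label-free approach.

```python
# Illustrative sketch of unsupervised anomaly detection on unlabeled data.
# This is NOT Boon Nano; it uses plain k-means purely to show the idea of
# learning "normal" structure without human-labeled training data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented readings, e.g. vibration and temperature from an industrial machine.
normal = rng.normal(loc=[1.0, 40.0], scale=[0.1, 2.0], size=(500, 2))
faulty = rng.normal(loc=[2.5, 55.0], scale=[0.1, 2.0], size=(5, 2))
readings = np.vstack([normal, faulty])

# Learn clusters of typical behavior directly from the raw, unlabeled data.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(readings)

# Score each reading by its distance to the nearest cluster center;
# unusually distant points are flagged for a human expert to interpret.
distances = km.transform(readings).min(axis=1)
threshold = np.percentile(distances, 99)
print("flagged readings:", np.where(distances > threshold)[0])
```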

He adds that the key to Boon’s success in the domain of responsible AI is that it empowers customers to create their own individualized models, suited to their unique situations. The company avoids the lack-of-transparency problem in machine learning through an anomaly-detection technique that doesn’t rely on opaque neural networks. “This is generally enough for subject matter experts to understand what’s happening ‘inside the box,’” he says.

Erez Moscovich, cofounder and CEO at Tasq.ai—a data annotation platform with AI-assisted tools—says that everyone today agrees data is the new oil. But the accuracy of whatever ML models companies are building depends on the data those models are trained on. Moscovich notes there are three major steps businesses must focus on to take bias out of their models: data collection, data labeling, and data validation.
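A minimal version of the validation step Moscovich describes might look like the sketch below: before training, check how much of the data each group contributes and how label rates differ across groups. The column names and numbers here are hypothetical, not drawn from Tasq.ai.

```python
# Illustrative data-validation check for bias before training.
# The dataframe and its columns are hypothetical examples.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "C"],
    "label": [1, 0, 1, 0, 0, 1],
})

# How much of the training data does each group contribute?
representation = df["group"].value_counts(normalize=True)

# How often does each group receive the positive label?
positive_rate = df.groupby("group")["label"].mean()

print(representation)
print(positive_rate)
# Large gaps in either table are a signal to revisit collection and labeling
# before the model ever sees the data.
```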

Observing without tracking

Data privacy is a growing concern in the AI field, as customers increasingly want to know how companies gather, track, and use their information. But companies such as Intenseye, Boon, Avataar, Winn.ai, and Quris.ai are offering AI-powered solutions that keep data privacy and responsible AI front and center.

For example, Intenseye “provides AI solutions for preventing workplace accidents, using closed-circuit video as the system’s data source”—a scenario that might raise personal privacy concerns at first glance. But cofounder and CEO Sercan Esen says Intenseye recognizes the potential ethical risks associated with the unintended use and further development of its technology, which is why the company is committed to designing it in a way that cannot be repurposed for unjustified surveillance of frontline teams. Intenseye applies a responsible data and AI approach throughout the lifecycle of its various models, Esen says.


“We ensure that our system design, end goals, and treatment of individuals subject to the system are ethically justifiable, mitigating potential risks related to privacy and bias through design choices and user agreements. As AI becomes more advanced, it can be challenging for humans to understand how algorithms produce a given result, and that’s why explainable AI is crucial for building trust and confidence when putting models into production,” he adds.

For Eldad Postan Koren, CEO at Winn.ai—the real-time sales insights company—businesses “must consistently improve their privacy and security practices to keep pace, as AI technology advances.” It’s imperative, he adds, that organizations leveraging AI keep building on and improving those practices. “We take great pride in maintaining high privacy standards, both within our company and for our users who are aware that their information is fully protected. If privacy is not prioritized, personal and organizational data can be vulnerable to misuse,” he says.

Similarly, the 3D imaging company Avataar, which allows online shoppers to visualize products virtually in their actual living spaces using a camera phone, deliberately excludes the collection of any user data related to human images. Beyond that, the only user-preference data collected by Avataar relates to surfaces and materials, without tracking users’ broader product category preferences. “We focus on ensuring that our product categorization data is diverse and inclusive,” says Sravanth Aluru, CEO of Avataar. “Our in-house AI data team works to make our dataset more diverse in regard to different capture environments and capturers. This helps mitigate the risk of bias in our AI systems by providing a broader and more representative dataset.” The company regularly monitors and updates its AI systems to ensure that they remain unbiased over time, Aluru adds.  

Regulatory models of responsibility

For highly regulated industries such as financial services, existing legal frameworks already push companies to behave responsibly through internal AI data and governance policies that parallel their regulatory requirements. Lendbuzz, for example, pursues a business model that’s socially beneficial at its core—enabling access to car loans for “no-credit” and “thin-credit” customers—and must already adhere to fair-lending statutes such as the Equal Credit Opportunity Act. This group of overlooked and underserved borrowers (often called “credit invisibles”) numbers some 45 million people, most of whom lack a FICO score or a credit history deep enough to earn a rating by which they can gain fair access to credit.

“We ensure our machine learning models are not discriminating or creating disparate impact for applicants across age, race, gender, etcetera,” says Amitay Kalmar, founder and CEO at Lendbuzz. As for the explainability of its credit decisions, “This is something we’re always working on as we add new features to our AI models,” he adds. “While we can help to ensure features of the model are not discriminatory on the input side, we also continually improve the explainability of our models so that our colleagues in underwriting have the tools they need to assist responsibly in the lending process.”
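One standard way to quantify the disparate impact Kalmar mentions is the “four-fifths” ratio: each group’s approval rate divided by the highest group’s approval rate. The sketch below computes it on invented data; Lendbuzz’s actual monitoring pipeline is not public, so this illustrates the metric rather than the company’s method.

```python
# Illustrative disparate-impact ("four-fifths rule") check on invented data.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

# Approval rate per group, divided by the best-treated group's rate.
rates = decisions.groupby("group")["approved"].mean()
ratios = rates / rates.max()

print(ratios)
# A common (though not legally definitive) rule of thumb flags any group whose
# ratio falls below 0.8 for further review of the model's inputs and features.
```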

The healthcare industry is another critical sector where ethical AI is a top priority. Biased results from AI models in healthcare can have fatal consequences, so it’s understandable that regulations in the industry are strict. Isaac Bentwich, founder and CEO at Quris.ai—an AI company working to revolutionize drug development by better assessing which drugs will be safe for human use—tells Fast Company that 2023 will see an explosion of AI, especially in pharma, and that AI ethics will be a cornerstone of that revolution.

Quris.ai combines artificial intelligence with a proprietary organ-on-chip technology to “generate a massive proprietary dataset that is automated, highly predictive, and uses classification algorithms to train the machine learning model to better predict which drug candidates will safely work in humans.” Bentwich notes that the company’s technology has a tremendous ethical impact—reducing the risk and the unfairness of human testing, shortening the drug development process, and lowering the harm that drugs pose.

Quris.ai’s advisory board includes industry stalwarts such as Nobel laureate Prof. Aaron Ciechanover, Moderna cofounder Prof. Robert Langer, and stem-cell pioneer Prof. Nissim Benvenisty. “Quris.ai is very focused on the ethical aspects of using AI for a good cause—reducing animal testing and especially reducing human suffering,” Bentwich says.

Maintaining a responsible reputation

In an age when social media influencers are key channels for communicating corporate values, monitoring and managing a brand’s messaging requires a concerted, ongoing effort. This often entails applying artificial intelligence solutions designed to support responsible behavior, according to Joe Gagliese, cofounder and co-CEO of Viral Nation. His company’s technology enables clients to ensure that anyone who represents their brands in the wild ecosystem that is social media—from employees to influencer partners—does so in consistent alignment with the brand’s core values.

“Organizations today face various emerging threats in the social media landscape such as brand impersonation, ransomware, and phishing attacks conducted through social networks,” Gagliese says.

Responsibly navigating through these criminal and reputational threats, he adds, requires specialized AI tools, as well as well-defined internal policies that drive responsible online brand reputation management.
