How AI will Address its Crisis of Confidence
Artificial intelligence has advanced by leaps and bounds in recent years, becoming smarter and more autonomous than many thought possible. However, there's one area where AI could use improvement in the decade ahead: transparency. So how will AI address its crisis of confidence?
Historically, AI has operated like a black box. A select few developers knew how the algorithms inside worked, but for everyone else, the mechanics remained obscured. Businesses have asked users to trust that AI's insights are complete and accurate. But without understanding where those insights originated, or what data and logic informed them, it's hard to trust that AI is as intelligent as it's made out to be.
This lack of transparency is increasingly unacceptable as AI moves into more aspects of our daily lives. Everything from hiring decisions to police actions is now informed by AI. As those trends continue, AI raises difficult questions about bias, fairness, and trust in machines. For example, an AI recruiting tool developed by Amazon was found to be biased against women, a finding that suggested the technology was far less objective than intended.
Already, groups like the Organization for Economic Cooperation and Development have begun rallying around making AI more transparent. The landmark General Data Protection Regulation (GDPR), passed in the EU, allows individuals to learn how algorithms utilize their data. These are both steps in the right direction and clear indications of where AI is heading. But there are also risks to making AI too transparent.
Why AI Thrives in the Shadows
When AI becomes more transparent, it also becomes more vulnerable to manipulation. Think of it like a safe — once you reveal how the locking mechanism works, the safe becomes much easier to crack.
For all the problems created by opaque AI, it's easy to imagine just as many arising from AI with its inner workings exposed. Once bad actors understand how the algorithms work, they could game the system to achieve their desired outcome, whether by feeding the AI doctored data sets or by tweaking the underlying logic to rule in their favor. Imagine if a professor released the source code for the algorithm that grades student assignments. Students could then exploit the grading system.
There are also intellectual property questions to consider. Private companies develop the majority of algorithms, and how those algorithms work is treated as a trade secret, much like the recipe for Coca-Cola. Acknowledging the sensitivity of this issue, some have called for AI developers to release their source code only to regulators or auditors who can provide oversight. Unfortunately, this half-measure does little to satisfy either developers or end users, suggesting we need to return to the drawing board.
No matter what solution materializes, one thing is clear: Total transparency in AI could lead to trouble. For this technology to work, some things must remain unknown.
Future AI Will Walk a Fine Line
Tomorrow's AI will strike a careful balance between transparency and secrecy. What form that balance takes remains to be seen after the inevitable rounds of give-and-take among governments, companies, and consumers. Despite how much remains unknown, however, a few developments look likely.
Improving AI's transparency involves more than just flinging open the doors. Understanding what algorithms are actually doing requires intense scrutiny. Explainable AI (XAI) uses interpretable machine learning algorithms so that AI operators and users can understand why an AI system made the decisions it did. Developments in the XAI field will come quickly, too. Software review platform G2 predicts commercialized versions of XAI will soon give end users more tools for looking inside AI's mind.
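To make the idea concrete, here is a minimal sketch of what interpretable machine learning can look like, assuming a Python workflow built on the scikit-learn library. The loan-approval scenario, the feature names, and the toy data below are purely illustrative, not drawn from any real system.

```python
# A minimal sketch of interpretable ML, assuming a scikit-learn workflow.
# The loan-approval scenario and feature names are purely illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["income_k", "debt_ratio", "years_employed"]

# Toy training data: each row is one hypothetical loan applicant.
X = [
    [45, 0.40, 2],
    [85, 0.15, 8],
    [30, 0.55, 1],
    [70, 0.20, 5],
]
y = [0, 1, 0, 1]  # 0 = deny, 1 = approve

# A shallow decision tree is interpretable by construction: every
# prediction traces back to explicit, human-readable threshold rules.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text renders the learned rules, so an operator can see
# exactly which factors drive an approval or a denial.
print(export_text(model, feature_names=FEATURES))
```

Simple models like this trade raw predictive power for auditability; commercial XAI tools aim to deliver similarly legible explanations for far more complex models.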
Companies such as Kyndi and Fiddler are already advertising the explainable nature of their machine-learning solutions. And the U.S. Defense Advanced Research Projects Agency (DARPA) has invested in multiple research projects focusing on the contextual adaptation of tech advances. The multiyear investment is part of the agency’s AI Next campaign. The overarching goal is to develop AI that can run on autopilot without raising concerns about how the machine behind the curtain arrives at decisions.
As these tools proliferate, users will expect all aspects of their data journey to be explainable. They won't demand to know exactly what's going on, but they won't settle for today's black-box approach, either.
AI will improve in countless ways over the coming decade as it factors into ever more aspects of daily life. But the biggest changes will involve our own attitudes. As AI gains the ability to do more — and tell us how it does so — we will happily give it new responsibilities without worrying about the risks of ceding control. Think of it as AI without anxiety.