Understanding the hype vs. reality around artificial intelligence

With all the attention Artificial Intelligence (AI) attracts these days, a backlash is inevitable – and could even be constructive. Any technology advancing at a fast pace and with such breathless enthusiasm could use a reality check. But for a corrective to be useful, it must be fair and accurate.

The industry has been hit with a wave of AI hype remediation in recent weeks. Opinions are surfacing that label recent AI examples so mundane that they render the term AI practically “meaningless,” while others claim AI is an “empty buzzword.” Some have even gone so far as to label AI with that most damning of tags: “fake news.”

Part of the problem with these opinions is the set of expectations around what counts as “AI.” The question of how best to define AI has always existed, but skeptics argue that overly broad definitions, and too-willing corporate claims of AI adoption, characterize AI as something we simply do not have. True, we have yet to see self-aware machines like 2001’s HAL or Star Wars’ R2-D2 – but holding AI to that standard is simply over-reach.

Today’s AI programs may be ‘mere’ computer programs – lacking sentience, volition, and self-awareness – but that does not negate their ability to serve as intelligent assistants for humans.

The highest aspirations for AI – that it should reveal and exploit, or even transcend, deep understandings of how the mind works – are undoubtedly what ignited our initial excitement in the field. We should not lose sight of that goal. But existing AI programs that serve more modest human ends provide great utility as well, and bring us closer to that goal.

For instance, many activities humans conduct look simple but aren’t straightforward at all. A Google system that ferrets out toxic online comments, a Netflix video optimizer based on feedback gathered from viewers, and a Facebook effort to detect suicidal thoughts posted to its platform may all seem like simple human tasks – but automating them is anything but simple.

Critics may disparage these examples as activities performed by non-cognitive machines, but they nonetheless represent technically interesting solutions that leverage computer processing and massive amounts of data to solve real and interesting human problems. Identifying and helping a potential suicide victim just by scanning their online posts: what could be more laudable – and what might have seemed more unlikely to be achieved via any mere “computation”?
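
None of these production systems is public, of course, but the core technique behind such features – supervised text classification – is easy to sketch. Below is a minimal, hypothetical version using scikit-learn; the hand-labeled comments and the simple linear model are illustrative assumptions, not a description of how Google’s actual system works.

```python
# A minimal sketch of toxic-comment detection as supervised text classification.
# The tiny hand-labeled dataset and linear model are purely illustrative;
# production systems train far richer models on millions of examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = toxic, 0 = benign.
comments = [
    "you are an idiot", "what a stupid take", "nobody wants you here",
    "great point, thanks", "interesting article", "well argued, I agree",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn each comment into word/bigram TF-IDF features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score unseen comments; on this toy data we would expect roughly [0, 1].
print(model.predict(["thanks, that was a great read", "what an idiot you are"]))
```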

Consider one of the simplest approaches to machine learning, applied to the easily relatable problem of movie recommendations. The algorithm works by recommending movies to someone that other, similar people – their nearest neighbors in taste – also enjoyed.
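
As a concrete sketch of that nearest-neighbor idea – often called user-based collaborative filtering – consider the toy example below. The ratings matrix, movie titles, and choice of cosine similarity are illustrative assumptions, not any particular service’s actual recommender.

```python
# Sketch of nearest-neighbor movie recommendation (user-based collaborative
# filtering). Rows are users, columns are movies, 0 means "hasn't seen it".
import numpy as np

movies = ["Alien", "Amelie", "Blade Runner", "Casablanca"]
ratings = np.array([
    [5.0, 1.0, 0.0, 0.0],  # user 0: loves sci-fi, hasn't seen the last two
    [4.0, 0.0, 5.0, 1.0],  # user 1: tastes very like user 0
    [1.0, 5.0, 0.0, 5.0],  # user 2: quite different tastes
])

def recommend(user, k=1):
    """Rank the movies `user` hasn't seen by the k most similar users' ratings."""
    target = ratings[user]
    # Cosine similarity between the target user and every user.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    sims = ratings @ target / np.where(norms == 0.0, 1.0, norms)
    sims[user] = -np.inf                       # never match the user to themselves
    neighbors = np.argsort(sims)[::-1][:k]     # the k nearest neighbors
    # Average the neighbors' ratings, masking out movies already seen.
    scores = ratings[neighbors].mean(axis=0)
    scores[target > 0] = -np.inf
    return [movies[i] for i in np.argsort(scores)[::-1] if scores[i] > 0]

print(recommend(0))  # -> ['Blade Runner', 'Casablanca'], courtesy of user 1
```

With k=1 this simply parrots the single closest user; in practice one would raise k and weight each neighbor’s ratings by similarity – exactly the kind of tuning discussed below.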

No real mystery

Is it mysterious? Not particularly.

It’s conceptually a simple algorithm, but it often works. That said, it’s actually not so simple to understand when it works and when it doesn’t, and why, or how to make it work well. You could make the underlying model more complex or feed it more data – for example, the viewing habits of all of Netflix’s subscribers – but in the end, it’s understandable. It’s distinctly not a ‘black box’ that learns in ways we can’t comprehend. And that’s a good thing: we should want to have some idea how AI works, and how it attains and uses its ‘expert’ knowledge.

To further illustrate, envision that interesting moment in therapy when a patient realizes his doctor looks bored – the doctor has heard this story a hundred times before. In the context of AI, it illuminates an important truth: it’s a good thing when an expert – in this case, our hypothetical therapist – has seen something before and knows what to do with it. That’s what makes the doctor an expert. What the expert does is not mundane, and neither is replicating that type of expertise in a machine via software.

Which leads to another problem hiding in these recent critiques: that once we understand how something works – regardless of how big a challenge it initially presented – its mystique is lost. A previously exciting thing – a complex computer program doing something that previously only a person exercising intelligence could do – suddenly seems a lot less interesting.

But is it really? When one looks at AI and realizes it’s “just programs” – well, of course it is just programs. That’s the whole point of AI: writing programs that do things which previously required human intelligence.

To be disappointed that an AI program is not more complicated, or that its results aren’t more elaborate – even cosmic – is to misstate the problem that AI is trying to address in the first place. It also threatens to derail the real progress that continues to accumulate – progress that may eventually enable machines to possess the very things humans possess, and that those criticizing real-world AI as too simplistic pine for: volition, self-awareness, and cognition.

Take genetics, for example. The field didn’t start with a full understanding or even a theory of DNA, but rather with a humbler question: why are some eyes blue and some brown? Answering that question required knowledge of, and step-by-step advancements in, biology, chemistry, microscopy, and a multitude of other disciplines. The notion that the science of genetics should have started with its endgame of sequencing the human genome – or, in our case, that AI must begin by working on its endgame of computer sentience – is as overly romantic as it is misguided.

In the end, all scientific endeavors, including AI, make big leaps by working on more basic – and perhaps, only in hindsight, easier – problems. We don’t solve the ultimate challenges by jumping right to them; the steps along the way are just as important, and often yield incredibly useful results of their own. That’s where AI stands right now: solving seemingly simple yet fundamental challenges, and making real progress in the process.

There’s no need to debunk it or apologize for it. This work is what’s required to advance the field and move closer to the more fanciful AI end-goal – making computers act like they do in the movies – toward which our AI critics, and indeed all of us in the field, strive as our ultimate ambition.

Larry Birnbaum, Co-founder and Chief Scientific Advisor, Narrative Science

Larry Birnbaum is a co-founder of Narrative Science and the company’s Chief Scientific Advisor, where he focuses on next-generation architecture, advanced applications, and IP. In addition, Larry is Professor of Computer Science and of Journalism at Northwestern University, where he also serves as the Head of the Computer Science Division/EECS Department. He received his B.S. and Ph.D. from Yale.
