Could Light Switches Have Morals?
There’s a revolution underway at the edge of computer networks, where devices are getting better at assessing circumstances, analyzing data locally, and then actuating more services for consumers instead of relying on the cloud for all of that “intelligence.” But could light switches have morals?
Innovation at the Edge is being driven by the need for computing that is fast, reliable, secure, and environmentally sustainable. We want our computing to take advantage of ever more powerful and relatively inexpensive local processing, rather than relying on connectivity with a server-based central processor that distributes commands.
A “best of both worlds” model is emerging that puts AI and data inference closer to the points of use, while still turning to the cloud for the machine learning and transactional functions that only the cloud enables.
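As a rough sketch of that division of labor (a toy illustration with hypothetical function names and models, not any particular vendor’s API), an edge device might decide locally whenever it is confident and defer to the cloud only when it is not:

```python
from dataclasses import dataclass

@dataclass
class EdgeResult:
    label: str
    confidence: float

def classify_locally(reading: float) -> EdgeResult:
    """Stand-in for a small on-device model: fast, cheap, no network round trip."""
    return EdgeResult("occupied" if reading > 0.5 else "empty", abs(reading - 0.5) * 2)

def classify_in_cloud(reading: float) -> str:
    """Stand-in for a heavier cloud model, consulted only when the edge is unsure."""
    return "occupied" if reading > 0.5 else "empty"

def handle_reading(reading: float, threshold: float = 0.8) -> str:
    result = classify_locally(reading)
    if result.confidence >= threshold:
        return result.label  # decided entirely at the edge
    # Low local confidence: fall back to the cloud. In a real system the reading
    # would also be queued as a training example for the next model update.
    return classify_in_cloud(reading)

print(handle_reading(0.95))  # confident edge decision
print(handle_reading(0.55))  # unsure locally, deferred to the cloud
```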
The fruits of this innovation will start becoming more apparent in 2020. The “smart” innovation will yield such things as washers and dryers that recognize simple voice commands; vacuum cleaners that learn to avoid damaging obstacles; and locks and light switches that operate more like smart home assistants.
Our computing demands will bring devices that recognize faces, voices, and gestures; security systems that discern a broken window from a dropped wineglass; and cars that respond more quickly and efficiently to emergent road conditions.
But our demand for computing will also demand more of us.
This development suggests at least three questions worth considering about how we think about Artificial Intelligence, and about what it may think or feel about us:
First, could self-aware AI be more like a dog than a human? Much of the debate about AI concerns the transition from algorithm-based decision-making (however complex) to the self-reliant ability to pose and answer questions based on a sense of, well, self.
This is a far cry from the opaque nature of today’s deep learning, which doesn’t empower a processor to explain how it reached its conclusions. A theory of mind that allowed the computer or robot to analyze and justify its own actions might be more trustworthy and believable, as well as more efficient.
Why should the Artificial Intelligence mind be modeled on human beings?
The Edge suggests that we may see varying types or degrees of intelligence or even consciousness emerge.
- For example: why couldn’t a “smart” home assistant’s AI possess the qualities of, say, a trustworthy dog or empathetic dolphin?
- Couldn’t those qualities be enough to provide immense benefits to consumers?
- Wouldn’t those qualities be closer to what some models of the mind suggest AI needs to be for us?
- Can AI think and act like a friend or neighbor?
Perhaps the idea of AI as Sonny in the movie I, Robot is too extreme a vision, both cognitively and in physical form.
Second, could devices invent their own language and, therefore, build some model of AI and consciousness?
There are numerous examples of computers, robots, and chatbots creating their own languages and communicating:
- In 2017, Facebook asked chatbots to negotiate with each other, and the bots responded by inventing their own language that was unintelligible to the human coders (Facebook shut down the experiment).
- That same year Google revealed that an AI experiment was using its Translate tool to migrate concepts into and out of some language it had invented (Google allowed the activity to continue).
- OpenAI encouraged bots to invent their own language via reinforcement learning (think giving your dog a biscuit for doing the right thing). The bots eventually built a lingua franca that let them conduct business faster and more efficiently than before.
The bots’ communication evolved along with their “shared” experiences, much like a human language.
Claude Shannon, who was perhaps the conceptual godfather to Robert Noyce’s parenting of the semiconductor, posited in his Information Theory that information was a mechanism for reducing uncertainty (versus a qualitative tool for communicating content, per se).
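One way to make that idea of “reducing uncertainty” concrete (a minimal sketch using the standard Shannon entropy formula; the probabilities here are invented for illustration) is to measure how much a single sensor reading narrows down what a device believes:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: the average uncertainty in a distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Before a sensor reading, four possible room states are equally likely.
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits of uncertainty

# After the reading, one state dominates; the remaining uncertainty drops,
# and the difference is the information the reading carried.
print(entropy_bits([0.85, 0.05, 0.05, 0.05]))  # roughly 0.85 bits
```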
What if processors at the Edge invent a language (or languages) that not only enable their improved function but emulate some form of distributed consciousness?
Would we be able to translate it? Would we care?
Third, and maybe most intriguingly, could a light switch have morals?
We can hardwire a device to execute specific functions, and the communications I referenced earlier can build upon that platform, but ultimately, smart devices may prove to be as “hardwired” to particular actions as we humans are.
For instance, consider a smart thermostat that has been built to perceive ambient temperature and execute commands based on that data. Now think of a human user whose input violates those programmed functions, or the machine-learned consensus about environmentally responsible settings.
Does the thermostat object to the human’s input, or even reject it?
Could we build some rudimentary AI consciousness into the silicon-level structure of smart devices that makes them physically incapable of violating certain rules?
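To make the thought experiment concrete, here is a minimal sketch (a hypothetical class and limits, not any real product’s firmware) of a thermostat whose rule is baked into the code path that accepts input, so an out-of-bounds request is simply refused:

```python
class RuleBoundThermostat:
    """Toy thermostat with a non-negotiable operating rule 'baked in'."""

    # Hypothetical hard limits standing in for a rule the device cannot violate.
    MIN_SETPOINT_C = 10.0
    MAX_SETPOINT_C = 28.0

    def __init__(self, setpoint_c: float = 20.0):
        self._setpoint_c = setpoint_c

    def request_setpoint(self, requested_c: float) -> bool:
        """Accept a human request only if it respects the built-in rule."""
        if self.MIN_SETPOINT_C <= requested_c <= self.MAX_SETPOINT_C:
            self._setpoint_c = requested_c
            return True
        # The "objection": the device declines the command rather than executing it.
        return False

thermostat = RuleBoundThermostat()
print(thermostat.request_setpoint(22.0))  # True: within the rule
print(thermostat.request_setpoint(35.0))  # False: rejected outright, not just flagged
```

Real “silicon-level” enforcement would push that check below software entirely, but the shape of the refusal is the same.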
I’m reminded of Isaac Asimov’s Three Laws of Robotics; the concept of silicon-level security could well prove central to the safety and reliability we need from devices to which we give more responsibility and authority of action.
Exploring and answering such questions as these will be crucial to the mass market success of AI (and the ML that enables it).
We will need to apply not only our evolving expertise in computing but also our understanding of psychology and the social sciences. As our machines assume more autonomy of action (also called “agency”), they and we will run into problems we didn’t expect.
For instance:
- What conditions will cause learning machines to get “stuck,” or lead them into pathways of action that would be characterized as “illness” if we were describing humans?
- How will AI be immunized against problems, both circumstantial and purposeful (i.e., hacks)?
- Could new regimes of oversight be required for machines that “misbehave,” or even break the law?
It will be incredibly intriguing to ponder the questions posed by this revolution.