Siri does not love you. Yet.

According to these recent news reports, a robot has become conscious:

    This little robot just passed a self-awareness test

    Humanoid shows a glimmer of self-awareness

    World’s First Self-conscious Robots

    END of Humanity? Self Conscious robot pass final test

Despite the headlines, the robot in question did not become conscious. It solved a puzzle by following an algorithm. You could use pencil and paper and follow the same algorithmic calculations and arrive at the same answers the robot did.
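
To see how mechanical the feat is, here is a rough sketch of the kind of deduction the robot reportedly performed, in the widely covered "dumbing pill" variant of the wise-men puzzle. This is a minimal illustration in Python, not the authors' code; the premise names and rules are my own simplification.

    # A pencil-and-paper rendering of the "dumbing pill" self-awareness test.
    # Premise: two of three robots were given a pill that mutes them, and each
    # robot is asked which pill it received. (Illustrative sketch only; the
    # real robot ran a formal theorem prover, not this function.)
    def which_pill(i_tried_to_speak: bool, i_heard_my_own_voice: bool) -> str:
        # Rule 1: if I produced audible speech, I was not muted.
        # Rule 2: if I was not muted, I received the placebo, not the pill.
        if i_tried_to_speak and i_heard_my_own_voice:
            return "I was able to speak, so I must have received the placebo."
        return "I don't know which pill I received."

    # The robot first answers "I don't know", hears its own voice saying it,
    # then re-runs the same inference with the new premise and corrects itself.
    print(which_pill(i_tried_to_speak=True, i_heard_my_own_voice=False))
    print(which_pill(i_tried_to_speak=True, i_heard_my_own_voice=True))

Every step is a conditional you could evaluate by hand; nothing in it requires, or produces, awareness.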

There’s a big difference between human-like behavior driven by an algorithm, and the same behavior driven by conscious awareness and intention.

A player piano doesn’t feel sad when it plays a sad song. Your face-recognition app doesn’t experience a thrill when it sees you. Siri says “thank you” because of an algorithm, not because there is a person inside the device who feels thankful.

In the case of this news report, the robot in question was programmed to solve puzzles by applying algorithms structured in a formalism called the Deontic Cognitive Event Calculus (DCEC). DCEC is a way of representing problem-solving algorithms that models the needs and desires of multiple agents, relative to each agent and to each other.
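
Loosely rendered (DCEC itself is a formal modal logic with its own proof theory, not Python; the data structures below are an illustrative assumption on my part), "modeling the needs and desires of multiple agents" amounts to storing symbolic facts about each agent and letting an inference engine manipulate them:

    # Toy sketch of a multi-agent model of the sort DCEC formalizes.
    # (Illustrative only; real DCEC is a modal logic, not Python objects.)
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        knows: set = field(default_factory=set)    # propositions the agent knows
        desires: set = field(default_factory=set)  # goals attributed to the agent

    robot = Agent("robot", desires={"solve_the_puzzle"})
    human = Agent("human", desires={"hear_an_answer"})

    # The robot's "model of another mind" is just one more stored proposition.
    robot.knows.add(("desires", human.name, "hear_an_answer"))
    print(robot.knows)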

Computer scientists often distinguish between “strong AI,” which includes some sort of machine consciousness or awareness, and “weak AI,” which mimics human behavior in a narrow task, like Siri. The headlines above confuse the two.

I cannot imagine scientists confusing algorithmic machine behavior with human consciousness, yet the technical publications by these robotics researchers foster that misconception through the language they use. In one of their papers, the authors rhetorically asked whether robots could “ever enjoy the kind of genuine freedom to choose that we assume ourselves to have.” The authors then argue that yes, they could.

Their strange reasoning goes like this: if a machine’s behavior-generating algorithms model the machine’s goals and the goals of the other agents, and align the machine’s behavior with the goals of self and others, then the resulting behavior must be free choice, and where there is free choice, there is necessarily consciousness:

…alignment with desires, ascriptions of intentionality, and the absence of constraints predict how “free” certain choices are regarded to be, and predict as well whether blame judgments will be issued…

Now, if … ascriptions of intentionality and alignment of actions with respect to desires are features of free choice, then it follows almost without argument that self-consciousness is also a feature of free choice.

The problem with this reasoning is that the robot does not actually have the freedom to choose a behavior “without constraint.” It is very much constrained by its algorithms. Those algorithms may model conscious agents and their needs and desires, but the behavior they select as optimal is the result of applying logic and arithmetic. The robot does not have the freedom to refuse to solve the puzzle. It does not have the freedom to watch cat videos instead.

The authors do know the difference. Later in the same paper, they talk about a “self model” and the “DCEC calculus.” They describe how the robot incorporates model-based reasoning, machine proof generation, argument discovery, semantic parsers, and various inference engines. Yet they confuse modeling intention with actually having intention:

Finally, it is common knowledge that if any agent A performing an action a is perceived at some time t′ and known to have knowledge of a’s effects, then a was intended by A at some time t < t′. Crucially, this last condition rudimentarily captures the awareness condition discussed earlier in the paper.

It may be a model of how consciously aware agents make decisions, but it’s not conscious awareness.
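
To underline the point, the quoted rule itself is ordinary symbol manipulation. Here is a one-line paraphrase of it (my rendering, not the authors' formalism):

    # The quoted inference rule, paraphrased: if agent A is perceived
    # performing action a, and A is known to know a's effects, then the
    # system concludes "A intended a". Attaching the label "intended"
    # is not the same thing as A experiencing an intention.
    def ascribe_intention(perceived_acting: bool, knows_effects: bool) -> bool:
        return perceived_acting and knows_effects

    print(ascribe_intention(perceived_acting=True, knows_effects=True))  # True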

Given the recent media hype about AI doomsday scenarios, technologists have a responsibility to represent the state of the art honestly. Headlines announcing machine consciousness have no business in the popular press. Perhaps a far-future generation will get to deal with machine consciousness, but at this point we can only devise algorithms that model conscious decision making; nobody has any idea how to induce real consciousness in a machine.

Shame on the authors for misrepresenting reality with their language.
