Engadget

Hitting the Books: Do we really want our robots to have consciousness?

A little bit of self-awareness goes a long way.


From Star Trek’s Data and 2001’s HAL to Columbus Day’s Skippy the Magnificent, pop culture is chock-full of fully conscious AIs who, in many cases, are more human than the humans they serve alongside. But is all that self-actualization really necessary for these synthetic life forms to carry out their essential duties?

In his new book, How to Grow a Robot: Developing Human-Friendly, Social AI, author Mark H. Lee examines the social shortcomings of today’s AI and delves into the promises and potential pitfalls surrounding deep learning techniques, currently believed to be our most effective tool for building robots capable of doing more than a handful of specialized tasks. In the excerpt below, Lee argues that the robots of tomorrow don’t necessarily need — nor should they particularly seek out — the feelings and experiences that make up the human condition.

How to Grow a Robot (MIT Press)

Excerpted from How to Grow a Robot: Developing Human-Friendly, Social AI by Mark H. Lee © 2020 Massachusetts Institute of Technology.


Although I argue for self-awareness, I do not believe that we need to worry about consciousness. There seems to be an obsession with robot consciousness in the media, but why start at the most difficult, most extreme end of the problem? We can learn a lot from building really interesting robots with sentience, by which I mean being self-aware, as with many animals. The sentience of animals varies over a wide range, and it seems very unlikely that consciousness is binary—either you have it or you don’t.

It’s much more probable that there is a spectrum of awareness, from the simplest animals up to the great apes and humans. This is in line with evolutionary theory; apparently sudden advances can be traced to gradual change serendipitously exploited in a new context. As I’ve indicated, there are many animal forms of perception and self-awareness, and these offer fascinating potential. Let’s first try to build some interesting robots without consciousness and see how far we get.

Support for this view comes from biophilosopher Peter Godfrey-Smith, who studies biology with a particular interest in the evolutionary development of the mind in animals. He traces the emergence of intelligence from the earliest sea creatures and argues for gradual increases of self-awareness. He says, “Sentience comes before consciousness” (Godfrey-Smith, 2017, 79) and claims that knowing what it feels like to be an animal does not require consciousness. It seems entirely logical that we can replace the word animal with robot in the last sentence. Godfrey-Smith also argues that “language is not the medium of complex thought” (2017, 140–148, italics in original), which supports the view that symbolic processing is not a sufficient framework for intelligence.

In any case, it is important to recognize that the big issues in human life—birth, sex, and death—have no meaning for robots. They may know about these concepts as facts about humans, but they are meaningless for nonbiological machines. This seems to be overlooked in many of the predictions for future robots; systems that are not alive cannot appreciate the experience of life, and simulations will always be crude approximations. This is not necessarily a disadvantage: A robot should destroy itself without hesitation if it will save a human life because to it, death is a meaningless concept. Indeed, its memory chips can be salvaged from the wreckage and installed inside a new body, and off it will go again.

Consequently, such robots do not need to reason philosophically about their own existence, purpose, or ambitions (another part of consciousness). Such profound human concerns are as meaningless to a robot as they are to a fish or a cat. Being human entails experiencing and understanding the big life events of living systems (and some small ones as well), and human experience cannot be generated through nonhuman agents. If this contention is accepted, it should counter much of the concern about future threats from robots and superintelligence.

Two Nobel laureates, Gerald Edelman and Francis Crick, both changed direction following their prize-winning careers. Edelman won the prize for his work on antibodies and the immune system, and Crick was the co-discoverer (with James Watson) of the structure of the DNA molecule. Both started research into consciousness as a second career. Edelman experimented with robots driven by novel competing artificial neural systems (Edelman, 1992), and Crick looked for the seat of consciousness in the brain (Crick, 1994). They didn’t agree on their respective approaches, but their work, well into retirement, produced interesting popular books and showed how fascinating the whole topic of consciousness is. Despite their mutual criticism, their general goal was the same: They both thought that the circular feedback paths in the brain somehow supported consciousness, and they were looking for structural mechanisms in the brain.

I have already argued that sentient agents, like robots, need not be conscious, but they must be self-aware. In any case, it is a reasonable scientific position to start with experiments with models of self, self-awareness, and awareness of others and see how far the results take autonomous agents. Then the requirement for, or the role of, consciousness can be assessed by its absence. This is not a structural approach, based directly on brain science as with Edelman and Crick, but rather a functional approach: What do models of self offer? How do they work? What is gained by self-awareness? What is missing from the behavior of sentient robots that consciousness could address?