Why We Shouldn't Fear Artificial Intelligence

There’s a lot of fear surrounding artificial intelligence these days, and it’s hard to know what's warranted and what isn’t.

Media blasts feature certain eminent billionaires’ and physicists’ doomsday theories of intelligent machines alongside legitimate concerns over how AI will affect privacy in the age of big data. But I don’t think this confusion -- this conflation of fantasy and reality -- serves businesses, the public or the field of AI well.

The truth is that, despite AI’s problems (both real and imagined), it can produce a lot of good when used properly. What we need is a clearer understanding of the issues: what AI can do and, more pressingly, what it can’t.

“Can’t AI become self-aware and take over the world using computers against us?”


Related: Jibo, the Personal Robot Startup, Lands $25 Million in Funding

Probably not. For AI to overthrow humanity, four things would have to occur:

  1. An AI would have to develop a sense of self distinct from others and have the intellectual capacity to step outside the intended purpose of its programming.

  2. It would have to develop, out of the billions of possible feelings, a desire for something it believes is incompatible with human existence.

  3. It would have to choose a plan for dealing with those feelings (out of the billions of possible plans) that involves death, destruction and mayhem.

  4. It would have to have the computing power, intelligence and resources to enact such a plan.

An AI achieving any one of these is highly unlikely. Achieving all of them? Next to impossible.

The development of what we understand as “consciousness” -- the ability to think about oneself as an object and to self-direct action -- in an AI is improbable. Machine learning is achieved by training a machine -- showing it, for example, millions of bits of diagnostic information in order to “teach” it to make statistically educated guesses about whether a patient has a certain kind of cancer.
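
As a rough illustration of what that kind of training looks like in practice, here is a minimal sketch using scikit-learn and purely synthetic, hypothetical “diagnostic” features; the data, feature count and model choice are illustrative assumptions, not details from any real study:

```python
# A minimal sketch of supervised "training": the model never understands
# what cancer is; it only learns statistical associations between numeric
# features and the labels it has been shown.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical, synthetic "diagnostic" measurements (three features per patient).
n_patients = 5000
X = rng.normal(size=(n_patients, 3))
# Synthetic labels (1 = "has the cancer"), generated from a hidden rule purely
# so the example runs end to end.
y = (1.2 * X[:, 0] - 0.8 * X[:, 1]
     + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)  # "showing" the machine labeled examples

# The output is a statistically educated guess -- a probability, not understanding.
print("Estimated probability of cancer for one new patient:",
      model.predict_proba(X_test[:1])[0, 1])
print("Accuracy on held-out patients:", model.score(X_test, y_test))
```

The point of the sketch is the same one the author makes: the model produces a probability derived from patterns it was shown, and nothing more.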

If, then, we end up with an incredibly intelligent machine like Deep Blue (the chess machine that defeated world champion Garry Kasparov), we’re left with a machine that can only reason about chess. Your toddler could beat Deep Blue at checkers because Deep Blue doesn’t know checkers exists; it can’t understand rules other than those it was programmed for.

If, somehow, a machine were able to learn to reason (and reason well) outside of its programming, it, like us, would be left with billions of choices: What do I feel? What am I going to do? Who am I? When faced with these questions, very few humans decide, “I’m going to dominate the human race." There’s no reason at all to assume AI would automatically go there either.

And even if, in this incredibly unlikely scenario, a single bad apple did emerge, where would it get the resources to enact a destructive plan? A common fallacy suggests that, because AIs are hosted on computers, they’ll be good at manipulating them. But let me ask you this: By virtue of living in a house, do you know how to build, remodel or manipulate one? Many thinkers in computational and mathematical logic agree that computer programs are almost certainly worse with computers than we are. What we’d be left with, then, is an incredibly grumpy AI, and little else.

Related: These Giant Robotic Ants Could One Day Replace Factory Workers

Much more pressing are the concerns about privacy arising from “big data” and “data mining.” It’s true that, as more and more of our lives become digitized, machines are being developed to discover and use that information for different purposes. And that tends to make people uneasy.

“I don’t want my information being read,” people think. I don’t either, but keep this in mind: As a researcher and builder of these machines, I don’t see your information; the machines do, and they have no idea what they’re “reading.” They’re simply looking for the indicators they’ve been trained to notice and making whatever statistical decision they’ve been asked to make.

Some of my graduate students, for example, have developed methods for predicting the need for blood transfusions and emergency surgery in traumatic brain injury patients based on a few hours of continuous vital-sign recordings. Others have explored determining operating room state from video, digitizing paper forms so that health workers in third world countries can get their data out for quick analysis, and generating textual descriptions of people from triage images to help loved ones find disaster victims.
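
The students’ actual methods aren’t described here, but as a hedged sketch of the general pattern such work often follows, one might summarize a window of vital signs into a few simple features and train a classifier on them; everything below (the features, the synthetic recordings, the model) is an assumption for illustration only:

```python
# A rough, hypothetical illustration of the general pattern: summarize a few
# hours of continuous vital signs into simple features, then train a classifier
# to flag patients likely to need a transfusion. This is NOT the researchers'
# actual method -- just a generic sketch on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def summarize_vitals(heart_rate, blood_pressure):
    """Collapse a vital-sign window into a small feature vector."""
    return [heart_rate.mean(), heart_rate.std(),
            blood_pressure.mean(), blood_pressure.min()]

# Synthetic "recordings": 200 patients, three hours of per-minute samples each.
features, labels = [], []
for _ in range(200):
    needs_transfusion = rng.random() < 0.3
    hr = rng.normal(110 if needs_transfusion else 80, 10, size=180)
    bp = rng.normal(85 if needs_transfusion else 110, 8, size=180)
    features.append(summarize_vitals(hr, bp))
    labels.append(int(needs_transfusion))

model = RandomForestClassifier(random_state=0).fit(features, labels)

# For a new patient, the model returns a risk estimate, not a diagnosis.
new_patient = summarize_vitals(rng.normal(105, 10, 180), rng.normal(90, 8, 180))
print("Estimated transfusion risk:", model.predict_proba([new_patient])[0, 1])
```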

What we’re seeing again and again are forward thinkers applying AI to situations where what we need is speed: identifying specifics based on complex statistical models, and understanding and processing enormous amounts of data to solve otherwise impossible problems. AI isn’t the “demon” it’s made out to be; it is inherently useful and will allow us to effect change like we never have before.

Related: Steve Wozniak: The Future of AI Is 'Scary and Very Bad for People'