These are the research projects Elon Musk is funding to ensure A.I. doesn’t turn out evil

(Image: Arnold Schwarzenegger as the Terminator, 1991. MGM)

A group of scientists just got awarded $7 million to find ways to ensure artificial intelligence doesn't turn out evil.

The Boston-based Future of Life Institute (FLI), a nonprofit dedicated to mitigating existential risks to humanity, announced last week that 37 teams were being funded with the goal of keeping AI "robust and beneficial."

Most of that funding was donated by Elon Musk, the billionaire entrepreneur behind SpaceX and Tesla Motors. The remainder came from the nonprofit Open Philanthropy Project.

Musk is one of a growing cadre of technology leaders and scientists, including Stephen Hawking and Bill Gates, who believe that artificial intelligence poses an existential threat to humanity. In January, the Future of Life Institute released an open letter — signed by Musk, Hawking and dozens of big names in AI — calling for research on ways to keep AI beneficial and avoid potential "pitfalls." At the time, Musk pledged to give $10 million in support of the research.


The funded teams were selected from nearly 300 applicants and will pursue projects in fields ranging from computer science to law to economics.

Here are a few of the most intriguing proposals:

Researchers at the University of California, Berkeley and the University of Oxford plan to develop algorithms that learn human preferences. That could help AI systems behave more like humans and less like rational machines.

A team from Duke University plans to use techniques from computer science, philosophy, and psychology to build an AI system able to make moral judgments and decisions in realistic scenarios.

Nick Bostrom, the Oxford University philosopher and author of "Superintelligence: Paths, Dangers, Strategies," wants to create a joint Oxford-Cambridge research center to develop policies that governments, industry leaders, and others could enforce to minimize the risks and maximize the benefits of AI in the long term.

Researchers at the University of Denver plan to develop ways to ensure humans don't lose control of robotic weapons — the plot of countless sci-fi films.

Researchers at Stanford University aim to address some of the limitations of existing AI programs, which may behave totally differently in the real world than under testing conditions.

Another researcher at Stanford wants to study what will happen when most of the economy is automated, a scenario that could lead to massive unemployment.

A team from the Machine Intelligence Research Institute plans to build toy models of powerful AI systems to see how they behave, much as early rocket pioneers built toy rockets to test them before the real thing existed.

Another Oxford researcher plans to develop a code of ethics for AI, much like the one used by the medical community to determine whether research should be funded.

Here's the full list of projects and descriptions.
