How AI can help build a universal real-time translator

The breakthroughs in natural language processing and machine translation brought by deep learning might let us build a staple trope of science-fiction books: a universal real-time translator that fits inside the human ear. Geoff Hinton, one of the godfathers of deep learning and neural networks, explained how it could be done at the Association for the Advancement of Artificial Intelligence conference in Austin, Texas, on Wednesday, at the tail end of a talk on the history and future of artificial intelligence.

He wasn’t clear on the timeline, although he did say he could only anticipate the future about five years out, so perhaps we’re closer to this concept than we think. Here’s how he explained it in his talk, using translation from English to French as the example.

You start with recurrent neural networks, which excel at text analysis and natural language processing. Recurrent neural networks have been responsible for some of the most significant improvements in language understanding, including the machine translation that powers Microsoft’s Skype Translator and Google’s word2vec libraries.
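
To make the idea of a network that reads text concrete, here is a minimal sketch of an Elman-style recurrent step that consumes a sentence one word at a time and folds it into a single hidden state. This is an illustration, not Hinton's actual architecture; the weight matrices `W_in`, `W_rec` and bias `b` are hypothetical placeholders that a real system would learn from data.

```python
import numpy as np

def rnn_encode(word_vectors, W_in, W_rec, b):
    """Fold a sentence (a list of word vectors) into one hidden state.

    Each word updates the hidden state, so the final state is a summary
    of the whole sentence, sometimes called a "thought vector".
    """
    h = np.zeros(W_rec.shape[0])              # initial hidden state
    for x in word_vectors:                    # read the sentence word by word
        h = np.tanh(W_in @ x + W_rec @ h + b)
    return h
```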

Essentially, you have a recurrent neural network for each language. The English network takes your English sentence and reads it word by word, then hands the representation of the entire sentence over to the French recurrent neural network for decoding. The French network takes the concept represented by the sentence and starts with the first word of the translation. Once it has produced that word, it picks the next one by matching the statistical probability of the likeliest word to follow the first against a distribution over the likeliest translations of the next source word.
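
In encoder-decoder systems of this kind, the two distributions described above are typically blended inside a single learned step: the decoder's next-word probabilities are conditioned both on the encoded sentence and on the previously emitted word. A toy decoding step along those lines, continuing the numpy sketch above and again with hypothetical weight names, might look like this:

```python
import numpy as np

def decode_step(h, prev_word_vec, W_dec, W_prev, U_out, b_h, b_out):
    """One decoding step of the French network (toy sketch).

    The hidden state h carries the sentence concept from the encoder;
    the previously emitted word feeds back in, so the softmax below
    blends "what word likely follows the previous one" with "what word
    is a likely translation here", both learned in the same weights.
    """
    h = np.tanh(W_dec @ h + W_prev @ prev_word_vec + b_h)
    logits = U_out @ h + b_out            # one score per French vocabulary word
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over the French vocabulary
    return h, probs                       # pick argmax or sample the next word
```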


It continues doing this until you have a full translation. Hinton explained that the neural networks are trained starting from random word representations, and that after training the recurrent neural networks for one man-year, which equated to a few students working for about three months, his recurrent neural network translator matched state-of-the-art systems.
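
The training Hinton describes amounts to adjusting those randomly initialized weights to raise the probability the model assigns to correct translations. Loosely, and leaving out the backpropagation-through-time machinery a real system needs, the objective could be sketched as the cross-entropy of reference translations under the toy model above:

```python
import numpy as np

def sentence_loss(en_vectors, fr_indices, fr_embeddings, p):
    """Cross-entropy of a reference French sentence (toy sketch).

    Reuses rnn_encode and decode_step from the sketches above; p is a
    dict of the hypothetical weight matrices. Training would nudge those
    weights, starting from random values, to drive this loss down.
    """
    h = rnn_encode(en_vectors, p["W_in"], p["W_rec"], p["b"])
    prev = np.zeros(fr_embeddings.shape[1])   # start-of-sentence vector
    loss = 0.0
    for idx in fr_indices:                    # walk the reference words
        h, probs = decode_step(h, prev, p["W_dec"], p["W_prev"],
                               p["U_out"], p["b_h"], p["b_out"])
        loss -= np.log(probs[idx] + 1e-12)    # penalize low prob on truth
        prev = fr_embeddings[idx]             # feed the true word back in
    return loss
```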

Hinton added that the more languages one adds, the better the neural network becomes, because each new language helps the computer narrow the probabilities it has to look at. He concluded, “In a few years’ time we will put it on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.”

For those who aren’t Douglas Adams fans, the Babel fish was an alien fish that the hero of his Hitchhiker’s Guide to the Galaxy books slipped into his ear at the beginning of his journey so he could instantly understand all of the alien languages he encountered.

