Google built a hardware interface for its AI music maker

NSynth Super plays sounds created entirely with machine learning.

Music and technology go hand in hand; drum machines and modular synths are just two of the more recent music technologies to emerge. Last year, a Google Brain project called Magenta created NSynth (Neural Synthesizer), a set of AI and machine learning tools that learn the characteristics of sounds and create entirely new sounds from those attributes. Now, in collaboration with Google Creative Lab, the team has built NSynth Super, a hardware interface for NSynth that uses up to four source sounds at once to algorithmically create new sounds.

The team recorded 16 sound sources across a 15-pitch range as input to the NSynth algorithm, which resulted in more than 100,000 newly created sounds, not just blends. These new sounds were then loaded into the NSynth Super, which has a touch screen musicians can drag their fingers across to play the new sounds. It's still early days for this music tech, but the project is open source; code and design files can be found on GitHub if you want to make your own.
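NSynth creates new sounds by interpolating between learned latent embeddings of its source sounds rather than simply mixing audio. As a rough sketch, the four-source touch surface on NSynth Super can be pictured as bilinear interpolation between four latent vectors, with the touch position selecting the blend. The function name, corner labels, and 16-dimensional embedding size below are illustrative assumptions, not taken from the actual NSynth Super code:

```python
import numpy as np

def blend_embeddings(corners, x, y):
    """Bilinearly interpolate four latent embeddings.

    corners: dict with keys 'tl', 'tr', 'bl', 'br' mapping to 1-D
             NumPy arrays (latent codes of the four source sounds).
    x, y:    touch position on the pad, each in [0, 1].
    """
    top = (1 - x) * corners["tl"] + x * corners["tr"]
    bottom = (1 - x) * corners["bl"] + x * corners["br"]
    return (1 - y) * top + y * bottom

# Hypothetical 16-dimensional latent codes for four source sounds.
rng = np.random.default_rng(0)
corners = {k: rng.standard_normal(16) for k in ("tl", "tr", "bl", "br")}

center = blend_embeddings(corners, 0.5, 0.5)  # equal mix of all four
pure_tl = blend_embeddings(corners, 0.0, 0.0)  # top-left sound only
```

In the real system the blended embedding would then be decoded back into audio by the trained neural network; the interpolation itself is the part the touch screen exposes to the musician.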