
We’re not saying you’re obsolete, but Google is making music humans can’t



Image: Shutterstock / Africa Studio

After teaching AI to draw and paint with AutoDraw, Google has set its sights on conquering another art form: music.

The company’s AI research team, Google Magenta, announced a new project in April called Neural Synthesizer, or NSynth, which generates audio using deep neural networks. That technology will be demonstrated at Durham, North Carolina’s annual arts and technology festival, Moogfest, later this week.

To create music, NSynth uses a dataset containing sounds from individual instruments and then blends them to create hybrid sounds. According to the announcement, NSynth gives artists “intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”

The resulting sound is not like playing the two original sounds together, Cinjon Resnick, a member of the Magenta team, told Wired. Instead, the software produces an entirely new sound that would be impossible, or nearly impossible, to create any other way. The end product resembles sounds that are “in between” other instruments, combined in a way that can only be done digitally.
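One way to picture what that means: instead of simply mixing two recordings, this kind of system encodes each sound into a compact embedding, blends the embeddings, and decodes a new waveform from the blend. The short Python sketch below is purely illustrative and is not Google's NSynth code; its toy "encoder" and "decoder" (a Fourier magnitude spectrum with zero phase) merely stand in for the deep WaveNet-style autoencoder the Magenta team describes.

import numpy as np

SAMPLE_RATE = 16000  # NSynth works with 16 kHz audio; used here only to build toy signals


def toy_encode(audio: np.ndarray) -> np.ndarray:
    """Stand-in 'encoder': a spectral magnitude vector (the real model learns its embedding)."""
    return np.abs(np.fft.rfft(audio))


def toy_decode(embedding: np.ndarray, length: int) -> np.ndarray:
    """Stand-in 'decoder': rebuild a waveform from the magnitude spectrum, assuming zero phase."""
    return np.fft.irfft(embedding, n=length)


def blend(audio_a: np.ndarray, audio_b: np.ndarray, mix: float = 0.5) -> np.ndarray:
    """Interpolate in embedding space rather than averaging the raw waveforms."""
    embedding = (1.0 - mix) * toy_encode(audio_a) + mix * toy_encode(audio_b)
    return toy_decode(embedding, len(audio_a))


if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    tone_a = np.sin(2 * np.pi * 440 * t)           # pure tone standing in for one instrument
    tone_b = np.sign(np.sin(2 * np.pi * 440 * t))  # harmonically rich tone standing in for another
    hybrid = blend(tone_a, tone_b, mix=0.5)
    print(hybrid.shape)  # a new waveform "in between" the two inputs

In the real system the decoder regenerates audio from learned embeddings rather than a fixed spectral transform, which is why the blend can contain timbres neither original instrument produces on its own.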

The code for the project is open source, which means that anyone can download, modify and use it.

The Magenta team has already produced several interesting pieces of music, one of which even won best demo at NIPS, a major conference for neural networks and machine learning research.

Other teams, including the one behind IBM's Watson, have been working on similar projects for music made by or with AI. For now, Google is not offering the software as a product; NSynth is meant to be a dataset for other developers to play with and build creative projects on.

But don’t worry: Google is not trying to get rid of human musicians with NSynth. The team is focused on making new sounds that are “intuitive” and “expressive,” and it wants to work with musicians rather than replace them.





Anith Gopal