The Future Is Bright for AI Music
Musicians Harness AI to Create Unprecedented Songs: A Vast Frontier of Possibility in AI-Generated Music.
In a recent episode of Sean Carroll's Mindscape podcast, musician Grimes made a provocative statement about the future of human art. According to her, with the advent of Artificial General Intelligence (AGI), machines will become better at creating art than humans, potentially leading to the end of human art.
Her comments caused an uproar on social media, with many musicians and artists criticizing her for promoting "silicon fascist privilege" and taking the "bird's-eye view of billionaires." The idea that AI could potentially optimize and improve music, a deeply personal and subjective art form, has raised concerns and sparked a debate about the future of creativity.
AI Music has a wide-open horizon of possibility
However, numerous musicians believe that the emergence of AI will not mark the end of human art but rather usher in a new era of creativity. In recent years, renowned artists such as Arca, Holly Herndon, and Toro y Moi have experimented with AI to explore new and unanticipated musical avenues. Moreover, musicians and researchers worldwide are devising tools that enable artists from all backgrounds to use AI in their creations. Despite ongoing obstacles, such as copyright and other legal concerns, musicians who employ AI are hopeful that the technology will become a democratizing force and a fundamental part of music production.
THE SYMBIOSIS OF MUSIC AND TECHNOLOGY
The relationship between music and artificial intelligence has a rich history. As far back as 1951, computer science pioneer Alan Turing created a machine that produced three basic melodies. In the 1990s, musician David Bowie experimented with a digital lyric randomizer for creative inspiration. During this same period, a music theory professor trained a computer program to compose new pieces in the style of Bach. The audience was unable to distinguish the AI-generated work from a genuine Bach composition.
In recent years, the field of AI-generated music has seen significant progress, fueled by dedicated research teams at universities, major tech companies' investments, and machine learning conferences such as NeurIPS. One notable achievement was "Hello, World," the first pop album composed with the help of artificial intelligence, led by longtime AI music pioneer Francois Pachet in 2018. Another breakthrough came the following year with experimental singer-songwriter Holly Herndon's acclaimed album "Proto," in which she collaborated with an AI version of herself to produce music.
Despite significant advancements in the technology, experts believe that AI still has a long way to go before it can independently create hit songs. Much more intriguing progress is being made in two seemingly diametrically opposed streams of music: the functional and the experimental. But as we all know, things move very quickly in the AI music space, so we will continue to keep an eye on everything happening in AI-generated music.
“Technology will change things, but I refuse to become cynical, and I refuse to give up my agency in this process. I’m not scared; I’m excited about what this new technology could unlock.” ... “A balance needs to be found between protecting artists, and encouraging people to experiment with a new and exciting technology.” - Holly Herndon
Enter THE AGE of AI Generative & Assistive Music Production: The Mind Behind the Instrument Still Proves to Be the Difference.
On one end of the spectrum, AI music has addressed a simple demand: the need for more music than ever before, driven by the growing number of content creators on streaming and social media platforms. In the early 2010s, Drew Silverstein, Sam Estes, and Michael Hobe, who were then composing music for Hollywood films, were overwhelmed with requests for simple background music for film, TV, or video games. Silverstein says, "There would be so many of our colleagues who wanted music that they couldn't afford or didn't have time for, and they didn't want to use stock music."
As a result, the trio created Amper, which enables non-musicians to generate music by specifying parameters such as genre, mood, and tempo. Amper's music is now employed in podcasts, advertisements, and videos for firms such as Reuters. Silverstein explains, "Previously, a video editor would search stock music and settle for something sufficient. Now, with Amper, they can say, 'I know what I want, and in a matter of minutes, I can make it.'"
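Amper's engine is proprietary, so the snippet below is only a hypothetical sketch of the general idea behind parameter-driven generation, not Amper's actual API: descriptive parameters such as mood and tempo go in, and a simple melody comes out as (MIDI pitch, duration) pairs.

```python
import random

# Hypothetical sketch of parameter-driven music generation.
# This is NOT Amper's actual API -- just an illustration of the idea:
# the user picks descriptive parameters, the system picks the notes.

SCALES = {
    "upbeat": [60, 62, 64, 65, 67, 69, 71, 72],  # C major (MIDI pitches)
    "somber": [60, 62, 63, 65, 67, 68, 70, 72],  # C natural minor
    "tense":  [60, 61, 64, 65, 67, 68, 71, 72],  # C double harmonic major
}

def generate_melody(mood, tempo_bpm, bars=4, seed=None):
    """Return a list of (midi_pitch, duration_in_seconds) pairs."""
    rng = random.Random(seed)
    scale = SCALES.get(mood, SCALES["upbeat"])
    beat = 60.0 / tempo_bpm          # length of one quarter note
    melody = []
    for _ in range(bars * 4):        # assume four beats per bar
        pitch = rng.choice(scale)
        duration = rng.choice([0.5, 1.0, 2.0]) * beat
        melody.append((pitch, duration))
    return melody

if __name__ == "__main__":
    for note in generate_melody(mood="somber", tempo_bpm=90, seed=1)[:8]:
        print(note)
```

In a real product, output like this would feed a full arrangement and rendering pipeline rather than a printed list of notes, but the user-facing idea is the same: describe the music you want, and let the system fill in the details.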
Recently, the company conducted a Turing-like test and discovered that customers could not distinguish between music produced by humans and that created by Amper's AI, much like with the AI-generated Bach composition.
AI-Generated Personal Soundscapes!?
Endel was created to cater to the growing need for personalized soundscapes. Its co-founder, Oleg Stavitsky, recognized that people are increasingly using headphones to get through the day, yet no playlist or song can adapt to the context of their environment. His app therefore considers real-time factors such as weather, heart rate, physical activity, and circadian rhythms to create gentle music that helps people sleep, study, or relax. Endel has reportedly helped listeners manage ADHD, insomnia, and tinnitus, and the app reached one million downloads as of January. Both Amper and Endel have enabled non-musicians to become involved in the music-making process. Amper plans to launch a consumer-friendly interface this year, making it accessible to anyone who wants to create music. "Billions of individuals who might not have been part of the creative class now can be," says Silverstein.
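Endel has not published how its engine works, but the basic pattern described here, sensor readings in and sound parameters out, can be sketched in a few lines. The mapping below is purely illustrative; the input fields and thresholds are assumptions, not Endel's actual logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of a context-aware soundscape engine.
# Not Endel's actual algorithm -- just the shape of the idea:
# real-time signals go in, parameters for a generative audio engine come out.

@dataclass
class Context:
    heart_rate_bpm: float     # from a wearable
    hour_of_day: int          # 0-23, local time
    is_raining: bool          # from a weather API

@dataclass
class SoundscapeParams:
    tempo_bpm: float          # pulse of the generated pads
    brightness: float         # 0.0 = dark/filtered, 1.0 = open/bright
    rain_layer_gain: float    # volume of an added rain texture

def derive_params(ctx: Context) -> SoundscapeParams:
    # Loosely follow the listener's pulse, but keep the music slow.
    tempo = max(40.0, min(80.0, ctx.heart_rate_bpm * 0.7))
    # Brighter timbres during the day, darker ones at night (circadian cue).
    brightness = 0.8 if 8 <= ctx.hour_of_day <= 20 else 0.3
    # Blend in a rain texture when it is actually raining outside.
    rain_gain = 0.5 if ctx.is_raining else 0.0
    return SoundscapeParams(tempo, brightness, rain_gain)

if __name__ == "__main__":
    print(derive_params(Context(heart_rate_bpm=72, hour_of_day=23, is_raining=True)))
```

The point of a system like this is that the "song" is never fixed; the same listener gets different music depending on what their body and environment are doing at that moment.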
PUSHING THE ENVELOPE
Naturally, generating basic tunes or mere background noise is a vastly different process from producing exceptional music. This is one of the leading worries about AI's role in music: that it could reduce music to unoriginal, utilitarian sounds and push everything toward a single homogenous style. What if the major labels resort to AI and algorithms to impose unchallenging tunes that stick in our minds ad nauseam, forevermore?
The music outfit YACHT utilized a machine learning system trained on their entire catalog of music to create their latest album, Chain Tripping. The machine produced hours of melodies and lyrics based on what it had learned, which the band then sifted through and pieced together into coherent songs. The resulting music was a bizarre interpretation of dance pop that was both intriguing and challenging to perform. According to band member Claire L. Evans, the project forced them to confront patterns that went beyond their comfort zone, and they had to learn new skills to break out of their musical habits. The project ultimately earned YACHT their first Grammy nomination for best immersive audio album.
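YACHT have described the broad workflow (train a model on the back catalog, generate far more material than needed, then curate by hand) without detailing their tooling, so the sketch below stands in for it with a toy first-order Markov chain over note names. The model is a placeholder; only the train-generate-curate loop reflects what the band described.

```python
import random
from collections import defaultdict

# Toy illustration of the "train on your catalog, then curate" loop.
# A first-order Markov chain stands in for the real generative models;
# the point is the workflow, not the model.

def train(catalog):
    """Learn which note tends to follow which across a catalog of melodies."""
    transitions = defaultdict(list)
    for melody in catalog:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=16, seed=None):
    """Walk the learned transitions to produce one candidate melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break
        melody.append(rng.choice(options))
    return melody

if __name__ == "__main__":
    catalog = [
        ["C", "E", "G", "E", "D", "C"],        # stand-ins for existing songs
        ["C", "D", "E", "G", "A", "G", "E"],
    ]
    model = train(catalog)
    # Generate many candidates; a human then picks and arranges the keepers.
    for i in range(5):
        print(" ".join(generate(model, "C", seed=i)))
```

The decisive step in this loop is the human one: the machine supplies raw material in bulk, and the band's taste turns it into songs.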
Ash Koosha, a British-Iranian musician, had an unexpected emotional breakthrough while working with AI. He created Yona, an AI pop star that generates songs through software. Although some of Yona's lyrics are obscure, others were surprisingly vulnerable. Koosha was particularly impressed by the line "The one who loves you sings lonely songs with sad notes," which he felt was emotionally raw in a way that few humans could achieve.
In Berlin, which has become a global hub for AI experimentation, the hacker duo Dadabots is creating musical disorientation and chaos with the aid of AI. The duo is developing new tools in collaboration with avant-garde songwriters and running AI-generated death metal livestreams. According to co-founder CJ Carr, AI can serve as both a trainer and a radical creator, allowing musicians to improve their craft while producing novel and unconventional sounds. Dadabots' neural networks have generated eerie whispers, guttural howls, and fiercely choppy drum patterns that push the boundaries of music.
“Maybe you think it's cool to do AI cuz you see your friends doing it. We thought so too. But what you don’t know is that over time our music ability shriveled into an assortment of degenerate discharge. Ballooned into a bundle of rot. Deteriorated into a jam of geek drainage. We suck now. Automation is an addiction. Stay in music school kids, play your instruments. Don’t do AI.” - Dadabots
In contrast, for other creators, AI represents a connection to a forgotten era before recorded music, when no two performances were the same. For instance, last summer, a new version of Jai Paul's 2012 cult classic "Jasmine" was released, featuring an evolving soundscape of slippery guitar licks and syncopated hand claps that continues for as long as the listener keeps it playing.
Evolve past the FEAR - False Evidence Appearing Real
While some worry that AI could lead to job displacement among musicians, others point out that similar concerns have arisen with each new technological development of the past century. Ash Koosha compares it to the resistance some guitarists had in the 70s to synthesizers, which ultimately led to the rise of home producers and new genres of music. Similarly, Francois Pachet of Spotify's Creator Technology Research Lab says that we are still in the early stages of AI music experimentation, with much more research to be done before the technology reaches its full potential.
As more AI-generated music enters the market, legal disputes are bound to emerge. Existing copyright laws don't address AI creations, and it's unclear who would own the rights to an AI-generated song: the AI programmer, the original musician whose works were used for training, or even the AI itself. Concerns are growing that a musician might not have legal recourse against a company that uses AI to create soundalikes of their music without permission.
Despite these issues, musicians worldwide are working hard to make their AI tools accessible to as many aspiring music-makers as possible. Carr says, "I want to see 14-year-old bedroom producers inventing music that I can't even imagine."