July 23, 2024

Opinion: Record labels are suing artificial-intelligence companies. Here’s why that’s important

Sonosynthesis, an AI-based collaborative music composition system at the Misalignment Museum, on March 8, 2023, in San Francisco. AMY OSBORNE

Vass Bednar is a contributing columnist for The Globe and Mail and host of the new podcast, Lately. She is the executive director of McMaster University’s master of public policy in digital society program.

The monopolization of the digital economy, in which streaming platforms reap most of the profit, has overhauled the music industry and made it harder than ever before for artists to make money. A bill proposed in the U.S. Congress, the Living Wage for Musicians Act, seeks to establish a new royalty standard of one penny per stream.

It’s hardly radical. The modesty of this intervention is at once humbling and hilarious, acting as a reminder of the power of cultural choke points such as the platforms.

And now, there’s a powerful new player threatening artists’ earnings.

Last week, major record companies (Universal, Capitol, Atlantic, Warner, Sony and others) filed a lawsuit against two generative AI companies, Suno and Udio, that make “music” based on text prompts. The record companies accuse the AI firms of “willful copyright infringement on an almost unimaginable scale” and provide evidence that both companies have trained their algorithms on the record companies’ catalogues of songs. The lawsuits outline why this application is not “fair use” but instead “wholesale theft of … copyrighted recordings [that] threatens the entire music ecosystem and the numerous people it employs.”

These record companies have a point. And their move against AI should spur government toward formal policy updates, which are needed to better protect artists from having their material stolen and monetized by AI companies that don’t want to pay to use it.

This is particularly so for our government. Canada’s unique history of cultural-content protection makes the country well positioned to take a bold stance on the legality of this behaviour as it concludes a national consultation on the implications of generative artificial intelligence for copyright.

In Canada, the fair-dealing exception in the Copyright Act permits the use of other people’s copyright-protected material for the purpose of research, private study, education, satire, parody, criticism, review or news reporting, provided that what you do with the work is “fair.” AI companies have recently pressed Ottawa for an exemption to copyright law, insisting that the use of AI to read and learn from material should not require compensation. At present, it is unclear whether all media published online are truly fair game for these generative models, though these firms have been forging ahead in the absence of regulatory clarity, twisting an overly ambiguous law in their favour.

High-quality fake audio is testing the music industry in other important ways. Last year, a collaborative track featuring AI-generated imitations of Drake and the Weeknd’s voices called Heart on My Sleeve was submitted for Grammy consideration, although it was deemed ineligible by the Recording Academy, which subsequently updated its Grammy rules. In various social-media posts, many listeners have also raised concerns that Spotify is either using AI to generate music or permitting AI-generated music to masquerade as a traditional tune on its platform, which is misleading and can cut into a musician’s earnings, further diluting an already paltry payout. While the inputs to these models certainly matter, as an output, fake music is unfair and deceptive.

At the most basic level, we need more transparency regarding the inputs into these models, and the ability to choose (and pro-actively reject) their automatic inclusion in the music services we subscribe to. From a consumer-protection standpoint, listeners need reliable mechanisms to make independent decisions about the content they consume and support. Meanwhile, as synthetic music pollutes playlists, Universal and TikTok recently settled a licensing dispute that had them in a stalemate over artist compensation and the use of AI-generated music on the platform. TikTok has agreed to work with Universal to remove unauthorized AI-made content. Large platforms that have the ability to set norms for creative industries are starting to reject the presence of generated material, but their reactions are mixed. It was recently reported that, after an initial test phase, YouTube is offering music labels lump sums of cash to entice more artists to allow their songs to be used to train AI.

Meanwhile, OpenAI’s chief technology officer, Mira Murati, recently mused that some artistic jobs shouldn’t exist. This follows her confusion in a Wall Street Journal interview over what data OpenAI’s video-generating Sora model was trained on. But perhaps it is fake music made by computers that the world doesn’t have the bandwidth for. In addition to policing and enforcing copyright protections, artificially generated “music” simply shouldn’t be permitted to sonically masquerade as if it were made by humans.

A quick reaction that can slow the sounds of these strange tunes would be music to our ears.

