For the record: AI, creativity, and the future of music

8 Jul 2025 17:15h - 18:00h


Session at a glance

Summary

This discussion focused on the intersection of artificial intelligence and the music industry, exploring how AI can serve as a tool for good while protecting artists’ rights and interests. The session featured presentations and panel discussions with executives from Universal Music Group, including Chief Digital Officer Michael Nash and producer Don Was, alongside representatives from WIPO, IFPI, and Deezer.


Michael Nash emphasized that Universal Music Group’s AI strategy centers on artists, defending their rights while creating new creative and commercial opportunities. He argued that copyright protection is essential for innovation rather than being an obstacle, citing successful tech-media collaborations like Apple’s iTunes. The presentation showcased several AI applications in music, including the Beatles’ “Now and Then,” which used AI to extract John Lennon’s voice from an old cassette recording and won a Grammy Award. Universal has also developed Sound Therapy with Apple, using AI to integrate wellness benefits into music from major artists.


Don Was provided historical perspective, comparing current AI concerns to past technological disruptions like drum machines, noting that artists like Prince successfully incorporated new tools to create innovative sounds. The panel revealed significant data from Deezer showing that while AI-generated music comprises about 18-20% of uploads to their platform, it represents less than 0.5% of actual consumption, suggesting limited audience interest in purely AI-created content.


Consumer research presented indicated that while half of consumers are interested in AI in music, they primarily want utility improvements like better discovery and recommendations, with 70-75% emphasizing that “real artists matter most.” The discussion concluded with calls for transparent collaboration, fair legislation protecting artists’ rights, and continued exploration of AI’s potential in music medicine and therapeutic applications.


Key points

## Major Discussion Points:


– **AI as a creative tool rather than replacement**: The discussion emphasized that AI should serve as a tool to enhance artists’ creativity and expand their capabilities, similar to how synthesizers and drum machines were once revolutionary additions to music production, rather than replacing human artistry entirely.


– **Artist-centric approach to AI implementation**: Universal Music Group’s strategy centers on protecting artists’ rights and interests first, then building new creative and commercial opportunities from that foundation, ensuring artists remain at the center of AI-driven music innovation.


– **Copyright protection and ethical AI development**: The conversation highlighted the critical importance of respecting copyright laws and developing market-based solutions that license artists’ work properly, rather than allowing AI systems to freely use copyrighted material without permission or compensation.


– **Transparency and detection of AI-generated content**: Deezer’s experience with AI-generated music uploads (18% of platform uploads, but less than 0.5% of consumption) demonstrated the need for detection systems and transparent labeling so consumers can make informed choices about AI-generated versus human-created content.


– **Music’s therapeutic and wellness applications**: The discussion explored how AI can enhance music’s proven health benefits through innovations like sound therapy, combining scientific research with AI to create music specifically designed for wellness outcomes like better sleep and relaxation.


## Overall Purpose:


The discussion aimed to explore how the music industry can harness AI technology responsibly while protecting artists’ rights and maintaining the human element that makes music culturally significant. The goal was to present a framework for “AI for good” in music that benefits creators, consumers, and society.


## Overall Tone:


The tone was optimistic and collaborative throughout, with speakers presenting AI as an opportunity rather than a threat. The discussion maintained a constructive, forward-looking perspective that acknowledged challenges while focusing on solutions. There was a sense of industry unity around protecting artists while embracing innovation, and the tone remained consistently professional yet passionate about music’s cultural importance.


Speakers

– **LJ Rich**: Moderator, has worked with music and technology since age 11, used AI to make the Beatles sing in 2020


– **Don Was**: Musician, record producer, and president of Blue Note Records (part of Universal Music Group)


– **Michael Nash**: Executive Vice President and Chief Digital Officer at Universal Music Group, leads digital strategy


– **Victoria Oakley**: CEO of the International Federation of the Phonographic Industry (IFPI)


– **Michele Woods**: Director of Copyright Law Division at World Intellectual Property Organization (WIPO)


– **Alexis Lanternier**: CEO of Deezer (streaming platform)


– **Session video 1**: Video content about Sound Therapy


– **Session video 2**: Video content explaining the AI breakdown for Keith Urban’s “Straight Lines” music video


– **Session video 3**: Video content showing sneak preview of Keith Urban’s “Straight Lines” music video


**Additional speakers:**


– **Brandon Andrews**: Takes over to start the AI Film Festival programming (mentioned at the end)


Full session report

# AI and Music Industry: Navigating Innovation Whilst Protecting Artists’ Rights


## Executive Summary


This panel discussion brought together key stakeholders from across the music industry to explore how artificial intelligence can serve as a force for good whilst protecting artists’ rights. Moderated by LJ Rich, who shared her experience using AI in 2020 to make the Beatles sing (taking 10 hours for one minute of music), the session featured Don Was (musician, producer, and president of Blue Note Records), Michael Nash (Executive Vice President and Chief Digital Officer at Universal Music Group), Victoria Oakley (CEO of IFPI), and Michele Woods (Director of the Copyright Law Division at WIPO), with Alexis Lanternier (CEO of Deezer) joining remotely.


The conversation revealed consensus around treating AI as a creative tool rather than a replacement for human artistry, with speakers emphasising copyright protection, transparency, and collaborative approaches to industry challenges.


## Opening Context and Personal Perspectives


### LJ Rich’s Introduction


LJ Rich opened by acknowledging the complexity of AI in music, noting her own experience with the technology and setting the stage for a discussion about both opportunities and challenges in the industry.


### Don Was’s Foundational Story


Don Was provided crucial context by sharing his personal journey with music, describing how discovering Joe Henderson’s “Mode for Joe” at age 14 completely changed his mood and perspective. This story illustrated music’s fundamental power to affect human emotion, which he positioned as central to understanding AI’s role in the industry.


Was drew historical parallels between current AI concerns and past technological disruptions, specifically recounting how Prince innovatively used the LinnDrum drum machine: “Prince made much better use of it than I did. That’s a drum, a beat and a sound that no human being ever would have gotten. It was mechanised and he messed with the tuning. It was brilliant.” This example demonstrated how artists have consistently transformed potentially threatening technologies into creative tools.


## Universal Music Group’s Strategic Approach


### Michael Nash’s Comprehensive Vision


Despite some technical difficulties with his presentation clicker, Michael Nash outlined Universal’s AI strategy, which fundamentally centres on defending artists’ rights whilst creating new opportunities. He challenged common narratives about copyright, arguing that “copyright is not the enemy of innovation. Quite the opposite. Media tech convergence predicated on respect for copyright has produced a multi-trillion euro economy.”


### Practical AI Applications


Nash showcased several successful AI implementations:


**The Beatles’ “Now and Then”**: Using “next generation AI source separation technology,” the team extracted John Lennon’s voice from an old cassette recording. The resulting song won a Grammy Award for best rock performance, demonstrating how AI can honour rather than replace human creativity.


**Sound Therapy with Apple**: This collaboration uses “scientifically calibrated audio supplements” integrated into music from major artists, representing a breakthrough application in music medicine that combines artistic expression with wellness benefits.


**Keith Urban’s “Straight Lines” Music Video**: Created using Moon Valley Gen AI, this project included a technical breakdown video showing the production process, illustrating transparency in AI-assisted creative work.


### Consumer Research Findings


Nash presented consumer research showing that whilst half of consumers express interest in AI applications in music, they primarily seek utility improvements such as better discovery and recommendations. Crucially, 70-75% of respondents emphasised that “real artists matter most,” indicating that human connection remains paramount.


### Human Artistry Campaign


Nash referenced Universal’s involvement in the Human Artistry Campaign, which has “more than 170 supporting organizations from over 40 countries,” demonstrating broad industry support for artist-centric AI development.


## Streaming Platform Realities


### Deezer’s Data and Challenges


Alexis Lanternier, participating remotely due to train issues, provided crucial data about AI-generated content on streaming platforms. He revealed that whilst AI-generated music comprises approximately 18 percent of uploads to Deezer, it represents less than 0.5% of actual consumption, suggesting audiences naturally prefer human artistic expression.


Lanternier highlighted practical challenges platforms face, noting the significant volume of AI-generated uploads that requires sophisticated detection tools. He added that rights-holder fraud accounts for roughly 7-8% of the royalties distributed each year, underscoring the need for robust content verification.


His platform’s response includes transparency measures such as labelling AI-generated content and developing fair remuneration systems collaboratively across the industry.


## International Policy Perspectives


### IFPI’s Balanced Approach


Victoria Oakley advocated for balanced approaches that protect artists whilst enabling innovation. She emphasised that traditional copyright principles can be applied to generative AI if transparency and respect for existing rules are maintained, arguing that focus should start with creativity and artists before moving to technology.


### WIPO’s Global Coordination


Michele Woods reinforced the importance of maintaining human-centric approaches to AI development, emphasising that AI and intellectual property discussions must remain human-centric, treating AI as a tool for creators. She advocated for global multilateral conversations and policy development to help all member states understand and regulate AI technology effectively.


## Key Areas of Agreement


### AI as Creative Enhancement


All speakers demonstrated consensus that AI should enhance rather than replace human creativity. This agreement transcended organisational boundaries, with record label executives, streaming platform leaders, and international policy makers emphasising AI’s value in amplifying human artistic expression.


### Copyright Protection as Foundation


Speakers agreed that intellectual property protection is fundamental to sustainable innovation. Nash’s argument about copyright enabling rather than hindering innovation resonated across the panel, with international representatives affirming that existing legal frameworks can accommodate AI development when properly applied.


### Transparency Requirements


Participants agreed that transparency in AI-generated content is essential for both consumer awareness and regulatory compliance. This reflects recognition that consumers deserve to make informed choices about AI-generated versus human-created content.


### Collaborative Industry Response


All participants emphasised that addressing AI challenges requires collaborative efforts across the industry, with no single stakeholder able to address AI’s implications alone.


## Audience Engagement


During the session, LJ Rich engaged the audience by asking for a show of hands regarding preferences for AI disclosure in music, demonstrating active participation in the discussion about transparency requirements.


## Ongoing Challenges


### Technical Detection and Attribution


The industry continues to develop robust detection tools for AI-generated content, with platforms like Deezer implementing systems to identify fully AI-created content and combat potential rights holder fraud.


### Fair Remuneration Systems


Developing appropriate compensation frameworks for AI-generated or AI-assisted content remains an ongoing challenge requiring continued industry collaboration and potentially new economic models.


### Global Policy Coordination


The practical challenges of developing consistent AI governance across different jurisdictions with varying legal frameworks require continued international cooperation through organisations like WIPO and IFPI.


## Conclusion


The discussion demonstrated broad industry consensus around artist-centric approaches to AI development that respect existing rights frameworks whilst embracing innovation. The combination of empirical data showing limited consumer interest in purely AI-generated content, historical perspective on technological adoption, and practical examples of successful AI applications provides guidance for balanced development.


Key principles emerging from the discussion include respect for copyright, transparency in AI use, artist-centric development, and collaborative governance. As LJ Rich noted in transitioning to evening programming, these conversations represent ongoing industry efforts to navigate the complex intersection of artificial intelligence and human creativity in music.


The session concluded with acknowledgments of the technical production team and a transition to continued programming, reflecting the ongoing nature of these industry discussions and the need for continued collaboration across all stakeholders.


Session transcript

LJ Rich: to take a deep breath and get 10 seconds of well-deserved brain relaxation. In fact, music has long been used to change our state, from the relaxing ambient music that helps us sleep to the hype songs athletes use to get ready to perform at the top of their game. And I’ve worked with music and technology since I was 11 years old. In 2020, I used AI to make the Beatles sing, and it took 10 hours to make one minute’s worth of music. And now we’re at a point where one well-crafted sentence can turn everybody into a composer. In fact, all the music that you would have heard at the registration was mine. It was composed, or should that be compiled, by me, myself, and AI. So if AI is transforming the future ecosystem of music creation, music production, and music consumption, how do we stay ahead of this technology? How do we make sure humans stay in the loop? And even if you don’t work with music or in the music industry, music has historically paved the way for innovation and helped us understand the behaviors that are going to be displayed by humanity as part of it. So our next session is epic. It features some universal names in music, and we’re going to set the scene ahead of a fantastic presentation and panel. And might I say the moderator is kind of awesome as well, because the moderator is, in fact, me. And I think at the same time, our fantastic stage staff have done a brilliant job of getting— These chairs are very heavy, by the way, so they’re doing brilliantly. Thank you very much for resetting the stage. And just ahead of our amazing presentation and panel, we’re going to have an introduction from renowned music producer and artist, and this is where you think I was right to stay sitting down, everybody. We’re going to have Don Was join the stage, but before that, let’s watch the video. Thank you. ♪


Don Was: Hello, everyone. Well, I’m Don Was. I’m a musician, a record producer, and the president of the world’s preeminent jazz label, Blue Note Records, which is part of the Universal Music Group. Back in Detroit, when I was 14 years old in 1966, my mom made me go out and run errands with her one Sunday. And, you know, I wanted to be hanging out with my friends. And I was just being like a really grumpy little kid. My mom was disgusted. She finally just stopped dragging me into stores. She just left me in the car with the engine running so I could listen to the radio. So I was playing with the dial, and I stumbled upon the Detroit jazz station, WCHD. The DJ was playing a song, which I later learned was a Blue Note track by Joe Henderson called Mode for Joe. And I landed on the station just as the saxophone solo was beginning. And man, I never heard any music like this before in my life. He was emitting these kind of anguished cries from the sax. Sounded like, da-da-da, da-da-da, da-da-da, right? And it caught my ears immediately because it matched the profound anguish I was experiencing being forced to run errands with my mom all day. But about 20 seconds into the solo, the drummer, Joe Chambers, he starts grooving on the ride cymbal. And eventually, Joe Henderson calms down and joins him and starts swinging like crazy. And in that moment, the music delivered a very clear nonverbal message to me. Joe Henderson was saying, Don, you got to do what I just did and groove in the face of adversity. When that registered, my mood turned around 180 degrees. I wasn’t pissed off anymore. By the time my mom got back in the car, I was a nice kid again. And it was really strong testimony to the power of music. And it hit me right there that creating music that could help the listeners make sense out of the chaos and the confusion of life was an incredibly worthwhile calling. But it’s not an easy one. Like all the artists who are on our label, Joe Henderson studied from childhood to university. And he worked hard, practicing three to six hours a day for his entire life just to develop and maintain the knowledge and keep up the chops to get under people’s skin and impact their lives with his music. And the learning and the practicing never ends because the way we make music is constantly evolving. Change is good. Change is essential. The goal isn’t to fight change. It’s to guide it and to make sure that new modes of expression serve the musicians, not replace them. So right now, one of the biggest changes we’re facing is the introduction of artificial intelligence. Some of us are excited. Some are nervous. And everyone’s got questions. What does it mean for creativity? What does it mean for originality? What does it mean for profound artistic communication? Well, at UMG, we’re very lucky to have someone leading that conversation with clarity and with purpose. And that’s Michael Nash. He’s our executive vice president and chief digital officer. He’s the guy who oversees our global digital strategy, working with platforms and working with partners and policymakers to make sure our artists can thrive in these evolving times. But more than that, Michael understands that creativity isn’t just data. It’s emotion. It’s connection. It’s humanity. He sees AI not as a threat to humanity, but as a tool that, if used the right way, can support artists, protect their rights, and even help spread their voices further. 
So today, he’s going to walk us through what that future music looks like, how AI’s use in music leads to something meaningful and ethical and good, how AI, when done right, enhances creativity, and how UMG leads the discussion in AI for good in music. Please join me in welcoming my good friend and colleague, Michael Nash.


Michael Nash: All right, brother. Good evening. And you know it’s been a long day at a conference when you have somebody leading off a presentation, and we’re not done yet with good evening. That’s an incredible intro from Don Was. And I feel like the room needs to acknowledge the great Don Was in our presence. So how about another round of applause for Don Was, please? So I am indeed the Chief Digital Officer, Universal Music Group. Looks like I’m wearing the same glasses I am in that photo, so easy to recognize. I’ve led digital strategy at Universal for the last 10 years. It’s an honor to have this opportunity to highlight why music matters greatly, we think, to this dialogue about AI for good. To better understand music’s importance to this conversation, let’s start with the importance of music. You will not be surprised to hear an executive from Universal Music say, as a starting point, music is universal. Music engages a massive global audience. It is the soundtrack of life on this planet. And if I get the clicker to cooperate here with me on the next slide. Chief Digital Officer cannot operate the clicker properly there. That’s a pretty big letdown from Don’s introduction here. Let me see if we can recover here. Music engages a massive global audience. It’s the soundtrack of life on this planet. Half the world’s population is actively engaged. And in fact, music is the most popular form of entertainment in dozens of developed countries where up to 90% of adults listen to music on a monthly basis with internet enabled fans spending over 20 hours a week on average listening to music. Even more important than those engagement stats, however, music drives culture worldwide as it has for centuries. I’m still having a little clicker difficulty here, so bear with me. This is that human moment where you know that technology is not necessarily always going to be in charge here and human intervention is required to properly operate the equipment. Thank you. So let me recover the flow here. Comprehensive scientific research has definitively concluded that music is integral to every society ever studied and it pervades social life the world over, proving out the immortal words of Longfellow that music is the universal language of mankind. But even more important than that, let me just reorient myself to where we are here. Music is not just a universal language. It is a universal force for good, bringing connection, expression, communion, and enrichment to lives in every corner of the world. Beyond music’s fundamental role in culture, we are gaining a better understanding of how music promotes health and well-being. Neuroscientific revelations have established how music significantly influences cognitive, emotional, and physical functions of the brain. Backed by this growing body of research, music is now being deployed in advanced therapeutic and breakthrough medical applications. And this builds on what we’ve always known, that music is a powerful force for good in the minds, hearts, and lives of people everywhere. So with this understanding of music’s greater purpose, let’s move into a conversation about technological disruption. For well over a century, since the advent of sound reproduction and radio, music has been situated on the cutting edge of tech transformation. Just one of many examples, this one quite relevant to the implications of AI, Peter Gabriel’s pioneering incorporation of digital synthesizers in the 1980s. Everyone recalls that well, I’m sure. 
As the next major milestone in this journey of technological disruption and transformation, we believe that AI can be instrumental to the future of music. However, it’s critical to emphasize that AI will realize its potential to transform the future of music through the voices and visions of artists. Now, yes, of course, the scope of technological disruption may be unprecedented with AI, but that doesn’t change a fundamental truth. It’s artists who’ve always shaped global culture. And as this mosaic illustrates here, featuring many of our beloved artists that we represent. And that’s why universal strategy on AI is based on a simple philosophy. Center the conversation on artists. Defend their rights and interests. And from that foundation, forge new creative and commercial opportunities. To maximize creative collaboration with technology, you’ve got to start from a foundation that’s built on copyright and associated rights. Of course, it’s essential that we energetically defend our artists’ interests. You can’t forge new business models with non-rights. If you don’t claim a seat at the dinner table, you might wind up on the menu. Working from our philosophy, we believe that market-based solutions are the answer. We have enabled numerous entrepreneurs and we’re working with a wide range of major tech platforms in announced partnerships and with significant licensing deals in the works. I have a bone to pick here on a point of contention that populates the popular discourse. The anti-copyright premise of some AI industry commentators is deeply flawed. Copyright is not the enemy of innovation. Quite the opposite. Media tech convergence predicated on respect for copyright has produced a multi-trillion euro economy. And that started with Apple, marrying iPod with iTunes in 2003 as a critical early step, embracing licensed music to create the first trillion-dollar company. Jobs wrote the blueprint. Now Google, Amazon, Meta, and many other companies are following in various ways with their own music and content strategies. Tech collaboration with the creative community, respecting the value of artists’ work and harnessing their innovation has produced enormous cultural and economic benefit. Now these lessons learned in the first quarter of the century are why we believe the future of AI music will be realized by unlocking new creative collaborations. The future of AI music will not be the regurgitation of derivative knock-offs of artists’ music to cannibalize their current marketplace. Who wants that? It will be activated by putting new tools in the hands of artists so they can expand the market and create new experiences for their fans in partnership with technologists. This is how AI will promote innovation and advanced culture. So let’s look at some examples of these collaborations and how they’re being fostered. An important foundation in successful collaboration is being promoted by creative community alliances, and Universal’s taken a very active role here. Just one of many examples, we helped launch the Human Artistry Campaign and articulate its principles. This global initiative was formed in 2023 to protect creators’ rights in the age of AI, and now has more than 170 supporting organizations from over 40 countries. We’re backing many such industry and creative community initiatives you see referenced here. I’m taking that advice to hydrate very seriously received earlier in the day here, so I’m going to hydrate, which will no doubt help with my enunciation. 
On the business development front, Universal’s forging the future of AI music with leading innovators. To highlight just a few of our active partnerships, SoundLabs, Hooky, and SuperTone are each employing voice clone AI technology, enabling artists to perform in multiple languages so their audiences can hear them singing in the fans’ native tongues for the first time. ClayVision’s developing exciting new models leveraging generative AI for hyper-personalization and deeper fan engagement. ProRata’s making important output attribution advancements to enable new gen AI business models. Now let’s look at some exemplary artist projects. We were thrilled to release Now and Then, the last song from the Beatles, written and sung by John Lennon almost 50 years ago. Using next generation AI source separation technology, Lennon’s voice was excavated from a cassette recording that had been completely unusable for decades, waiting for this moment in the evolution of AI to arrive. In February of this year, Now and Then won the Grammy for best rock performance, the first time any recording utilizing AI technology has been so honored in a major category. I know it’s true, it’s all because of you, and if I make it through, it’s all because of you. Thank you. We recently joined forces with Apple to introduce a new category called Sound Therapy, powered by a team of scientists, engineers, and producers at Solace, Universal’s music health division. This audio wellness innovation employs our own ethically developed, patented Gen AI system. It seamlessly integrates scientifically calibrated audio supplements into the recordings of some of our biggest artists, including Imagine Dragons, Katy Perry, and Kacey Musgraves. The result? Immersive, artist-approved remixes that deliver a range of wellness benefits to subscribers.


Session video 1: What is Sound Therapy? With guidance from hundreds of scientists and artists, sound therapy is related seamlessly with music that’s perfectly suited to your mood. Gamma frequencies to help you get in the zone, theta for relaxing, delta for better sleep. You can even dive into playlists curated specifically for each purpose. The best way to experience sound therapy is to find a quiet space, put on some headphones, and see how you feel.


Michael Nash: For the first time in my career, I think I can honestly say I hope my presentation has put you to sleep. Thank you for the kind chuckles. I wasn’t sure how that one was going to go, you know? So finally, I’m pleased to highlight a new music video project from our global superstar, Keith Urban. The very latest example of how our artists are harnessing their world-class creativity to engage with cutting-edge technology and map the future of AI music. This mesmerizing treatment of Keith’s hit, Straight Lines, is being created with Moon Valley Gen AI and other ethically trained tools. First, a short explanatory video that’s going to detail the production process.


Session video 2: This is the AI breakdown for Keith Urban’s Straight Lines. Like every project, we started with storyboards. We trained a custom AI model using photos and videos of Keith over the years, allowing us to insert him into any scene with control and consistency. For backgrounds, we used Marey with two to three key frames per shot to create smooth morphing transitions. In some cases, we enhanced live-action frames using Adobe Firefly to fine-tune details while keeping visual coherence. Car scenes were a mix of Marey prompts and AI-trained models. For complex shots, we trained a separate LoRA using on-set photos and a 3D scan of the car, which gave us the flexibility to rebuild and animate it like in the drifting sequence. For our transition shot, we used two stills to prompt the final result that you see. Our most complex shot involved chaining five frames in Marey and inserting a LoRA-generated still of Keith walking.


Michael Nash: So now, a sneak preview clip, and thanks to Keith Urban’s camp for allowing us to show it to this audience. Artists rarely like to release content before it’s completely done. A sneak preview clip from the nearly-completed, forthcoming Keith Urban music video, Straight Lines.


Session video 3: Finally leaving Hotel California, a couple runaways in a getaway car. Remember when it’s kinda high, where’s us, babe? I wanna feel that way, I wanna feel that way now. Let’s trade the past and the fading rubies. For the good life comin’ in the windshield, clear view. Anywhere I go, you wanna disappear too. I wanna go there too. I wanna go there with you. Let’s trade the line, baby. Hey, what you said, we’re never gonna look back now.


Michael Nash: Thank you. We’ll pass along that positive feedback. So before I go into the summary, I wanna call out the incredible technical staff here. I failed them at the point of attack on the clicker. That is user error 100%. The production of this event has been outstanding, and all the preparation that goes into it is incredible. And I just want to take a moment to ask the audience to give a round of applause to the amazing technical staff producing this event. So in summary, we believe that music is universal. We still believe that music is universal, that AI can be instrumental to our future, and that it’s the artists who will continue to shape global culture, channeling disruption into transformation as they have always done. In our artist-centric vision, AI innovation can drive music culture, and in so doing, generate even greater benefits to the quality of life on this planet. Thank you.


LJ Rich: Thank you very much, Michael. You’ll be happy to know that that clicker has been suitably punished. It’s been opened up and the batteries have been changed. So thank you for being excellent and understanding about it. And a fantastic presentation, don’t you think, ladies and gentlemen, with some exclusive footage as well. Really appreciate that being brought to the stage. And we’re going to bring Don Was back onto the stage as well, alongside a frankly awesome set of panelists. We have from World Intellectual Property Organization, WIPO Director of Copyright Law Division, Michele Woods, and the CEO of the International Federation of the Phonographic Industry. That’s the IFPI. That’s Victoria Oakley. And you’re all joining us in reality. Why don’t you come on in? Take a seat. Okay, I’ll sit here. Now, we’re also going to have a remote guest. So joining us remotely and sort of in reality, perhaps in latency, we can say we’ve got the CEO of Deezer. That’s Alexis Lanternier. Are you there? Yes, you have managed to make it. Thank you very much. We have some issues with trains, and we’re so glad that you could join us remotely. And we’re going to have a fantastic discussion. I think we should start because that was a great presentation. And so I just want to get a little bit. Yeah, it was good. So let’s get a bit of reaction, I think, from our fantastic panelists. What did you think? Did anything kind of stand out in particular for you?


Victoria Oakley: I’ll be brave and go first. You can be brave and go first. Yes, it’s a lovely crowd. I think there are lots of things about that that are incredibly impressive. It’s unusual to be in a conversation about AI and technology where we start with creativity and we start with the artist and we start with thinking about all of the brilliant things about music or any other art form, in fact, and only then move to talk about technology as the tool that’s enabling it. I find the conversation so often starts the other way around and all too often is a kind of divisive conversation, one versus the other. So it was just a total joy to hear Michael talk from the outset about the two in collaboration.


LJ Rich: Do you know what? I feel like now’s an opportunity to just really take the temperature of the audience, if you’ll indulge me. Let’s have a show of hands. I want to find out if you think it’s important to know if a creative work contains AI. So please, please can you raise your hands if you would prefer to know if a piece of music contains AI? Keep your hand up, please, if you prefer to know if a piece of music contains synthesizers like a drum machine or something like that. Okay. Yes, it’s interesting. Like a drum machine instead of a drummer. This is technology. It has always been upending things with music. Okay. So why is the conversation so nuanced? There’s some really interesting stuff going on here. I’m going to start with Alexis because you’re behind us at the back, and I feel like it’s worth mentioning Deezer is responding quite proactively to this. You’ve launched a synthetic music detector at the beginning of the year, so you’re really well-placed to give us an idea of the trends in this area.


Alexis Lanternier: Yeah, thank you. It feels a little bit weird to be standing like this in this area. I’m really sorry not to be all with you. We’d love to be there. Sadly, the train decided otherwise. So thank you so much for still having me. And thanks, Michael, for your introduction. I think it was also fascinating to see how you address this very complex question of AI that I think Deezer, for now, addresses in a bit more straightforward way because it’s probably a little bit easier for us. So just to give a little bit of the context for everyone, so we are a streaming platform, we have hundreds of millions of songs on the platform, millions of users, AI for us was an amazing tool to build the connection between hundreds of millions of songs and users and to give to our users the songs that they are most likely to like. The other thing that Deezer is doing is to fight against rights holder fraud, which is something that represents 7 to 8 percent, roughly, of the overall rights being distributed every year. We catch roughly 7 to 8 percent. So it’s billions of dollars every year that can be taken by fraudsters on streaming platforms. And so Deezer has been always very active. And what we’ve seen recently is hundreds of thousands of songs are coming on our platform. And we’re not very, you know, we’re a bit specific and we’re related to fraud. And what we identified was that actually they were coming from apps that are generating songs 100 percent with AI, you know, apps like Suno or Udio that just with a prompt, as you were mentioning at the beginning, you know, create a song in a few seconds, out of just a sentence. And some can be actually quite impressive, some not as much. But we are getting 180,000 every week right now. So this is like the angle we are seeing this AI generation of music. And so we started our work, which was to build a tool to be able to identify those songs. I mean, to the question you asked earlier, you know, do we want to know? The question is, can we know? So the first work, as we were used to work on AI quite a bit, we have been able to build this tool that tells us if a song is coming from those apps. So to your point, the ones that are 100 percent AI, the ones that are just created with a prompt. Second thing we did was to make sure we tell the world what’s happening. So a lot of music is coming onto our platform, not a lot is listened to, to be fair. And the third thing is to work on, you know, what do we do with this? Now that we have this information, that the information is shared with the music industry, with regulators, what can we do about it? So what we started to do first was to make sure it’s not included in recommendations, in the automated recommendations we do for users, we have excluded those songs. The second thing that we’ve done, and apparently I’m happy that we support what the crowd here is asking, because we have put a label on those songs so that people can know they are generated with AI. And then the future has to be discussed, you know, with a lot of stakeholders, obviously with Universal Music Group, and we are extremely excited, a bit concerned as well, to know exactly what the future holds. For now, what we’ve decided is transparency, making sure we’re getting rid of the fraud that is driven by those types of 100 percent AI-generated music. But then exactly, you know, what people want to listen to, how to treat that, how to remunerate it, I think is something that we should all discuss together.


LJ Rich: Yes, which is why we’re here, and transparency, these are the words that come up outside of the music industry in AI, and that’s why I feel like it’s such a powerful way to talk about these issues. And I think, Alexis, one of your press releases said that 18 percent of the music that’s being uploaded onto your platform is AI-generated, which is quite a large amount. I was going to ask you, Michele, you’ve got a unique viewpoint watching creativity unfold from so many countries. What trends are you noticing in and around copyright as it’s expanding to include AI works, or how are we seeing the shape of the future here?


Michele Woods: So we’ve been holding conversations on AI and intellectual property, trying to really share with all of our member states and all the stakeholders, because it’s really a much broader issue than government policy makers, what the trends are. Certainly with respect to generative AI, copyright is front and center, and so we’re really the UN agency that kind of brings that into the UN discussion, and more globally, into the multilateral discussion. And absolutely, we’ve heard, and in fact, our own approach is to focus on human-centric AI. AI as a tool for creators can be extremely positive, but in order for the creators to really realize the full potential of AI, something also has to be done about situations where their works are treated, you know, abusively, basically infringed. And so we’re seeing traditional copyright infringement, but in a different mode, and needing to work with member states, and of course, all the stakeholders are working on that. But overall, the message is keep it human-centric, keep AI as a tool, and with that, the traditional intersection between copyright and new technologies will work here again. And we heard a little bit of that, I think, Michael, in your message, that this is a continuum, and here, yes, this is maybe a scary big new technology, but at the end of the day, copyright can still function in order to help creators make a living and express themselves using that new technology.


LJ Rich: And it’s a great time to bring in Michael and Don as well, because obviously, audiences, have they changed? Are audiences still the same? What are audiences expecting? The new, the music therapy stuff from Apple is fabulously interesting, but I’d just like to hear a little bit more. Maybe we’ll go to you, Don, first, on how you think audiences are evolving.


Don Was: I don’t know that audiences have changed. They want music that gets under their skin, makes them feel something. Makes them kind of come to terms and understand the complexities of life. That’s what art does. Art is there to help you understand things that can’t be communicated through conversational language. There’s just limitations to it. All art goes deeper. So I don’t think the human need for art has changed at all. And honestly, I’m not that alarmed by a new technological innovation. In 1982, I went into a music store. I was already making records then, you know, and I went into a music store and a salesman took me in the back and he showed me the LinnDrum machine, the LinnDrum 1. And the LinnDrum was a radical departure, because up until that time, all other drum machines did electronic simulations of drum sounds, and you could tell immediately. The LinnDrum was the first one to use digital recordings of drums, and you triggered actual drum sounds. So every drummer in the world hated these things. And when I bought serial number 003, some people treated me like I just joined the Ku Klux Klan. It was not well received. Now, the thing is, at the same time, in that same music store, serial number 002 was bought by Prince a couple days before I was in there. And Prince made much better use of it than I did. If you listen to When Doves Cry, that’s just one example. That’s a drum, a beat and a sound that no human being ever would have gotten. No drummer, you could have had a drummer sit there for six weeks and they wouldn’t have played it just like that. It was mechanized and he messed with the tuning. It was brilliant. It was a great use of the machine. And I thought that was solid. That’s just another color on the palette from which you can paint.


LJ Rich: Yeah, maybe we talk about instead of audiences evolving, it’s the artists who have evolved. And so that then presents to you, Michael, quite an interesting scenario, which is all of these artists from Prince to the present day are going to be given these tools. Even Beethoven had a metronome and went, I’m going to make Moonlight Sonata with that. So what do we do with the artists that keep using this new technology before we understand how to work with it?


Michael Nash: Well, I think that the way that Don characterized it, the sort of artist centricity, the focus on the use of the tools creatively, we are working really closely with our operating leadership, like Don, and our creative leadership with our artist community to try to understand how they’re thinking about the tools, what they want to have access to, whether it’s in terms of ideation and development of music or the outputs and how they can be utilized. There are complex questions around copyright. Artists don’t want to invest tons of time in creating something that can’t be protected. And obviously, their supporters at Universal Music Group also want to take care to encourage a direction in terms of use of new creative tools that results in copyrightability in protection. But I think back to this question around what do audiences want, just to pull on a couple of points. What Alexi was talking about in terms of the really, really helpful data points that Deezer has provided and also the approach that they’ve taken to this new area. So maybe 20% of the uploads to the platform are pure AI generation. I believe that Deezer publicly said less than half of 1% of the consumption is that content. So that content is under-indexing by volume, by a factor of 40 to 1. And because of a lot of protections that Deezer has provided in the artist-centric construct that we’ve put together with them, which they have now broadened and incorporated more support for, you’re not really seeing the royalty pool being diluted. So I think that the market is already speaking to the question of what’s the level of interest? Not really a lot. A novelty will come along. People are talking about Velvet Sundown, which is this fake AI artist confab. It’s maybe a conceptual project. There’s been a lot of publicity about that. Because of all the publicity, they have 1 million listeners on Spotify. That would not break them into the top 10,000 acts in terms of the size of monthly listening, because these are huge platforms that have tremendous volume. I promised you when we spoke preparing for the panel that I would provide some new consumer research. So our latest consumer research says about half of consumers are interested in AI in music, but that’s mostly in terms of the utility of the service. They want to have better discovery. They want to have better organization, recommendations, better interaction with the content. And of that group, we were encouraged to hear an articulation of what we would call a moral code. The vast majority, 70 to 75 percent, say real artists matter the most. It’s the story of the artists. It’s who they are. It’s the expression of their identity. That’s what I connect with. And those are among the consumers that say that they’re most interested in AI. So I think that the marketplace is confirming what Don said, which is that people are connecting through music, understanding the expression of the human experience, people connecting through that principle. That’s still the driving force.


LJ Rich: Yes, AI as a tool is certainly where we’re going. And Victoria, I’m aware of the time, so I’m going to ask you one of the probably final questions. Audience, it would be really fun to chat quite a long time for this, but I’m aware that we have some extra programming going. So I’m going to ask you, and then we’ll just do a little roundup question at the end. So I’d love to know how the IFPI is responding to AI-generated music. How do we set the infrastructure up so that everything works?


Victoria Oakley: So the infrastructure has to be designed to take advantage of the opportunity that Michael has been describing, which is how do we make sure that we have an operating environment where artists can use these tools, can benefit from them, can freely create in the ways that they always have, but now with this new additional kind of rocket booster power. At the same time, on the flip side, we have to ensure that the operating environment protects them and their work in the way that copyright legislation always has for years and years. We’re really good at copyright. We figured it out, right? Well done us, collectively. What we now have to do is make sure we can take those principles and those rules and apply them in a generative AI world. That’s not that hard to do if we go back to your point about transparency and find a way to respect those principles. But there is both opportunity and risk here, and we have to ensure that the infrastructure enables both the protective element and the open creative element.


LJ Rich: Brilliant. So we’re near the end of the panel. I’m going to ask each of our panelists, starting with you, Alexis, what would you like help with going forward into the future? And panelists, if you can make these answers nice and short, I and the production team would be very grateful. So what would you like help with in the future? Alexis, we’ll come to you first.


Alexis Lanternier: Lots of things, but to be short, I think the value that this brings is this connection between music and user. We are the ones who know what’s happening, so I think we should, and I want to continue to give that transparency, to understand what is actually happening, how many songs are coming, who is listening to what, what is actually interesting in the AI generation, and finally, work together as a group on how to make the remuneration fair.


LJ Rich: Brilliant. Thank you. That was really, really good. Do we have time to have a very quick wrap up from each of our panelists? A very quick one? Great. Thank you. So just a very quick line on what you are looking for, what would you like help with from our incredible audience here? So Michele, over to you.


Michele Woods: Sure. So what we’d like is for you to participate in our multilateral conversations on AI, in our new AI infrastructure interchange, participate, join the global discussion, and help to make all of the member states feel that they can take full advantage of this new technology and understand how to treat it in terms of policy.


LJ Rich: Brilliant. Thank you very much. Victoria, over to you.


Victoria Oakley: I would like all of you to help us with good legislation around the world that protects artists’ work in the first instance, but enables them to use this brilliant AI for good.


LJ Rich: Brilliant. And then Michael, I’m going to go to you next. What would you like help with?


Michael Nash: A very specific request. We would like to see deeper understanding of the application of AI to the field of music medicine. We believe that the combination of AI advancements and new neurological tech can promote a huge breakthrough in the application of music to very, very important opportunities in health and wellness and in medical applications as well.


LJ Rich: That’s very close to my heart. Thank you. And then the first word went to you, Don. The final word will also go to you. Well, what would you like help with?


Don Was: If any of you are developing something that will help me write a better song or be a more expressive musician, tell me about it.


LJ Rich: What a lovely way to end. Thank you very much to our incredible panelists. Wow. Weren’t you all amazing? Thank you. Thank you so much. Wow. OK, well, that concludes our panel. Thank you so much, audience. You’ve been amazing. But panelists, you have also been fantastic as well. That’s going to conclude our daytime programming. But do stick around because we have the AI Film Festival. So many more AI goodies to bring you as well tomorrow, including a deep dive into agentic AI and how to feel less uncertain about quantum computing. But tonight, it remains for me a pleasure to hand over to Brandon Andrews as we’re going to be starting our programming for tonight’s AI Film Festival.



LJ Rich

Speech speed

170 words per minute

Speech length

1441 words

Speech time

507 seconds

AI can transform music creation, production, and consumption, turning everyone into a composer with well-crafted prompts

Explanation

LJ Rich argues that AI has revolutionized music creation to the point where a single well-crafted sentence can enable anyone to compose music. She emphasizes how dramatically the technology has advanced and democratized music creation.


Evidence

In 2020, it took 10 hours to make one minute’s worth of music using AI to make the Beatles sing, but now one well-crafted sentence can turn everybody into a composer. All the music heard at registration was composed/compiled by her using AI.


Major discussion point

AI’s Role in Music Creation and Production


Topics

Legal and regulatory | Economic | Sociocultural



Don Was

Speech speed

140 words per minute

Speech length

1025 words

Speech time

438 seconds

AI should be viewed as a tool that enhances creativity rather than replaces artists, similar to how synthesizers and drum machines evolved

Explanation

Don Was draws parallels between current AI concerns and past technological innovations in music. He argues that new tools provide additional creative possibilities rather than threatening human artistry, using historical examples to support this perspective.


Evidence

The LinnDrum machine in 1982 was initially hated by drummers but Prince used it brilliantly in ‘When Doves Cry’ to create sounds no human drummer could achieve. It became ‘just another color on the palette from which you can paint.’


Major discussion point

Artist-Centric Approach to AI Development


Topics

Legal and regulatory | Sociocultural | Economic


Agreed with

– Michael Nash
– Victoria Oakley
– Michele Woods

Agreed on

AI should be treated as a tool to enhance human creativity rather than replace artists


Music has profound power to change human emotional states and help people make sense of life’s chaos

Explanation

Don Was shares a personal story about how jazz music transformed his mood and perspective as a teenager. He argues that music serves as a powerful tool for emotional regulation and understanding life’s complexities.


Evidence

Personal anecdote about hearing Joe Henderson’s ‘Mode for Joe’ on Detroit jazz station WCHD, which matched his anguish but then taught him to ‘groove in the face of adversity,’ completely changing his mood.


Major discussion point

Music’s Universal Impact and Therapeutic Applications


Topics

Sociocultural | Human rights


Artists need tools that help them write better songs and become more expressive musicians

Explanation

Don Was expresses openness to AI developments that can enhance artistic expression and songwriting capabilities. He focuses on the practical benefits AI could provide to working musicians and creators.


Major discussion point

Artist-Centric Approach to AI Development


Topics

Legal and regulatory | Sociocultural



Michael Nash

Speech speed

136 words per minute

Speech length

2293 words

Speech time

1007 seconds

AI enables new creative possibilities like voice cloning for multi-language performances and hyper-personalization for fan engagement

Explanation

Michael Nash outlines specific AI applications that Universal Music Group is implementing to expand artists’ reach and create new fan experiences. He emphasizes practical, artist-beneficial uses of AI technology.


Evidence

Partnerships with SoundLabs, Hooky, and SuperTone for voice clone AI enabling artists to perform in multiple languages; ClayVision developing generative AI for hyper-personalization; ProRata working on output attribution advancements.


Major discussion point

AI’s Role in Music Creation and Production


Topics

Legal and regulatory | Economic | Sociocultural


AI-generated music represents about 20% of uploads but less than 0.5% of actual consumption, showing limited audience interest

Explanation

Michael Nash presents data showing that while AI-generated music is being uploaded in significant quantities, actual listener engagement remains very low. This suggests that audiences still prefer human-created content.


Evidence

Deezer data showing AI content under-indexing by a factor of 40 to 1; consumer research showing 70-75% of AI-interested consumers say ‘real artists matter the most’; example of Velvet Sundown having 1 million listeners but not breaking top 10,000 acts.


Major discussion point

AI’s Role in Music Creation and Production


Topics

Economic | Sociocultural


Disagreed with

– Alexis Lanternier

Disagreed on

Approach to AI-generated content on streaming platforms


Universal’s AI strategy centers on defending artists’ rights and interests while forging new creative opportunities

Explanation

Michael Nash explains Universal’s approach to AI as fundamentally artist-centric, prioritizing protection of creator rights while enabling new business models. He emphasizes that this foundation is essential for sustainable AI development in music.


Evidence

Universal’s stated philosophy: ‘Center the conversation on artists. Defend their rights and interests. And from that foundation, forge new creative and commercial opportunities.’ Active role in the Human Artistry Campaign, which has 170+ supporting organizations from 40+ countries.


Major discussion point

Artist-Centric Approach to AI Development


Topics

Legal and regulatory | Human rights | Economic


Agreed with

– Don Was
– Victoria Oakley
– Michele Woods

Agreed on

AI should be treated as a tool to enhance human creativity rather than replace artists


Artists have always shaped global culture and will continue to do so by channeling AI disruption into transformation

Explanation

Michael Nash argues that despite technological disruption, artists remain the primary drivers of cultural change. He positions AI as another tool that artists will adapt and use to continue their cultural leadership role.


Evidence

Historical examples of artists leading technological adoption, such as Peter Gabriel’s pioneering use of digital synthesizers in the 1980s. Mosaic of Universal’s artists illustrating their cultural impact.


Major discussion point

Artist-Centric Approach to AI Development


Topics

Sociocultural | Legal and regulatory


Copyright protection is essential for AI innovation, not an obstacle – media tech convergence based on copyright respect has created a multi-trillion euro economy

Explanation

Michael Nash challenges the notion that copyright hinders AI development, arguing instead that respecting intellectual property rights has historically driven technological and economic growth. He positions copyright as a foundation for sustainable AI innovation.


Evidence

Apple’s marriage of iPod with iTunes in 2003 as blueprint for first trillion-dollar company; Google, Amazon, Meta following similar content strategies; tech collaboration with creative community producing enormous cultural and economic benefit.


Major discussion point

Copyright Protection and Legal Framework


Topics

Legal and regulatory | Economic


Agreed with

– Victoria Oakley
– Michele Woods

Agreed on

Copyright protection is fundamental to AI innovation, not an obstacle


Music is universal, engaging half the world’s population and serving as the soundtrack of life on the planet

Explanation

Michael Nash establishes music’s fundamental importance to human experience and culture worldwide. He uses this universality to argue for music’s central role in AI development discussions.


Evidence

Half the world’s population actively engaged; music is most popular entertainment form in dozens of developed countries; up to 90% of adults listen monthly; internet-enabled fans spend 20+ hours weekly listening; comprehensive scientific research shows music integral to every society studied.


Major discussion point

Music’s Universal Impact and Therapeutic Applications


Topics

Sociocultural | Human rights


Market-based solutions through partnerships with entrepreneurs and tech platforms are the answer to AI challenges

Explanation

Michael Nash advocates for collaborative, business-oriented approaches to AI development rather than regulatory restrictions. He emphasizes working with technology companies and startups to create mutually beneficial solutions.


Evidence

Universal has enabled numerous entrepreneurs and is working with major tech platforms in announced partnerships with significant licensing deals in development.


Major discussion point

Industry Collaboration and Future Vision


Topics

Economic | Legal and regulatory


Agreed with

– Alexis Lanternier
– Victoria Oakley
– Michele Woods

Agreed on

Industry collaboration is necessary for developing fair AI frameworks


AI combined with music can create breakthrough applications in health, wellness, and medical treatments

Explanation

Michael Nash highlights the therapeutic potential of AI-enhanced music, positioning this as a significant opportunity for beneficial AI applications. He calls for deeper exploration of music medicine applications.


Evidence

Sound Therapy collaboration with Apple using ethically developed, patented Gen AI system; integration of scientifically calibrated audio supplements into recordings by Imagine Dragons, Katy Perry, and Kacey Musgraves; neuroscientific research showing music’s influence on cognitive, emotional, and physical brain functions.


Major discussion point

Music’s Universal Impact and Therapeutic Applications


Topics

Development | Sociocultural


A

Alexis Lanternier

Speech speed

169 words per minute

Speech length

739 words

Speech time

261 seconds

Streaming platforms are receiving 180,000 AI-generated songs weekly, requiring detection tools to identify fully AI-created content

Explanation

Alexis Lanternier describes the massive influx of AI-generated music on streaming platforms and Deezer’s response through detection technology. He emphasizes the scale of the challenge and the need for technological solutions to manage it.


Evidence

180,000 AI-generated songs uploaded weekly to Deezer; songs coming from apps like Suno or Udio that create music from prompts in seconds; Deezer built detection tools to identify songs from these apps; 7-8% of rights distribution represents billions in potential fraud annually.


Major discussion point

AI’s Role in Music Creation and Production


Topics

Legal and regulatory | Economic | Cybersecurity


Disagreed with

– Michael Nash

Disagreed on

Approach to AI-generated content on streaming platforms


Transparency is crucial – platforms should label AI-generated content and exclude it from automated recommendations

Explanation

Alexis Lanternier advocates for clear disclosure of AI-generated content to users and explains Deezer’s approach to handling such content. He emphasizes transparency as a key principle for platform responsibility.


Evidence

Deezer has put labels on AI-generated songs so people know they are AI-created; excluded AI-generated songs from automated recommendations; working to ensure transparency about what’s happening with AI music uploads and consumption.


Major discussion point

Transparency and Detection in AI Music


Topics

Legal and regulatory | Human rights | Economic


Agreed with

– Victoria Oakley
– LJ Rich

Agreed on

Transparency in AI-generated content is essential


Disagreed with

– Michael Nash

Disagreed on

Approach to AI-generated content on streaming platforms


Fair remuneration systems need to be developed collaboratively across the industry

Explanation

Alexis Lanternier identifies the need for industry-wide collaboration to determine how AI-generated content should be monetized and how revenues should be distributed. He emphasizes this as a collective challenge requiring stakeholder cooperation.


Evidence

Need to work with stakeholders like Universal Music Group to discuss the future; questions about how to treat and remunerate AI-generated content need to be discussed together as an industry.


Major discussion point

Industry Collaboration and Future Vision


Topics

Economic | Legal and regulatory


Agreed with

– Michael Nash
– Victoria Oakley
– Michele Woods

Agreed on

Industry collaboration is necessary for developing fair AI frameworks


V

Victoria Oakley

Speech speed

206 words per minute

Speech length

347 words

Speech time

101 seconds

The focus should start with creativity and artists, then move to technology as an enabling tool, rather than the reverse

Explanation

Victoria Oakley praises the artist-centric approach to AI discussions, contrasting it with technology-first conversations that often become divisive. She advocates for prioritizing creative considerations over technological capabilities.


Evidence

Observation that the conversation unusually started with creativity and artists before moving to technology as a tool, rather than the typical divisive ‘one versus the other’ approach.


Major discussion point

Artist-Centric Approach to AI Development


Topics

Sociocultural | Legal and regulatory


Agreed with

– Don Was
– Michael Nash
– Michele Woods

Agreed on

AI should be treated as a tool to enhance human creativity rather than replace artists


Traditional copyright principles can be applied to generative AI if transparency and respect for existing rules are maintained

Explanation

Victoria Oakley argues that existing copyright frameworks are sufficient for the AI era if properly applied with transparency. She emphasizes building on established legal principles rather than creating entirely new systems.


Evidence

Statement that ‘We’re really good at copyright. We figured it out’ and that the challenge is applying those principles and rules in a generative AI world, which ‘is not that hard to do’ with transparency and respect for principles.


Major discussion point

Copyright Protection and Legal Framework


Topics

Legal and regulatory | Human rights


Agreed with

– Michael Nash
– Michele Woods

Agreed on

Copyright protection is fundamental to AI innovation, not an obstacle


Good legislation is needed worldwide to protect artists’ work while enabling them to use AI tools effectively

Explanation

Victoria Oakley calls for balanced legislation that provides both protection for creators and freedom to innovate with AI tools. She emphasizes the need for laws that enable rather than restrict beneficial AI use.


Major discussion point

Copyright Protection and Legal Framework


Topics

Legal and regulatory | Human rights


The infrastructure must enable both creative opportunities and protective elements for artists

Explanation

Victoria Oakley argues for a balanced approach to AI infrastructure that simultaneously protects artists’ rights and enables creative innovation. She emphasizes the need for systems that serve both protective and enabling functions.


Evidence

Need for operating environment where artists can use AI tools and benefit from them while being protected by copyright legislation; infrastructure must enable both protective and open creative elements.


Major discussion point

Industry Collaboration and Future Vision


Topics

Legal and regulatory | Infrastructure | Economic


Agreed with

– Michael Nash
– Alexis Lanternier
– Michele Woods

Agreed on

Industry collaboration is necessary for developing fair AI frameworks


M

Michele Woods

Speech speed

125 words per minute

Speech length

304 words

Speech time

145 seconds

AI and intellectual property discussions must remain human-centric, treating AI as a tool for creators

Explanation

Michele Woods emphasizes WIPO’s approach to AI policy development, focusing on human creators rather than technology-first solutions. She argues that maintaining human-centricity is essential for beneficial AI development.


Evidence

WIPO’s approach focuses on human-centric AI; AI as a tool for creators can be extremely positive; traditional intersection between copyright and new technologies will work if AI remains a tool.


Major discussion point

Copyright Protection and Legal Framework


Topics

Legal and regulatory | Human rights | Development


Agreed with

– Michael Nash
– Victoria Oakley

Agreed on

Copyright protection is fundamental to AI innovation, not an obstacle


Global multilateral conversations and policy development are needed to help all member states understand and regulate AI technology

Explanation

Michele Woods describes WIPO’s role in facilitating international discussions on AI and intellectual property. She emphasizes the need for coordinated global approaches to AI governance and policy development.


Evidence

WIPO holds conversations on AI and intellectual property with member states and stakeholders; serves as UN agency bringing copyright into multilateral discussions; has new AI infrastructure interchange for global participation.


Major discussion point

Transparency and Detection in AI Music


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Michael Nash
– Alexis Lanternier
– Victoria Oakley

Agreed on

Industry collaboration is necessary for developing fair AI frameworks


S

Session video 1

Speech speed

104 words per minute

Speech length

96 words

Speech time

55 seconds

Sound therapy using AI can integrate scientifically calibrated audio supplements into artist recordings for wellness benefits

Explanation

The video explains how AI-powered sound therapy works by seamlessly integrating specific audio frequencies into music for therapeutic purposes. It describes the scientific approach to combining entertainment with wellness benefits.


Evidence

Guidance from hundreds of scientists and artists; gamma frequencies for focus, theta for relaxing, delta for better sleep; curated playlists for specific purposes; recommendation to use headphones in quiet spaces.


Major discussion point

Music’s Universal Impact and Therapeutic Applications


Topics

Development | Sociocultural | Legal and regulatory


S

Session video 2

Speech speed

142 words per minute

Speech length

157 words

Speech time

66 seconds

AI video production requires custom model training and multiple specialized tools to create consistent, high-quality content

Explanation

The video demonstrates that creating AI-generated music videos involves a complex multi-step process requiring custom AI model training, multiple software tools, and careful coordination. It shows that effective AI video production is not simply about using one tool but orchestrating various AI technologies together.


Evidence

Started with storyboards; trained a custom AI model using photos and videos of Keith Urban; used Mary with keyframes for smooth morphing transitions; enhanced live-action frames using Adobe Firefly; trained a separate LoRA using on-set photos and a 3D car scan; chained five frames in Mary for complex shots


Major discussion point

AI’s Role in Music Creation and Production


Topics

Legal and regulatory | Economic | Sociocultural


S

Session video 3

Speech speed

124 words per minute

Speech length

81 words

Speech time

39 seconds

AI-generated music videos can achieve professional quality and artistic vision when properly executed

Explanation

The video clip demonstrates that AI tools can produce polished, commercially viable music video content that maintains artistic integrity and professional production values. It shows the practical output of sophisticated AI video production techniques in a real commercial context.


Evidence

High-quality music video footage for Keith Urban’s ‘Straight Lines’ showing seamless integration of AI-generated visuals with professional music production; smooth transitions, consistent character representation, and cinematic quality


Major discussion point

AI’s Role in Music Creation and Production


Topics

Economic | Sociocultural | Legal and regulatory


Agreements

Agreement points

AI should be treated as a tool to enhance human creativity rather than replace artists

Speakers

– Don Was
– Michael Nash
– Victoria Oakley
– Michele Woods

Arguments

AI should be viewed as a tool that enhances creativity rather than replaces artists, similar to how synthesizers and drum machines evolved


Universal’s AI strategy centers on defending artists’ rights and interests while forging new creative opportunities


The focus should start with creativity and artists, then move to technology as an enabling tool, rather than the reverse


AI and intellectual property discussions must remain human-centric, treating AI as a tool for creators


Summary

All speakers agree that AI should serve as a creative tool that empowers artists rather than replacing them, emphasizing the importance of maintaining human-centric approaches to AI development in music


Topics

Legal and regulatory | Sociocultural | Human rights


Transparency in AI-generated content is essential

Speakers

– Alexis Lanternier
– Victoria Oakley
– LJ Rich

Arguments

Transparency is crucial – platforms should label AI-generated content and exclude it from automated recommendations


Traditional copyright principles can be applied to generative AI if transparency and respect for existing rules are maintained


Summary

Speakers agree that clear labeling and transparency about AI-generated content is fundamental for both consumer awareness and regulatory compliance


Topics

Legal and regulatory | Human rights | Economic


Copyright protection is fundamental to AI innovation, not an obstacle

Speakers

– Michael Nash
– Victoria Oakley
– Michele Woods

Arguments

Copyright protection is essential for AI innovation, not an obstacle – media tech convergence based on copyright respect has created a multi-trillion euro economy


Traditional copyright principles can be applied to generative AI if transparency and respect for existing rules are maintained


AI and intellectual property discussions must remain human-centric, treating AI as a tool for creators


Summary

All speakers agree that existing copyright frameworks provide a solid foundation for AI development and that respecting intellectual property rights enables rather than hinders innovation


Topics

Legal and regulatory | Economic | Human rights


Industry collaboration is necessary for developing fair AI frameworks

Speakers

– Michael Nash
– Alexis Lanternier
– Victoria Oakley
– Michele Woods

Arguments

Market-based solutions through partnerships with entrepreneurs and tech platforms are the answer to AI challenges


Fair remuneration systems need to be developed collaboratively across the industry


The infrastructure must enable both creative opportunities and protective elements for artists


Global multilateral conversations and policy development are needed to help all member states understand and regulate AI technology


Summary

Speakers unanimously agree that addressing AI challenges requires collaborative efforts across the industry, including platforms, labels, artists, and policymakers working together


Topics

Economic | Legal and regulatory | Infrastructure


Similar viewpoints

Both speakers view AI as part of a historical continuum of technological innovation in music, emphasizing that artists have always adapted new tools to advance their creative expression

Speakers

– Don Was
– Michael Nash

Arguments

AI should be viewed as a tool that enhances creativity rather than replaces artists, similar to how synthesizers and drum machines evolved


Artists have always shaped global culture and will continue to do so by channeling AI disruption into transformation


Topics

Sociocultural | Legal and regulatory


Both speakers emphasize music’s fundamental role in human experience and its therapeutic/transformative power, which they use to justify its central importance in AI development discussions

Speakers

– Michael Nash
– Don Was

Arguments

Music is universal, engaging half the world’s population and serving as the soundtrack of life on the planet


Music has profound power to change human emotional states and help people make sense of life’s chaos


Topics

Sociocultural | Human rights


Both speakers from international organizations emphasize that existing legal frameworks can accommodate AI development if properly applied with human-centric principles

Speakers

– Victoria Oakley
– Michele Woods

Arguments

Traditional copyright principles can be applied to generative AI if transparency and respect for existing rules are maintained


AI and intellectual property discussions must remain human-centric, treating AI as a tool for creators


Topics

Legal and regulatory | Human rights


Unexpected consensus

Limited consumer interest in purely AI-generated music

Speakers

– Michael Nash
– Alexis Lanternier

Arguments

AI-generated music represents about 20% of uploads but less than 0.5% of actual consumption, showing limited audience interest


Streaming platforms are receiving 180,000 AI-generated songs weekly, requiring detection tools to identify fully AI-created content


Explanation

Despite concerns about AI flooding the market, both industry executives present data showing that while AI-generated content is being uploaded in massive quantities, actual consumer engagement remains extremely low, suggesting natural market resistance to purely artificial content


Topics

Economic | Sociocultural


AI’s potential in music therapy and medical applications

Speakers

– Michael Nash
– Don Was
– Session video 1

Arguments

AI combined with music can create breakthrough applications in health, wellness, and medical treatments


Music has profound power to change human emotional states and help people make sense of life’s chaos


Sound therapy using AI can integrate scientifically calibrated audio supplements into artist recordings for wellness benefits


Explanation

Unexpectedly, speakers found strong consensus around AI’s therapeutic applications in music, moving beyond traditional entertainment uses to medical and wellness applications, suggesting a new frontier for beneficial AI development


Topics

Development | Sociocultural


Overall assessment

Summary

The discussion revealed remarkable consensus among speakers from diverse backgrounds (record labels, streaming platforms, international organizations) on key principles: AI as a creative tool rather than replacement, importance of copyright protection, need for transparency, and value of industry collaboration


Consensus level

High level of consensus with strong implications for unified industry approach to AI governance. The agreement suggests that despite different organizational perspectives, there is a shared vision for human-centric AI development in music that balances innovation with creator protection. This consensus could facilitate more coordinated policy development and industry standards.


Differences

Different viewpoints

Approach to AI-generated content on streaming platforms

Speakers

– Alexis Lanternier
– Michael Nash

Arguments

Streaming platforms are receiving 180,000 AI-generated songs weekly, requiring detection tools to identify fully AI-created content


Transparency is crucial – platforms should label AI-generated content and exclude it from automated recommendations


AI-generated music represents about 20% of uploads but less than 0.5% of actual consumption, showing limited audience interest


Summary

Alexis focuses on the massive scale of AI uploads requiring active detection and exclusion from recommendations, while Michael emphasizes that the low consumption rates suggest the market is naturally filtering out AI content without heavy intervention


Topics

Legal and regulatory | Economic | Cybersecurity


Unexpected differences

Significance of AI-generated music volume versus consumption

Speakers

– Alexis Lanternier
– Michael Nash

Arguments

Streaming platforms are receiving 180,000 AI-generated songs weekly, requiring detection tools to identify fully AI-created content


AI-generated music represents about 20% of uploads but less than 0.5% of actual consumption, showing limited audience interest


Explanation

This disagreement is unexpected because both speakers are presenting data from the same platform (Deezer) but drawing different conclusions. Alexis treats the high upload volume as a significant problem requiring active intervention, while Michael uses the low consumption data to suggest the issue is self-regulating through market forces


Topics

Economic | Legal and regulatory


Overall assessment

Summary

The discussion shows remarkably high consensus on fundamental principles (artist-centricity, human-centric AI, copyright protection) with disagreements primarily focused on implementation approaches and the urgency of intervention needed for AI-generated content management


Disagreement level

Low to moderate disagreement level. The speakers share common goals and values but differ on tactical approaches. This suggests a mature discussion where stakeholders can work together despite different operational perspectives. The main implication is that while there’s broad agreement on direction, coordination will be needed on specific implementation strategies across platforms, legal frameworks, and international policy development



Takeaways

Key takeaways

AI should be treated as a creative tool that enhances rather than replaces human artists, similar to how synthesizers and drum machines evolved music production


An artist-centric approach to AI development is essential, focusing on defending artists’ rights while creating new creative and commercial opportunities


Copyright protection is fundamental to AI innovation in music, not an obstacle – respecting intellectual property has historically driven successful tech-creative industry partnerships


Transparency in AI-generated content is crucial, with platforms needing to label AI-created music and provide detection tools


Market data shows limited consumer interest in purely AI-generated music (about 20% of uploads but <0.5% of consumption), indicating audiences still prefer human artistic expression


Music’s universal impact and therapeutic applications present significant opportunities when combined with AI technology


Collaboration between tech platforms, record labels, artists, and policymakers is essential for developing fair and effective AI music frameworks


Resolutions and action items

Deezer will continue providing transparency by labeling AI-generated songs and excluding them from automated recommendations


Universal Music Group will continue forging partnerships with AI innovators while defending artist rights through licensing deals


WIPO will facilitate multilateral conversations on AI and intellectual property policy development


IFPI will work toward developing good legislation worldwide that protects artists while enabling AI tool usage


Industry stakeholders will collaborate on developing fair remuneration systems for AI-generated content


Continued research and development in music medicine applications combining AI and neurological technology


Participation in global policy discussions to ensure all member states can benefit from AI technology


Unresolved issues

How to establish fair remuneration systems for AI-generated or AI-assisted music content


What constitutes appropriate disclosure requirements for AI use in music creation


How to balance artist protection with innovation and accessibility of AI tools


Specific policy frameworks needed for different countries and jurisdictions regarding AI music regulation


Long-term implications of AI on music industry employment and traditional creative processes


Technical standards for AI detection and content attribution across different platforms


How to handle hybrid human-AI collaborations in terms of copyright and ownership


Suggested compromises

Implementing transparency measures (labeling AI content) while allowing AI-generated music to exist on platforms


Excluding purely AI-generated content from automated recommendations while not banning it entirely


Focusing on market-based solutions through licensing and partnerships rather than restrictive regulations


Treating AI as a creative tool similar to previous technological innovations (synthesizers, drum machines) rather than a fundamental threat


Balancing artist protection with enabling creative experimentation through proper licensing frameworks


Developing detection tools for fraud prevention while supporting legitimate AI-assisted creative work


Thought provoking comments

The anti-copyright premise of some AI industry commentators is deeply flawed. Copyright is not the enemy of innovation. Quite the opposite. Media tech convergence predicated on respect for copyright has produced a multi-trillion euro economy.

Speaker

Michael Nash


Reason

This comment directly challenges a prevalent narrative in AI discourse that copyright restrictions hinder innovation. Nash reframes copyright as an enabler rather than a barrier, providing concrete evidence through the success of companies like Apple, Google, and Amazon that built trillion-dollar businesses by respecting creative rights.


Impact

This shifted the conversation from a typical ‘AI vs. artists’ framing to a collaborative model. It established the foundation for discussing market-based solutions and set the tone for the entire panel to focus on partnership rather than replacement, influencing how other panelists approached their responses.


The audience poll showing people want to know if music contains AI but not whether it contains synthesizers or drum machines, followed by LJ Rich’s observation: ‘This is technology. There have always been upendings with music. So why is the conversation so nuanced?’

Speaker

LJ Rich (moderator)


Reason

This interactive moment brilliantly exposed the inconsistency in public perception about AI versus other music technologies. It revealed that the concern isn’t really about technology in music per se, but something specifically perceived as different about AI, highlighting the psychological and cultural dimensions of the AI debate.


Impact

This moment became a pivotal reference point that grounded the entire discussion in audience reality rather than abstract theory. It forced panelists to address why AI feels different from previous technological disruptions and influenced the direction toward discussing transparency and labeling rather than prohibition.


Don Was’s story about Prince and the Linn drum machine: ‘Prince made much better use of it than I did… That’s a drum, a beat and a sound that no human being ever would have gotten. No drummer, you could have sat there for six weeks and they wouldn’t have played it just like that. It was mechanized and he messed with the tuning. It was brilliant.’

Speaker

Don Was


Reason

This historical parallel provided crucial context by showing how previous ‘threatening’ technologies became creative tools in the hands of artists. The specific example of Prince creating something impossible for humans alone demonstrated how constraints and mechanical precision can spark rather than limit creativity.


Impact

This comment fundamentally reframed the AI discussion from fear-based to opportunity-based. It provided a concrete template for how artists might approach AI tools and influenced other speakers to focus on creative potential rather than existential threats to human musicianship.


Alexis Lanternier’s data revelation: ‘Maybe 20% of the uploads to the platform are pure AI generation. I believe that Deezer publicly said less than half of 1% of the consumption is that content. So that content is under-indexing by volume, by a factor of 40 to 1.’

Speaker

Michael Nash (referencing Alexis Lanternier’s data)


Reason

This data point cut through speculation and fear-mongering with hard evidence about actual consumer behavior. It revealed that while AI-generated content is being created in large volumes, audiences aren’t actually choosing to consume it, suggesting market forces may naturally regulate quality and authenticity.


Impact

This empirical evidence shifted the conversation from hypothetical concerns to data-driven analysis. It influenced the discussion toward understanding consumer preferences and market dynamics rather than regulatory approaches, and supported the argument that human artistry remains central to what audiences value.


Michael Nash’s consumer research finding: ‘The vast majority, 70 to 75 percent, say real artists matter the most. It’s the story of the artists. It’s who they are. It’s the expression of their identity. That’s what I connect with.’

Speaker

Michael Nash


Reason

This insight revealed that even consumers interested in AI applications in music still prioritize human connection and authentic artistic expression. It suggests that AI’s role will be as an enhancement to human creativity rather than a replacement, addressing fundamental concerns about the dehumanization of art.


Impact

This finding reinforced the human-centric approach advocated throughout the discussion and provided empirical support for the collaborative rather than replacement model. It influenced the panel’s conclusion that the infrastructure should be designed to amplify rather than supplant human artistry.


Overall assessment

These key comments collectively transformed what could have been a typical ‘humans vs. machines’ debate into a nuanced exploration of creative collaboration and market dynamics. The discussion evolved from addressing fears about AI replacing artists to examining how technology can enhance human creativity while respecting rights and maintaining authenticity. The combination of historical perspective (Don Was), empirical data (Alexis/Michael), audience engagement (LJ Rich), and policy framework (Michele/Victoria) created a comprehensive view that balanced opportunity with protection. The conversation’s arc moved from establishing AI as a tool rather than a threat, through examining market realities, to concluding with specific asks for legislation, transparency, and continued collaboration – ultimately positioning AI as the latest chapter in music’s long history of technological evolution rather than a revolutionary disruption.


Follow-up questions

How do we stay ahead of AI technology in music creation and ensure humans stay in the loop?

Speaker

LJ Rich


Explanation

This fundamental question about maintaining human agency in AI-driven music creation was posed at the beginning but not fully resolved during the discussion.


What does AI mean for creativity, originality, and profound artistic communication?

Speaker

Don Was


Explanation

These core philosophical questions about AI’s impact on fundamental aspects of artistic expression were raised but require deeper exploration.


How do we make remuneration fair for AI-generated content?

Speaker

Alexis Lanternier


Explanation

The economic implications of AI music generation and how to fairly compensate creators in this new landscape needs collaborative solution development.


What do we do with artists who use new AI technology before we understand how to work with it?

Speaker

LJ Rich


Explanation

This addresses the challenge of managing innovation when artists adopt AI tools faster than industry frameworks can be established.


How can we develop deeper understanding of AI applications in music medicine?

Speaker

Michael Nash


Explanation

Nash specifically requested help exploring the combination of AI advancements and neurological technology for breakthroughs in music therapy and medical applications.


How do we ensure good legislation around the world that protects artists’ work while enabling AI use?

Speaker

Victoria Oakley


Explanation

The need for balanced global policy frameworks that both protect creators and enable innovation requires international coordination and expertise.


How do we apply traditional copyright principles in a generative AI world?

Speaker

Victoria Oakley


Explanation

While copyright frameworks exist, their application to AI-generated content presents new challenges that need to be addressed.


What tools can help artists write better songs or be more expressive musicians?

Speaker

Don Was


Explanation

This represents the practical need for AI development focused on enhancing rather than replacing human creativity.


How do we participate in multilateral conversations on AI and help member states understand how to treat AI in policy terms?

Speaker

Michele Woods


Explanation

International cooperation and knowledge sharing is needed to develop coherent global approaches to AI governance in creative industries.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.