OpenAI CEO emphasises democratic control in the future of AI

Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI’s benefits widely, and an authoritarian one, led by nations like Russia and China, aiming to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid advancement of AI technology and the high stakes involved.

Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy – robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and establishing international norms and safety protocols.

He emphasises proactive collaboration between the US government and the private sector to implement these measures swiftly, believing that action today on security, infrastructure, talent development, and global governance can secure a competitive advantage and broad societal benefits. Ultimately, Altman advocates a democratic vision for AI, underpinned by strategic, timely, and globally inclusive actions that maximise the technology’s benefits while minimising its risks.

OpenAI announces major reorganisation to bolster AI safety measures

OpenAI’s AI safety leader, Aleksander Madry, is moving to a significant new research project, according to CEO Sam Altman. OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team, which evaluates the readiness of the company’s models for artificial general intelligence. The move is part of a broader strategy to unify OpenAI’s safety efforts.

OpenAI’s preparedness team ensures the safety and readiness of its AI models. In his new research role, Madry will hold an expanded position within the research organisation. OpenAI is also addressing safety concerns surrounding its advanced chatbots, which can engage in human-like conversations and generate multimedia content from text prompts.

Under the new leadership, researcher Tejal Patwardhan will manage much of the team’s day-to-day work, ensuring a continued focus on AI safety. The reorganisation follows the recent formation of a Safety and Security Committee, led by board members including Sam Altman.

The reshuffle comes amid rising safety concerns as OpenAI’s technologies become more powerful and widely used. The Safety and Security Committee was established earlier this year in preparation for training the next generation of AI models. These developments reflect OpenAI’s ongoing commitment to AI safety and responsible innovation.

Queensland premier criticises AI use in political advertising

The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy.’ The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concern about the potential dangers of AI in political communication, noting that fabricated videos are more likely to be believed than doctored photos. Yet despite rejecting AI for Labor’s own content, he dismissed the need for truth-in-advertising laws, asserting that the party has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.

US senators introduce COPIED Act to combat intellectual property theft in creative industry

The Content Origin Protection and Integrity from Edited and Deepfaked Media Bill, also known as the COPIED Act, was introduced on 11 July 2024 by US Senators Marsha Blackburn, Maria Cantwell and Martin Heinrich. The bill aims to safeguard the intellectual property of creatives, particularly journalists, publishers, broadcasters and artists.

In recent years, the work and images of creatives have been used or modified without consent, at times to generate income. The push for legislation in the area intensified in January, after explicit AI-generated images of the US musician Taylor Swift surfaced on X.

According to the bill, images, videos, audio clips and texts are considered deepfakes if they contain ‘synthetic or synthetically modified content that appears authentic to a reasonable person and creates a false understanding or impression’. If enacted, the bill would apply to online platforms that serve US-based customers and either generate annual revenue of at least $50 million or register at least 25 million active users for three consecutive months.

Under the bill, companies that deploy or develop AI models must give users a way to tag such content with contextual or provenance information, such as its source and history, in a machine-readable format. It would then be illegal to remove those tags for any reason other than research, or to use tagged content to train subsequent AI models or generate new content. Victims would have the right to sue offenders.
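The bill does not prescribe a specific format, but the idea of machine-readable provenance can be illustrated with a short sketch. The following Python example is purely hypothetical: the record structure and field names are assumptions chosen for illustration, not taken from the COPIED Act or from any existing provenance standard such as C2PA.

```python
# Hypothetical sketch of a machine-readable provenance record of the kind
# the COPIED Act describes; all field names are illustrative assumptions.
import hashlib
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ProvenanceRecord:
    """Content provenance metadata: an asset's source and edit history."""
    content_sha256: str  # fingerprint binding the record to the file
    creator: str         # original rights holder
    created: str         # ISO 8601 timestamp
    history: list = field(default_factory=list)  # ordered list of edits/tools

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


def fingerprint(data: bytes) -> str:
    """Hash the raw asset so tampering with the file invalidates the record."""
    return hashlib.sha256(data).hexdigest()


record = ProvenanceRecord(
    content_sha256=fingerprint(b"...image bytes..."),
    creator="Example Newsroom",
    created="2024-07-11T00:00:00Z",
    history=["captured on camera", "cropped", "colour-corrected"],
)
print(record.to_json())
```

Binding the record to a hash of the asset is one common design choice: it means the tag cannot simply be copied onto a different, altered file without detection.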

The COPIED Act is backed by several artist-affiliated groups, including SAG-AFTRA, the National Music Publishers’ Association, the Songwriters Guild of America (SGA) and the National Association of Broadcasters, as well as by the US National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO) and the US Copyright Office. The bill has also received bipartisan support.

K-Pop’s AI revolution divides fans

AI is currently a hot topic in the K-Pop community, as several top groups, including Seventeen, have begun using the technology to create music videos and write lyrics. Seventeen, one of the most successful K-Pop acts, has incorporated AI-generated scenes in their latest single, ‘Maestro,’ and experimented with AI in songwriting. Band member Woozi expressed a desire to develop alongside technology rather than resist it.

The use of AI has divided fans. Some, like super fan Ashley Peralta, appreciate AI’s ability to help artists overcome creative blocks but worry it might disconnect fans from the artists’ authentic emotions. Podcaster Chelsea Toledo shares similar concerns, fearing AI-generated lyrics might dilute Seventeen’s reputation as a self-producing group known for their personal touch in songwriting and choreography.

Industry professionals, such as producer Chris Nairn, recognise South Korea’s progressive approach to music production. While Nairn acknowledges AI’s potential, he doubts it can match the innovation and uniqueness of top-tier songwriting. Music journalist Arpita Adhya points out the immense pressure on K-Pop artists to produce frequent content, which may be driving the adoption of AI.

Why does this matter?

The debate reflects broader concerns in the music industry, where Western artists like Billie Eilish and Nicki Minaj have called for regulation to protect human artistry from AI’s encroachment. Fans and industry insiders continue to grapple with the balance between embracing technological advancements and preserving the authenticity that connects artists with their audiences.

Vimeo introduces AI labelling for videos

Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, this new policy mandates that creators disclose when realistic content is produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, addressing the challenge of distinguishing real from fake content due to advanced generative AI tools.

Not all AI usage requires labelling; animated content, videos with obvious visual effects, or minor AI production assistance are exempt. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo’s AI tools, such as those that edit out long pauses, will also prompt labelling.

Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.
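Vimeo has not published a technical specification for these disclosures, but the shape of such a label is easy to picture. Below is a minimal Python sketch of an upload-time disclosure mirroring the options described above (AI used for audio, visuals, or both); the class and field names are invented for illustration and do not reflect Vimeo’s actual API or schema.

```python
# Illustrative sketch of an upload-time AI disclosure matching the options
# Vimeo describes (audio, visuals, or both). All names are invented for
# illustration; this is not Vimeo's actual API or schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIContentDisclosure:
    ai_audio: bool    # AI-generated or AI-altered audio
    ai_visuals: bool  # AI-generated or AI-altered visuals

    def label(self) -> Optional[str]:
        """Return the label text to show viewers, or None if none is needed."""
        if self.ai_audio and self.ai_visuals:
            return "Contains AI-generated audio and visuals"
        if self.ai_audio:
            return "Contains AI-generated audio"
        if self.ai_visuals:
            return "Contains AI-generated visuals"
        return None  # exempt: no realistic AI content declared


print(AIContentDisclosure(ai_audio=True, ai_visuals=False).label())
```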

AI’s impact on music production: nearly 25% of producers embrace innovation

A recent survey by Tracklib reveals that nearly 25% of music producers now integrate AI into their creative processes, marking significant adoption of the technology within the industry. Most producers, however, remain resistant to AI, citing the fear of losing creative control as the primary barrier.

Among those using AI, the survey found that most employ it for stem separation (73.9%), that is, isolating vocals or individual instruments from a finished mix, rather than for full song creation, which is used by only a small fraction (3%). Concerns among non-users primarily revolve around artistic integrity (82.2%) and doubts about AI’s ability to maintain quality (34.5%), with additional concerns including cost and copyright issues.

Interestingly, the survey highlights a stark divide between perceptions of assistive AI, which aids in music creation, and generative AI, which directly generates elements or entire songs. While some producers hold a positive view of assistive AI, generative AI faces stronger opposition, especially among younger respondents.

Overall, the survey underscores a cautious optimism about AI’s future impact on music production, with 70% of respondents expecting it to have a significant influence going forward. Despite current reservations, Tracklib predicts continued adoption of music AI, noting it is entering the ‘early majority’ phase of adoption according to technology adoption models.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
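YouTube has not disclosed how its algorithm works. A rough open-source analogue of the underlying idea, source separation, is Spleeter’s pretrained two-stem model, which splits a mix into vocals and accompaniment; keeping only the vocals stem approximates ‘erase the music, keep the speech’. A minimal sketch, assuming the spleeter package is installed:

```python
# Rough open-source analogue of music removal via source separation.
# This is NOT YouTube's tool; it uses Spleeter's pretrained two-stem model,
# which splits a mix into vocals and accompaniment.
# pip install spleeter
from spleeter.separator import Separator

separator = Separator('spleeter:2stems')  # pretrained vocals/accompaniment model

# Writes output/video_audio/vocals.wav and output/video_audio/accompaniment.wav;
# keeping vocals.wav and remuxing it into the video approximates erasing the music.
separator.separate_to_file('video_audio.wav', 'output/')
```

As YouTube’s own caveat suggests, separation quality depends heavily on how cleanly the music sits in the mix, which is why songs that are hard to isolate may defeat this approach.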

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels on AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following last year’s launch of Dream Track, which let users create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

Morgan Freeman responds to AI voice scam on TikTok

Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.

This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.

Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.

AI brings Judy Garland’s voice to life

Although Judy Garland never recorded herself reading ‘The Wonderful Wizard of Oz,’ fans will soon be able to hear her rendition thanks to a new app by ElevenLabs. The AI company has launched the Reader app, which can convert text into voice-overs using digitally produced voices of deceased celebrities, including Garland, James Dean, and Burt Reynolds. The app can transform articles, e-books, and other text formats into audio.
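At its core, the Reader app performs text-to-speech with a chosen voice. As a rough illustration, here is a minimal Python sketch against ElevenLabs’ public text-to-speech REST endpoint; the API key and voice ID are placeholders, the MP3 response format is an assumption, and no celebrity voice is implied to be obtainable this way.

```python
# Minimal sketch of a text-to-speech request of the kind the Reader app
# performs, via ElevenLabs' public REST API. API key and voice ID are
# placeholders; licensed celebrity voices are not assumed to be available.
import requests

API_KEY = "your-api-key"           # issued via the ElevenLabs dashboard
VOICE_ID = "voice-id-placeholder"  # a voice you are licensed to use

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": "There's no place like home."},
    timeout=60,
)
resp.raise_for_status()

with open("voiceover.mp3", "wb") as f:
    f.write(resp.content)  # response body is the synthesised audio
```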

Dustin Blank, head of partnerships at ElevenLabs, emphasised the company’s respect for the legacies of these celebrities. The company has made agreements with the actors’ estates, though compensation details remain undisclosed. The initiative highlights AI’s potential in Hollywood, especially for creating content with synthetic voices, but it also raises important questions about the licensing and ethical use of AI.

The use of AI-generated celebrity voices comes amid growing concerns about authenticity and copyright in creative industries. ElevenLabs had previously faced scrutiny when its tool was reportedly used to create a fake robocall from President Joe Biden. Similar controversies have arisen, such as OpenAI’s introduction of a voice similar to Scarlett Johansson’s, which she publicly criticised.

As AI technology advances, media companies are increasingly utilising it for voiceovers. NBC recently announced the use of an AI version of sportscaster Al Michaels for Olympics recaps on its Peacock streaming platform, with Michaels receiving compensation. While the market for AI-generated voices remains uncertain, the demand for audiobooks narrated by recognisable voices suggests a promising future for this technology.