Singapore’s digital development minister, Josephine Teo, has expressed concerns about the future of AI governance, emphasising the need for an internationally agreed-upon framework. Speaking at the Reuters NEXT conference in Singapore, Teo highlighted that while Singapore is more excited than worried about AI, the absence of global standards could lead to a ‘messy’ future.
Teo pointed out the necessity for specific legislation to address challenges posed by AI, particularly the use of deepfakes during elections. She stressed that implementing clear and effective laws will be crucial as AI technology advances, in order to manage its impact on society and ensure responsible use.
Singapore’s proactive stance on AI reflects its commitment to balancing technological innovation with necessary regulatory measures. The country aims to harness the benefits of AI while mitigating potential risks, especially in critical areas like electoral integrity.
Microsoft has warned about a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmful information by bypassing their behavioural guidelines. Detailed in a report published on 26 June, Microsoft explained that Skeleton Key forces AI models to respond to illicit requests by modifying their behavioural guidelines so that they provide a warning rather than refusing the request outright. Microsoft classifies the approach as ‘Explicit: forced instruction-following’, a technique that can lead models to produce harmful content.
The report highlighted an example where a model was manipulated to provide instructions for making a Molotov cocktail under the guise of an educational context. By instructing the model to update its behaviour, the prompt allowed the model to deliver the information with only a prefixed warning. Microsoft tested the Skeleton Key technique between April and May 2024 on various AI models, including Meta Llama3-70b, Google Gemini Pro, and OpenAI GPT-3.5 and GPT-4, finding it effective but noting that attackers need legitimate access to the models.
Microsoft has addressed the issue in its Azure AI-managed models using Prompt Shields and has shared its findings with other AI providers. The company has also updated its AI offerings, including its Copilot AI assistants, to prevent guardrail bypassing. Furthermore, the latest disclosure underscores the growing problem of generative AI models being exploited for malicious purposes, following similar warnings from other researchers about vulnerabilities in AI models.
Why does it matter?
In April 2024, Anthropic researchers discovered a technique that could force AI models to provide instructions for constructing explosives. Earlier this year, researchers at Brown University found that translating malicious queries into low-resource languages could induce prohibited behaviour in OpenAI’s GPT-4. These findings highlight the ongoing challenges in ensuring the safe and responsible use of advanced AI models.
YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.
Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
Good news creators: our updated Erase Song tool helps you easily remove copyright-claimed music from your video (while leaving the rest of your audio intact). Learn more… https://t.co/KeWIw3RFeH
YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.
In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.
Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.
This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.
Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me. Your dedication helps authenticity and integrity remain paramount. Grateful. #AI #scam #imitation #IdentityProtection
Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.
New Zealand has made a significant shift in its approach to combating terrorist and violent extremist content (TVEC) online, transitioning the Christchurch Call to Action into a non-governmental organisation. Launched in response to the 2019 Christchurch mosque attacks, where the perpetrator live-streamed the violence on social media, the Call initially united governments, tech companies, and civil society to pledge 25 commitments aimed at curbing such content. In a strategic move, New Zealand has relinquished direct funding, now relying on contributions from tech giants like Meta and Microsoft to sustain its operations.
The decision reflects a broader strategy to preserve the Call’s multistakeholder model, which is essential for navigating complex global internet challenges without governmental dominance. That model mirrors successful precedents like the Internet Engineering Task Force and ICANN, which are pivotal to today’s internet infrastructure. By fostering consensus among diverse stakeholders, the Call aims to uphold free expression while effectively addressing the spread of TVEC online.
Former New Zealand Prime Minister Jacinda Ardern, now leading the Call as its Patron, faces the challenge of enhancing its legitimacy and impact. With new funding avenues secured, efforts will focus on expanding stakeholder participation, raising awareness, and holding parties accountable to their commitments. The initiative must also adapt to emerging threats, such as extremists’ misuse of generative AI tools, ensuring its relevance and effectiveness in combating evolving forms of online extremism.
The European Commission has requested that Amazon provide detailed information regarding its measures to comply with the Digital Services Act (DSA) obligations. Specifically, the Commission is interested in the transparency of Amazon’s recommender systems. Amazon has been given a deadline of 26 July to respond.
The DSA mandates that major tech companies, like Amazon, take more responsibility in addressing illegal and harmful content on their platforms. The regulatory push aims to create a safer and more predictable online environment for users. Amazon stated that it is currently reviewing the EU’s request and plans to work closely with the European Commission.
A spokesperson for Amazon expressed support for the Commission’s objectives, emphasising the company’s commitment to a safe and trustworthy shopping experience. Amazon highlighted its significant investments in protecting its platform from bad actors and illegal content and noted that these efforts align with DSA compliance.
Russian disinformation campaigns are targeting social media to destabilise France’s political scene during its legislative campaign, according to a study by the French National Centre for Scientific Research (CNRS). The study highlights Kremlin strategies such as normalising far-right ideologies and weakening the ‘Republican front’ that opposes the far-right Rassemblement National (RN).
Researchers noted that Russia’s influence tactics, including astroturfing and meme wars, have been used previously during the 2016 US presidential elections and the 2022 French presidential elections to support RN figurehead Marine Le Pen. The Kremlin’s current efforts aim to exploit ongoing global conflicts, such as the Israeli-Palestinian conflict, to influence French political dynamics.
Despite these findings, the actual impact of these disinformation campaigns remains uncertain. Some experts argue that while such interference may sway voter behaviour or amplify tensions, the overall effect is limited. The CNRS study focused on activity on X (formerly Twitter) and acknowledged that further research is needed to understand the broader implications of these digital disruptions.
Global streaming companies are contesting new Canadian regulations requiring them to contribute to local news funding, arguing the federal government has acted without legal justification. In June, the Canadian Radio-television and Telecommunications Commission (CRTC) announced that all major online streaming services must allocate 5% of their Canadian revenues to support the domestic broadcasting system, including news production. The Motion Picture Association-Canada, representing Netflix, Walt Disney Co., and others, has filed applications for a judicial review, claiming the CRTC’s decision lacks a legal basis.
The CRTC stated that the funds would support areas of immediate need in the broadcasting system, such as local news, French-language, and Indigenous content. The regulator expects the rules, effective in September, to generate roughly CAD 200 million annually. However, the streaming companies argue that it is unreasonable to compel foreign entities to support Canadian news production.
The measure, introduced under a law passed last year, aims to ensure that online streaming services promote Canadian content and support local jobs. The Motion Picture Association-Canada, which also represents platforms such as Paramount, Sony, NBCUniversal, and Warner Bros Discovery, is leading the legal challenge against these regulations.
According to CEO Mark Zuckerberg, Meta Platforms’ latest social media app, Threads, has reached over 175 million monthly active users just before its first anniversary. Launched on 5 July last year, Threads aimed to attract users from Twitter, now rebranded as X, during its tumultuous acquisition by Elon Musk. The app quickly gained 100 million users within a week, partly due to its integration with Instagram, but some early users eventually left.
Despite its rapid user growth, Threads has struggled with engagement. Market intelligence firm Sensor Tower reports a significant decline in user activity, with the average sessions and time spent on the app dropping considerably since its launch. Threads has yet to introduce advertising, making little to no revenue for Meta. The platform’s recent integration into the Fediverse, which allows interaction across various social media sites, has yet to boost engagement substantially.
Analysts point out that Threads lacks a clear identity and original content, which could hinder its growth. There is ongoing speculation about whether Meta will maintain Threads as a standalone app or integrate it further with Instagram. Despite these challenges, advertisers’ interest in Threads remains high.
A recent research paper from Google reveals that generative AI already distorts socio-political reality and scientific consensus. The paper, titled ‘Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,’ was co-authored by researchers from Google DeepMind, Jigsaw, and Google.org.
It categorises various ways generative AI tools are misused, analysing around 200 incidents reported in the media and academic papers between January 2023 and March 2024. Unlike warnings about hypothetical future risks, this research focuses on the real harm generative AI is currently causing, such as flooding the internet with generated text, audio, images, and videos.
The researchers found that most AI misuse involves exploiting system capabilities rather than attacking the models themselves. However, this misuse blurs the lines between authentic and deceptive content, undermining public trust. AI-generated content is being used for impersonation, creating non-consensual intimate images, and amplifying harmful content. These activities often do not violate the terms of service of AI tools, highlighting a significant challenge in regulating AI misuse.
Google’s research also emphasises the environmental impact of generative AI. The increasing integration of AI into various products drives energy consumption, making it difficult to reduce emissions. Despite efforts to improve data centre efficiency, the overall rise in AI use has outpaced these gains. The paper calls for a multi-faceted approach to mitigate AI misuse, involving collaboration between policymakers, researchers, industry leaders, and civil society.