Washington Post launches AI chatbot for climate queries

The Washington Post has introduced a new AI-driven chatbot named Climate Answers, designed to respond to user inquiries about climate issues using information from its articles. The undertaking underscores the Post’s broader strategy to leverage AI to enhance user engagement and accessibility to its journalistic content.

Chief Technology Officer Vineet Khosla highlighted that while the chatbot focuses solely on climate queries, plans include expanding its capabilities to cover other topics. Climate Answers was developed collaboratively by the Post’s product, engineering, and editorial teams, with support from AI firms like OpenAI and Meta’s Llama.

The chatbot operates by sourcing responses from a custom large-language model that synthesises information from multiple Washington Post articles on climate. Crucially, the Post ensures that all answers provided by Climate Answers are grounded in verified journalism, prioritising accuracy and reliability.

Why does it matter?

The Post’s AI initiative demonstrates its broader experimentation in integrating AI into its platform, including recent developments like AI-generated article summaries. The goal is to enhance user experience and engagement, particularly among younger readers who may prefer summarised content as a gateway to deeper exploration of news stories.

Looking ahead, the Washington Post remains open to partnerships that expand the reach of its journalism while maintaining fairness and integrity in content distribution. As the media landscape evolves, the Post monitors user interaction metrics closely to gauge the impact of AI-driven tools on audience engagement and content consumption habits.

EU designates XNXX as VLOP

The EU has designated the adult content platform XNXX as a Very Large Online Platform (VLOP) under its Digital Services Act (DSA), citing its average of 45 million monthly users in the EU. The designation comes with stringent requirements for the platform, including data sharing with authorities and researchers, risk management, and external independent audits.

Under the DSA rules, XNXX has four months to implement measures to protect users, especially minors, and address systemic risks associated with its services. Failure to provide accurate information can result in significant fines imposed by the European Commission.

That follows the EU’s December 2023 designation of three other adult content platforms (Pornhub, Stripchat, and XVideos) as VLOPs, indicating a broader regulatory push to ensure safer online environments across such platforms.

Indian data protection law under fire for inadequate child online safety measures

India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant threat posed by subversive content and online threats to children.

Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting the issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.

Aparajita Bharti, co-founder of the Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions, rather than one-size-fits-all policies.

The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.

Matlock denies AI bot rumours amid concerns over campaign image

Mark Matlock, a political candidate for the right-wing Reform UK party, has affirmed that he is indeed a real person, dispelling rumours that he might be an AI bot. The suspicions arose from a highly edited campaign image and his absence from critical events, prompting a thread on social media platform X that questioned his existence.

The speculation about AI involvement is partially plausible, especially considering that an AI company executive recently used an AI persona to run for Parliament in the UK, though he garnered only 179 votes. However, Matlock clarified that he was severely ill with pneumonia during the election period, rendering him unable to attend events. He provided the original campaign photo, explaining that only minor edits were made.

Why does it matter?

The incident highlights the broader implications of AI in politics. The 2024 elections in the US and elsewhere are already witnessing the impact of AI tools, from deepfake videos to AI-generated political ads. As the use of such technology grows, candidates must maintain transparency and authenticity to avoid similar controversies.

User concerns grow as AI reshapes online interactions

As AI continues to evolve, it’s reshaping online platforms and stirring concerns among longtime users. At a recent tech conference, concerns were raised about AI-generated content flooding forums like Reddit and Stack Overflow, mimicking human interactions. Reddit moderator Sarah Gilbert highlighted the frustration felt by many contributors who see their genuine contributions overshadowed by AI-generated posts.

Stack Overflow, a hub for programming solutions, faced backlash when it initially banned AI-generated responses due to inaccuracies. However, it’s now embracing AI through partnerships to enhance user experience, sparking debates about the balance between human input and AI automation. CEO Prashanth Chandrasekar acknowledged the challenges, noting their efforts to maintain a community-driven knowledge base amidst technological shifts.

Meanwhile, social media platforms like Meta (formerly Facebook) are under scrutiny for using AI to train models on user-generated content without explicit consent. That has prompted regulatory action in countries like Brazil, where fines were imposed for non-compliance with data protection laws. In Europe and the US, similar concerns over privacy and transparency persist as AI integration grows.

The debate underscores broader issues of digital ethics and the future of online interaction, where authenticity and user privacy collide with technological advancements. Platforms must navigate these complexities to retain user trust while embracing AI’s potential to innovate and automate online experiences.

Singapore advocates for international AI standards

Singapore’s digital development minister, Josephine Teo, has expressed concerns about the future of AI governance, emphasising the need for an internationally agreed-upon framework. Speaking at the Reuters NEXT conference in Singapore, Teo highlighted that while Singapore is more excited than worried about AI, the absence of global standards could lead to a ‘messy’ future.

Teo pointed out the necessity for specific legislation to address challenges posed by AI, particularly focusing on using deepfakes during elections. She stressed that implementing clear and effective laws will be crucial as AI technology advances to manage its impact on society and ensure responsible use.

Singapore’s proactive stance on AI reflects its commitment to balancing technological innovation with necessary regulatory measures. The country aims to harness the benefits of AI while mitigating potential risks, especially in critical areas like electoral integrity.

Microsoft details threat from new AI jailbreaking method

Microsoft has warned about a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmful information by bypassing their behavioural guidelines. Detailed in a report published on 26 June, Microsoft explained that Skeleton Key forces AI models to respond to illicit requests by modifying their behavioural guidelines to provide a warning rather than refusing the request outright. A technique like this, called Explicit: forced instruction-following, can lead models to produce harmful content.

The report highlighted an example where a model was manipulated to provide instructions for making a Molotov cocktail under the guise of an educational context. The prompt allowed the model to deliver the information with only a prefixed warning by instructing the model to update its behaviour. Microsoft tested the Skeleton Key technique between April and May 2024 on various AI models, including Meta LLama3-70b, Google Gemini Pro, and GPT 3.5 and 4.0, finding it effective but noting that attackers need legitimate access to the models.

Microsoft has addressed the issue in its Azure AI-managed models using prompt shields and has shared its findings with other AI providers. The company has also updated its AI offerings, including its Copilot AI assistants, to prevent guardrail bypassing. Furthermore, the latest disclosure underscores the growing problem of generative AI models being exploited for malicious purposes, following similar warnings from other researchers about vulnerabilities in AI models.

Why does it matter?

In April 2024, Anthropic researchers discovered a technique that could force AI models to provide instructions for constructing explosives. Earlier this year, researchers at Brown University found that translating malicious queries into low-resource languages could induce prohibited behaviour in OpenAI’s GPT-4. These findings highlight the ongoing challenges in ensuring the safe and responsible use of advanced AI models.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

Why does it matter?

The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.

Morgan Freeman responds to AI voice scam on TikTok

Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.

This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.

Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.

New Zealand transforms Christchurch Call into tech-supported NGO

New Zealand has made a significant shift in its approach to combating terrorist and violent extremist content (TVEC) online, transitioning the Christchurch Call to Action into a non-governmental organisation. Launched in response to the 2019 Christchurch mosque attacks, where the perpetrator live-streamed the violence on social media, the Call initially united governments, tech companies, and civil society to pledge 25 commitments aimed at curbing such content. In a strategic move, New Zealand has relinquished direct funding, now relying on contributions from tech giants like Meta and Microsoft to sustain its operations.

The decision reflects a broader strategy to preserve the Call’s multistakeholder model, which is essential for navigating complex global internet challenges without governmental dominance. That model mirrors successful precedents like the Internet Engineering Task Force and ICANN, which are pivotal to today’s internet infrastructure. By fostering consensus among diverse stakeholders, the Call aims to uphold free expression while effectively addressing the spread of TVEC online.

Former New Zealand Prime Minister Jacinda Ardern, now leading the Call as its Patron, faces the challenge of enhancing its legitimacy and impact. With new funding avenues secured, efforts will focus on expanding stakeholder participation, raising awareness, and holding parties accountable to their commitments. The initiative must also adapt to emerging threats, such as extremists’ misuse of generative AI tools, ensuring its relevance and effectiveness in combating evolving forms of online extremism.