Vimeo introduces AI labelling for videos

Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, the new policy mandates that creators disclose when realistic content is produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, a challenge that has grown as generative AI tools have advanced.

Not all AI usage requires labelling; animated content, videos with obvious visual effects, or minor AI production assistance are exempt. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo’s AI tools, such as those that edit out long pauses, will also prompt labelling.

Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.
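
To make the disclosure flow concrete, the sketch below models an upload-time AI disclosure and the policy's exemptions in Python. It is a minimal illustration under stated assumptions: the field and function names are invented for this example and are not Vimeo's actual API.

```python
from dataclasses import dataclass

# Hypothetical model of the disclosure choices described above;
# names are illustrative, not Vimeo's real upload API.

@dataclass
class AIDisclosure:
    ai_audio: bool = False    # AI was used to generate or alter audio
    ai_visuals: bool = False  # AI was used to generate or alter visuals

def label_required(disclosure: AIDisclosure,
                   is_animation: bool = False,
                   obvious_vfx: bool = False,
                   minor_assistance_only: bool = False) -> bool:
    """Mirror the stated policy: realistic AI content needs a label,
    while animation, obvious effects, and minor assistance are exempt."""
    if is_animation or obvious_vfx or minor_assistance_only:
        return False
    return disclosure.ai_audio or disclosure.ai_visuals

# A realistic video with AI-altered visuals requires a label.
print(label_required(AIDisclosure(ai_visuals=True)))                     # True
# The same disclosure with obvious visual effects is exempt.
print(label_required(AIDisclosure(ai_visuals=True), obvious_vfx=True))   # False
```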

Rising threat of deepfake pornography for women

As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.

Recent data shows that deepfake technology is predominantly used for malicious purposes, with 98% of deepfake videos online being sexually explicit. The FBI has identified a rise in ‘sextortion schemes’, in which altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but many victims are not celebrities and face immense challenges in seeking justice.

Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order seeks to develop technology for detecting and tracking deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.

US voters prefer cautious AI regulation over China race

A recent poll by the AI Policy Institute has shed light on strong public opinion in the United States regarding the regulation of AI.

Contrary to claims from the tech industry that strict regulations could hinder competition with China, a majority of American voters prioritise safety and control over the rapid development of AI. The poll reveals that 75% of both Democrats and Republicans prefer a cautious approach to AI development to prevent its misuse by adversaries.

The debate underscores growing concerns about national security and technological competitiveness. While China leads in AI patents, with over 38,000 registered compared to the US’s 6,300, Americans seem wary of sacrificing regulatory oversight in favour of expedited innovation.

Most respondents advocate for stringent safety measures and testing requirements to mitigate potential risks associated with powerful AI technologies.

Moreover, the poll highlights widespread support for restrictions on exporting advanced AI models to countries like China, reflecting broader apprehensions about technology transfer and national security. Despite the absence of comprehensive federal AI regulation in the US, states like California have begun to implement their own measures, prompting varied responses from tech industry leaders and policymakers alike.

Bumble fights AI scammers with new reporting tool

With scammers increasingly using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’ among other reporting options, such as inappropriate content, underage users, and scams. By allowing users to report such profiles, Bumble aims to reduce the misuse of AI in creating misleading profiles.
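
As a rough illustration of this two-step flow, the sketch below models the reporting options as nested enumerations. The option names paraphrase the UI described above and are assumptions for illustration, not Bumble's actual code or API.

```python
from enum import Enum

# Hypothetical sketch of the two-step report menu described in the article.

class ReportCategory(Enum):
    FAKE_PROFILE = 'Fake profile'
    INAPPROPRIATE_CONTENT = 'Inappropriate content'
    UNDERAGE_USER = 'Underage user'
    SCAM = 'Scam'

class FakeProfileReason(Enum):
    AI_GENERATED_MEDIA = 'Using AI-generated photos or videos'

def build_report(category: ReportCategory,
                 reason: FakeProfileReason | None = None) -> dict:
    """Assemble a report payload; sub-reasons apply only to fake profiles."""
    if reason is not None and category is not ReportCategory.FAKE_PROFILE:
        raise ValueError('Sub-reasons apply only to fake-profile reports')
    return {'category': category.value,
            'reason': reason.value if reason else None}

# Reporting a profile suspected of using AI-generated photos:
print(build_report(ReportCategory.FAKE_PROFILE,
                   FakeProfileReason.AI_GENERATED_MEDIA))
```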

In February this year, Bumble introduced the ‘Deception Detector’, which combines AI and human moderators to detect and eliminate fake profiles and scammers. Following this measure, Bumble has seen a 45% overall reduction in reported spam and scams. Another notable feature is Bumble’s ‘Private Detector’, an AI tool that blurs unsolicited nude photos.

Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’

Meta will remove content in which ‘Zionist’ is used as an antisemitic proxy for Jewish people

Meta announced on Tuesday that it will begin removing more posts that target ‘Zionists’ when the term is used to refer to Jewish people and Israelis rather than supporters of the political movement. The decision is based on the recognition that a word can take on new meanings and become a proxy for a nationality; Meta’s hate speech policy covers numerous ‘protected characteristics’, including nationality, race, and religion.

Previously, Meta treated the word ‘Zionist’ as a proxy for Jewish or Israeli people in two specific cases: when Zionists were compared to rats, echoing antisemitic imagery, and when context clearly indicated that the word meant ‘Jew’ or ‘Israeli’. Now, Meta will also remove content attacking ‘Zionists’ when it is not explicitly about the political movement and when it uses certain antisemitic stereotypes, dehumanises, denies the existence of, or threatens or calls for harm or intimidation against ‘Jews’ or ‘Israelis’.
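
The conditions above amount to a conjunction of checks. Purely as an illustration of that logic, the sketch below encodes it as a simple predicate; Meta's real enforcement relies on classifiers and human review, and every name here is a hypothetical stand-in.

```python
# Hypothetical encoding of the removal conditions described above.

ANTISEMITIC_SIGNALS = {
    'antisemitic_stereotype',
    'dehumanising_comparison',   # e.g. comparisons to rats
    'denies_existence',
    'threat_or_intimidation',
}

def should_remove(attacks_zionists: bool,
                  clearly_about_political_movement: bool,
                  signals: set[str]) -> bool:
    """Remove attacks on 'Zionists' that are not explicitly about the
    political movement and that carry at least one antisemitic signal."""
    return (attacks_zionists
            and not clearly_about_political_movement
            and bool(signals & ANTISEMITIC_SIGNALS))

print(should_remove(True, False, {'dehumanising_comparison'}))  # True: removed
print(should_remove(True, False, set()))  # False: no antisemitic signal present
```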

The policy change has been praised by the World Jewish Congress. Its president, Ronald S. Lauder, stated, ‘By recognizing and addressing the misuse of the term “Zionist,” Meta is taking a bold stand against those who seek to mask their hatred of Jews.’ Meta has previously reported significant decreases in hate speech on its platforms.

A recurring question during consultations was how to handle comparisons of Zionists to criminals. Meta does not allow content that compares ‘protected characteristics’ to criminals, but it currently believes such comparisons can serve as shorthand for commentary on larger military actions. The question has been referred to Meta’s Oversight Board. Meta consulted 145 stakeholders from civil society and academia across various global regions for this policy update.

Washington Post launches AI chatbot for climate queries

The Washington Post has introduced a new AI-driven chatbot named Climate Answers, designed to respond to user inquiries about climate issues using information from its articles. The initiative underscores the Post’s broader strategy of leveraging AI to enhance user engagement and make its journalism more accessible.

Chief Technology Officer Vineet Khosla highlighted that while the chatbot focuses solely on climate queries, there are plans to expand its capabilities to other topics. Climate Answers was developed collaboratively by the Post’s product, engineering, and editorial teams, drawing on models from AI firms such as OpenAI, as well as Meta’s Llama.

The chatbot operates by sourcing responses from a custom large language model that synthesises information from multiple Washington Post articles on climate. Crucially, the Post ensures that all answers provided by Climate Answers are grounded in verified journalism, prioritising accuracy and reliability.
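
The pipeline described matches the common retrieval-augmented generation (RAG) pattern: retrieve relevant articles, then constrain the model to answer only from them. Below is a minimal sketch under that assumption; `search_climate_articles` and `llm_complete` are hypothetical stand-ins, not the Post's actual implementation.

```python
# Minimal RAG sketch: ground an answer in retrieved article excerpts.

def search_climate_articles(query: str, k: int = 3) -> list[str]:
    """Stand-in retriever; a real system would use embedding search
    over the publication's climate archive."""
    corpus = [
        'Excerpt A from a Washington Post climate article.',
        'Excerpt B from a Washington Post climate article.',
    ]
    return corpus[:k]

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    return 'An answer grounded only in the excerpts above.'

def climate_answer(question: str) -> str:
    excerpts = search_climate_articles(question)
    context = '\n'.join(f'- {e}' for e in excerpts)
    prompt = ('Answer the question using ONLY the excerpts below; '
              'if they are insufficient, say so.\n'
              f'Excerpts:\n{context}\n\nQuestion: {question}')
    return llm_complete(prompt)

print(climate_answer('How fast are sea levels rising?'))
```

Grounding answers this way is what allows a publisher to claim that responses stay within its verified journalism rather than the model's open-ended knowledge.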

Why does it matter?

The Post’s AI initiative demonstrates its broader experimentation in integrating AI into its platform, including recent developments like AI-generated article summaries. The goal is to enhance user experience and engagement, particularly among younger readers who may prefer summarised content as a gateway to deeper exploration of news stories.

Looking ahead, the Washington Post remains open to partnerships that expand the reach of its journalism while maintaining fairness and integrity in content distribution. As the media landscape evolves, the Post monitors user interaction metrics closely to gauge the impact of AI-driven tools on audience engagement and content consumption habits.

EU designates XNXX as VLOP

The EU has designated the adult content platform XNXX as a Very Large Online Platform (VLOP) under its Digital Services Act (DSA), citing its average of 45 million monthly users in the EU. The designation comes with stringent requirements for the platform, including data sharing with authorities and researchers, risk management, and external independent audits.

Under the DSA rules, XNXX has four months to implement measures to protect users, especially minors, and address systemic risks associated with its services. Failure to provide accurate information can result in significant fines imposed by the European Commission.

That follows the EU’s December 2023 designation of three other adult content platforms (Pornhub, Stripchat, and XVideos) as VLOPs, indicating a broader regulatory push to ensure safer online environments across such platforms.

Indian data protection law under fire for inadequate child online safety measures

India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant threat posed by subversive content and online threats to children.

Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.

Aparajita Bharti, co-founder of the Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions, rather than one-size-fits-all policies.

The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.

Matlock denies AI bot rumours amid concerns over campaign image

Mark Matlock, a political candidate for the right-wing Reform UK party, has affirmed that he is indeed a real person, dispelling rumours that he might be an AI bot. The suspicions arose from a highly edited campaign image and his absence from critical events, prompting a thread on social media platform X that questioned his existence.

The speculation was not entirely far-fetched: an AI company executive recently ran an AI persona as a candidate for the UK Parliament, though it garnered only 179 votes. However, Matlock clarified that he was severely ill with pneumonia during the election period, which left him unable to attend events. He provided the original campaign photo, explaining that only minor edits had been made.

Why does it matter?

The incident highlights the broader implications of AI in politics. The 2024 elections in the US and elsewhere are already witnessing the impact of AI tools, from deepfake videos to AI-generated political ads. As the use of such technology grows, candidates must maintain transparency and authenticity to avoid similar controversies.

User concerns grow as AI reshapes online interactions

As AI continues to evolve, it’s reshaping online platforms and stirring concerns among longtime users. At a recent tech conference, concerns were raised about AI-generated content flooding forums like Reddit and Stack Overflow, mimicking human interactions. Reddit moderator Sarah Gilbert highlighted the frustration felt by many contributors who see their genuine contributions overshadowed by AI-generated posts.

Stack Overflow, a hub for programming solutions, initially banned AI-generated responses due to inaccuracies. It is now embracing AI through partnerships intended to enhance the user experience, a shift that has sparked backlash and renewed debate about the balance between human input and AI automation. CEO Prashanth Chandrasekar acknowledged the challenges, noting the company’s efforts to maintain a community-driven knowledge base amid technological shifts.

Meanwhile, social media companies like Meta (formerly Facebook) are under scrutiny for using AI to train models on user-generated content without explicit consent. That has prompted regulatory action in countries like Brazil, where fines have been imposed for non-compliance with data protection laws. In Europe and the US, similar concerns over privacy and transparency persist as AI integration grows.

The debate underscores broader issues of digital ethics and the future of online interaction, where authenticity and user privacy collide with technological advancements. Platforms must navigate these complexities to retain user trust while embracing AI’s potential to innovate and automate online experiences.