Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ The finding follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which falls under its bullying and harassment rules, and that both should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, which allows victims of nonconsensual AI-generated intimate images, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages, rising to $250,000 if the deepfake is linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.

China’s new video-generating AI faces limitations due to political censorship

A new AI video-generating model, Kling, developed by Beijing-based Kuaishou, is now widely available, but with significant limitations. Initially launched with waitlisted access for users with Chinese phone numbers, Kling can now be accessed by anyone who provides an email address. The model generates five-second videos from user prompts, simulating physical effects such as rustling leaves and flowing water at 720p resolution.

However, Kling censors politically sensitive topics. Prompts related to ‘Democracy in China,’ ‘Chinese President Xi Jinping,’ and ‘Tiananmen Square protests’ result in error messages. The censorship appears to operate only at the prompt level, meaning videos touching on these topics can still be generated as long as the prompt does not mention them explicitly.
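For illustration, here is a minimal sketch of how such prompt-level filtering could work, assuming a simple keyword blocklist; the terms, function name, and logic are hypothetical and are not Kuaishou’s actual implementation:

```python
# Hypothetical sketch of prompt-level filtering (not Kuaishou's actual code).
# Only the prompt text is checked against a blocklist; the generated video is
# never inspected, so sensitive subjects can still appear in the output if the
# prompt avoids the blocked terms.

BLOCKLIST = {"democracy in china", "xi jinping", "tiananmen square"}  # assumed terms

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt explicitly mentions a blocked term."""
    text = prompt.lower()
    return not any(term in text for term in BLOCKLIST)

print(is_prompt_allowed("Tiananmen Square protests"))          # False: term matched
print(is_prompt_allowed("soldiers facing a man in a square"))  # True: no term matched
```

Filtering only the prompt in this way is cheap, but as the second example shows, it cannot catch a prompt that describes a sensitive scene without naming it.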

That behaviour likely stems from intense political pressure from the Chinese government. The Cyberspace Administration of China (CAC) is actively testing AI models to ensure they align with core socialist values and has proposed a blacklist of sources for training AI models. Companies must prepare models that produce ‘safe’ answers to thousands of questions, which may slow China’s AI development and create two classes of models: those heavily filtered and those less so.

The dichotomy raises questions about the broader implications for the AI ecosystem, as restrictive policies may hinder technological advancement and innovation.

Trump allies hinder disinformation research leading up to US election

A legal campaign led by allies of former US president Donald Trump has demanded investigations into the misinformation research field, alleging a conspiracy to censor conservative voices online. Under this scrutiny, academics who worked on tracking election misinformation online have been scrutinised daily, with their correspondence regularly scanned by AI software in search of messages from government agencies or tech companies.

Disinformation has proliferated online as the US election approaches, especially after significant events such as the assassination attempt on Trump and President Biden’s withdrawal from the race. Because of the political scrutiny, some researchers have held back from publicly reporting their insights on misinformation around such public events.

Last month, the Supreme Court reversed a lower-court ruling that had restricted the government from communicating with tech companies about misinformation online. But the ruling has not deterred Republicans from bringing lawsuits and sending a string of legal demands.

According to the investigation by The Washington Post, the GOP campaign has eroded the once thriving ecosystem of academics, nonprofits and tech industry initiatives dedicated to addressing the spread of misinformation online. Many prominent researchers in the field, like Claire Wardle, Stefanie Friedhoff, Ryan Calo and Kate Starbird, have expressed their concerns for academic freedom and democracy.

Social media platforms asked to tackle cybercrimes in Malaysia

Malaysia is urging social media platforms to strengthen their efforts in combating cybercrimes, including scams, cyberbullying, and child pornography. The government has seen a significant rise in harmful online content and has called on companies like Meta and TikTok to enhance their monitoring and enforcement practices.

In the first quarter of 2024 alone, Malaysia reported 51,638 cases of harmful content referred to social media platforms, surpassing the 42,904 cases from the entire previous year. Communications Minister Fahmi Fadzil noted that some platforms are more cooperative than others, with Meta showing the highest compliance rates—85% for Facebook, 88% for Instagram, and 79% for WhatsApp. TikTok followed with a 76% compliance rate, while Telegram and X had lower rates.

The government has directed social media firms to address these issues more effectively, but it is up to the platforms to remove content that violates their community guidelines. Malaysia’s communications regulator continues highlighting problematic content to these firms, aiming to curb harmful online activity.

Queensland premier criticises AI use in political advertising

Steven Miles, the premier of the Australian state of Queensland, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy.’ The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concerns about the potential dangers of AI in political communication, noting that fabricated videos are more likely to be believed than doctored photos. Despite rejecting AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that Labor has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.

LinkedIn adds games and AI tools to increase user visits

LinkedIn is introducing AI-powered career advice and interactive games in an effort to encourage daily visits and drive growth. The Financial Times reported that this initiative is part of a broader overhaul aimed at increasing user engagement on the Microsoft-owned platform, which currently lags behind entertainment-focused social media sites like Facebook and TikTok.

With revenue growth slowing, analysts have suggested that LinkedIn must diversify its income streams beyond subscriptions and make the platform more engaging. Editor-in-chief Daniel Roth emphasised the goal of building a daily habit for users to share knowledge, get information, and interact with content on the site. The efforts reflect LinkedIn’s broader push to enhance the user experience, including AI-driven job-hunting features, fake-account detection, and the option to disable targeted ads.

In June, LinkedIn recorded 1.5 million content interactions per minute, though it did not disclose site traffic or active user figures. Data from Similarweb showed that visits reached 1.8 billion in June, but the growth rate has slowed significantly since early 2024. For continued growth, media analyst Kelsey Chickering noted that LinkedIn needs to become ‘stickier’ and offer more than just job listings and applications.

Moreover, LinkedIn is becoming a significant platform for consumer engagement, with companies like Amazon and Nike attracting millions of followers. The platform’s fastest-growing demographic is Generation Z, many of whom shop via social media. The trend highlights LinkedIn’s potential as a robust avenue for retailers to reach a sophisticated and influential audience.

Global tech outage hits Meta’s content moderators

A global tech outage on Friday affected some external vendors responsible for content moderation on Meta’s platforms, including Facebook, Instagram, WhatsApp, and Threads. According to a Meta spokesperson, the outage temporarily impacted several tools used by these vendors, causing minimal disruption to Meta’s support operations and no significant impact on its content moderation efforts.

The outage led to a SEV1 alert at Meta, indicating a critical issue that required immediate attention. Meta relies on a combination of AI and human review to moderate the billions of posts made on its platforms. While Meta staff handle some reviews, most are outsourced to vendors like Teleperformance and Concentrix, who employ numerous workers to identify and address rule violations such as hate speech and violence.

Despite the outage disrupting vendor access to key systems that route flagged content for review, operations continued as expected. Concentrix reported monitoring and addressing the impacts of the outage, while Teleperformance did not provide a comment. Meta confirmed that the issues had been resolved earlier in the day, ensuring minimal to no impact on their content moderation processes.

Singapore blocks 95 accounts linked to exiled Chinese tycoon Guo Wengui

Singapore has ordered five social media platforms to block access to 95 accounts linked to exiled Chinese tycoon Guo Wengui. These accounts posted over 120 times from April 17 to May 10, alleging foreign interference in Singapore’s leadership transition. The Home Affairs Ministry stated that the posts suggested a foreign actor influenced the selection of Singapore’s new prime minister.

Singapore’s Foreign Interference (Countermeasures) Act, enacted in October 2021, was used for the first time to address the issue. Guo Wengui, who was recently convicted of fraud in the US, has a long history of opposing Beijing. Together with former Trump adviser Steve Bannon, he launched the New Federal State of China, a movement aimed at overthrowing China’s Communist Party.

The ministry expressed concern that Guo’s network could spread false narratives detrimental to Singapore’s interests and sovereignty. Blocking these accounts was deemed necessary to prevent potential hostile information campaigns targeting Singapore.

Guo and his affiliated organisations have been known to push various Singapore-related narratives. Their coordinated actions and previous attempts to use Singapore to advance their agenda demonstrate their capacity to undermine the country’s social cohesion and sovereignty.

Musk’s Grok AI struggles with news accuracy

Grok, Elon Musk’s AI model available on the X platform, ran into significant accuracy problems following the attempted assassination of former President Donald Trump. The model posted incorrect headlines, including one falsely claiming that Vice President Kamala Harris had been shot and another wrongly identifying the shooter as an antifa member. These errors stemmed from Grok’s inability to discern sarcasm and its tendency to repeat unverified claims circulating on X.

Since announcing plans to develop TruthGPT, Musk has promoted Grok as a revolutionary tool for news aggregation, leveraging real-time posts from millions of users. Despite its potential, the incident underscores Grok’s limitations, particularly in handling breaking news. The model’s humorous design can also be a drawback, leading to the spread of misinformation and confusion.

The reliance on AI for news summaries raises concerns about accuracy and context, especially during critical events. Former Facebook public-policy director Katie Harbath emphasized the need for human oversight in providing context and verifying facts. The incident with Grok mirrors challenges faced by other AI models, such as OpenAI’s ChatGPT, which includes disclaimers to manage user expectations.