US prosecutors intensify efforts to combat AI-generated child abuse content

US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.

Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.

The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.

Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.

US military explores deepfake use

US Special Operations Command (SOCOM) is pursuing the development of sophisticated deepfake technology to create virtual personas indistinguishable from real humans, according to a procurement document from the Department of Defense’s Joint Special Operations Command (JSOC).

These artificial avatars would operate on social media and online platforms, featuring realistic expressions and high-quality images akin to government IDs. JSOC also seeks technologies to produce convincing facial and background videos, including ‘selfie videos’, to avoid detection by social media algorithms.

US government agencies have previously announced frameworks to combat foreign information manipulation, citing national security threats from these technologies. Despite recognising the global dangers posed by deepfakes, SOCOM’s initiative underscores a willingness to engage with the technology for potential military advantage.

Experts have expressed concern over the ethical implications and the potential for increased misinformation, warning that deepfakes are inherently deceptive, with no legitimate applications beyond deceit, and that US adoption could encourage further misuse worldwide. Such practices also risk eroding public trust in government communications, a risk compounded by the perceived hypocrisy of deploying the technology.

Why does it matter?

This plan reflects an ongoing interest in leveraging digital manipulation for military purposes, despite previous incidents where platforms like Meta dismantled similar US-linked networks. It further shows a contradiction in the US’s stance on deepfake use, as it simultaneously condemns similar actions by countries like Russia and China.

X redirects users’ lawsuits to conservative Texas court

X (formerly Twitter) has updated its terms of service, requiring users to file any lawsuits against the company in the US District Court for the Northern District of Texas, a court known for conservative rulings. The change, effective November 15, appears to align with Elon Musk’s increasing support for conservative causes, including backing Donald Trump’s 2024 presidential campaign. Critics argue the move is an attempt to ‘judge-shop’, as the Northern District has become a popular destination for right-leaning litigants seeking to block parts of President Biden’s agenda.

X’s headquarters are in Bastrop, Texas, located in the Western District, but the company has chosen the Northern District for legal disputes. This district already hosts two lawsuits filed by X, including one against Media Matters after the watchdog group published a report linking ads on the platform to posts promoting Nazism. The move to steer legal cases to this specific court highlights the company’s efforts to benefit from a legal environment more favourable to conservative causes.

Meta’s oversight board investigates anti-immigration posts on Facebook

Meta’s Oversight Board has initiated a detailed investigation into how the company handles anti-immigration content on Facebook, following numerous user complaints. Helle Thorning-Schmidt, co-chair of the board and former Danish prime minister, underscored the crucial task of balancing free speech with the need to protect vulnerable groups from hate speech.

The investigation focuses on two contentious posts. The first is a meme from a page linked to Poland’s far-right Confederation party, featuring Prime Minister Donald Tusk in a racially charged image alluding to the EU’s immigration pact; the image uses language perceived as a racial slur in Poland, raising ethical concerns about its impact. The second case involves an AI-generated image posted on a German Facebook page opposing leftist and green parties. It portrays a woman with Aryan features making a stop gesture, with accompanying text condemning immigrants as ‘gang-rape specialists’, a narrative the post links to the Green Party’s immigration policies. The portrayal not only uses inflammatory rhetoric but also touches on deeply sensitive cultural issues within Germany.

Thorning-Schmidt highlighted the importance of examining Meta’s current approach to managing ‘coded speech’—subtle language or imagery that carries derogatory implications while avoiding direct violations of community standards.

The board’s investigation will assess whether Meta’s policies on hate speech are robust enough to protect individuals and communities at risk of discrimination, while still allowing for critical discourse on immigration matters. Meta’s policy is designed to protect refugees, migrants, immigrants, and asylum seekers from severe attacks while allowing critique of immigration laws.

Why does it matter?

The outcome of this investigation could prompt significant changes in how Meta moderates content on sensitive topics like immigration, balancing the curbing of hate speech with the preservation of freedom of expression. The Oversight Board’s willingness to take on politically sensitive posts also illustrates the broader challenge social media platforms face in moderating content that straddles the line between free expression and incitement to division. It underscores the ongoing debate over the role these platforms should play in managing nuanced or politically charged content, and could set a precedent for how similar cases are handled.

DOJ issues warning on trade association information exchanges

The US Department of Justice (DOJ) has released a significant Statement of Interest, urging scrutiny of surveys and information exchanges managed by trade associations. The DOJ expressed concerns that such exchanges may create unique risks to competition, particularly when competitors share sensitive information exclusively among themselves.

According to the DOJ, antitrust analysis will weigh the context of any information exchange to determine its potential impact on competition. Sharing competitively sensitive information could disproportionately benefit participating companies at the expense of consumers, workers, and other stakeholders. The department noted that advances in AI have intensified these concerns, since large amounts of detailed information can now be exchanged quickly, heightening the risk of anticompetitive behaviour.

This guidance follows the DOJ’s withdrawal of long-standing rules that established ‘safety zones’ for information exchanges, which previously indicated that certain types of sharing were presumed lawful. By retracting this guidance, the DOJ signals a shift toward a more cautious, case-by-case approach, urging businesses to prioritise proactive risk management.

The DOJ’s statement, made in relation to an antitrust case in the pork industry, has wider implications for various sectors, including real estate. It highlights the need for organisations, such as Multiple Listing Services (MLS) and trade associations, to evaluate their practices and avoid environments that could lead to price-fixing or other anticompetitive behaviours. The DOJ encourages trade association executives to review their information-sharing protocols, educate members on legal risks, and monitor practices to ensure compliance with antitrust laws.

Electronics and mobility sectors unite in Japan

Japan’s largest annual electronics event opened alongside a mobility show, marking the first joint trade fair of its kind. The collaboration reflects the increasing convergence of technology and automotive industries, especially as vehicles become more autonomous and connected.

The trade show, hosted by the Japan Electronics and Information Technology Industries Association (JEITA) and Japan Automobile Manufacturers Association (JAMA), aims to promote cross-industry innovation. AI emerged as a core theme, with around half of the 800 tech exhibitors presenting AI-driven products and solutions.

Toyota Motor showcased a portable hydrogen tank capable of powering electric generators during disasters, promoting hydrogen as a sustainable energy source. Panasonic highlighted its perovskite solar cells, which can be installed on car windows to enhance power efficiency for electric vehicles, while Sony demonstrated a safety system that uses image sensors to detect driver fatigue.

NEC presented an AI-powered service capable of summarising movies or creating accident reports from dashcam footage, offering applications in various fields. TDK introduced a brain-inspired semiconductor chip that reduces AI electricity consumption to one-hundredth of current levels. The fair runs until Friday at Chiba’s Makuhari Messe, with free admission for online registrants.

Australia to restrict teen social media use

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.

The government plans to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have faced challenges as tech-savvy users bypass restrictions through virtual private networks (VPNs).

Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.

Meta faces lawsuits over teen mental health concerns

A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.

Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of the US Communications Decency Act, which offers online platforms legal protections, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.

The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.

Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as new Teen Accounts on Instagram. Google also rejected the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.

Big Tech’s AI models fall short of new EU AI Act’s standards

A recent assessment of some of the top AI models has revealed significant gaps in compliance with EU regulations, particularly in cybersecurity resilience and the prevention of discriminatory outputs. The study, conducted by Swiss startup LatticeFlow in collaboration with EU officials, tested generative AI models from major tech companies including Meta, OpenAI, and Alibaba. The findings are part of an early attempt to measure compliance with the EU’s upcoming AI Act, which will be phased in over the next two years. Companies that fail to meet these standards could face fines of up to €35 million or 7% of their global annual turnover.

LatticeFlow’s ‘Large Language Model (LLM) Checker’ evaluated the AI models across multiple categories, assigning scores between 0 and 1. While many models received respectable scores, such as Anthropic’s ‘Claude 3 Opus,’ which scored 0.89, others revealed vulnerabilities. For example, OpenAI’s ‘GPT-3.5 Turbo’ received a low score of 0.46 for discriminatory output, and Alibaba’s ‘Qwen1.5 72B Chat’ scored even lower at 0.37, highlighting the persistent issue of AI reflecting human biases in areas like gender and race.
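
By way of illustration only, the short Python sketch below shows how per-category scores on a 0-to-1 scale, such as those reported by the LLM Checker, might be screened against a compliance threshold to flag weak areas. The 0.75 threshold, the category labels, and the example values are assumptions made for this sketch and are not part of LatticeFlow’s published methodology.

```python
# Illustrative sketch only: flag evaluation categories whose scores fall below
# an assumed compliance threshold. The 0.75 threshold and the category labels
# are assumptions for this example, not LatticeFlow's actual methodology.

ASSUMED_THRESHOLD = 0.75

# Example per-category scores on the 0-to-1 scale described in the article.
# The two low values echo figures reported for different models; the third
# is invented to round out the example.
example_scores = {
    "discriminatory output": 0.46,
    "prompt hijacking": 0.42,
    "technical robustness": 0.88,
}


def flag_gaps(scores, threshold=ASSUMED_THRESHOLD):
    """Return (category, score) pairs that fall below the threshold."""
    return [(name, value) for name, value in scores.items() if value < threshold]


if __name__ == "__main__":
    for name, value in flag_gaps(example_scores):
        print(f"Potential compliance gap in '{name}': score {value:.2f} < {ASSUMED_THRESHOLD}")
```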

In cybersecurity testing, some models also struggled. Meta’s ‘Llama 2 13B Chat’ scored 0.42 for ‘prompt hijacking’, a type of cyberattack in which malicious prompts are used to extract sensitive information. Mistral’s ‘8x7B Instruct’ model fared similarly poorly, scoring 0.38. These results underscore the need for tech companies to strengthen security measures to meet the EU’s strict standards.

While the EU is still finalising the enforcement details of its AI Act, expected by 2025, LatticeFlow’s test provides an early roadmap for companies to fine-tune their models. LatticeFlow CEO Petar Tsankov expressed optimism, noting that the test results are mainly positive and offer guidance for companies to improve their models’ compliance with the forthcoming regulations.

The European Commission, though unable to verify external tools, has welcomed this initiative, calling it a ‘first step’ toward translating the AI Act into enforceable technical requirements. As tech companies prepare for the new rules, the LLM Checker is expected to play a crucial role in helping them ensure compliance.

India investigates WhatsApp’s privacy policy

WhatsApp is facing potential sanctions from India’s Competition Commission (CCI) over its controversial 2021 privacy policy update, which has raised significant privacy concerns. The CCI is reportedly preparing to take action against the messaging platform, owned by Meta, for allegedly breaching antitrust laws related to user data handling. The policy, which allows WhatsApp to share certain user data with Meta, has faced widespread criticism from regulators and users who view it as intrusive and unfair.

The CCI’s investigation suggests that WhatsApp’s data-sharing practices, particularly involving business transaction data, may give Meta an unfair competitive advantage, violating provisions against the abuse of dominance. A draft order has been prepared to penalise both WhatsApp and Meta, as the CCI’s director general has submitted findings indicating these violations.

In response, WhatsApp stated that the case is still under judicial review and defended its privacy policy by noting that users had the choice to accept the update without losing access to their accounts. If sanctions are imposed, this could represent a pivotal moment in India’s efforts to regulate major tech firms and establish precedents for the intersection of privacy and competition laws in the digital age.