US prosecutors intensify efforts to combat AI-generated child abuse content

US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.

Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.

The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.

Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.

Mekong partnership strengthens regional digital security

The Mekong-US Partnership (MUSP) recently hosted a policy dialogue on online scams, bringing together government representatives from Thailand, the US, and Vietnam. The seminar, held in Bangkok, focused on addressing cybersecurity issues and fostering cooperation to combat online crime across the Mekong region. The event was organised by the Ministry of Foreign Affairs and the Stimson Center, with support from the US Department of State.

Discussions centred around strategies to prevent online scams, enhance risk management, and ensure the security of digital financial systems. Thai officials, including Ekapong Harimcharoen from the Ministry of Digital Economy and Society, highlighted national policies and shared insights with international partners. Participants explored collaborative efforts to build a secure online environment and promote regional connectivity under the MUSP framework.

Thailand is taking significant steps to expand its digital economy, projected to contribute 11% to GDP by 2027. Several laws and initiatives are already in place, such as the Personal Data Protection Act (PDPA) and the Cyber Security Act. These measures aim to protect data, promote responsible AI development, and safeguard critical infrastructure sectors including healthcare, banking, and telecommunications.

With remote work and cloud technologies becoming more prominent, the demand for cybersecurity solutions is growing. Thailand aims to position itself as a regional leader in information and communications technology while tackling the evolving challenges of cybercrime. Cooperation under the MUSP framework is expected to enhance resilience in the digital landscape of the Mekong sub-region.

Meta’s oversight board investigates anti-immigration posts on Facebook

Meta’s Oversight Board has initiated a detailed investigation into how the company handles anti-immigration content on Facebook, following numerous user complaints. Helle Thorning-Schmidt, co-chair of the board and former Danish prime minister, underscored the crucial task of balancing free speech with the need to protect vulnerable groups from hate speech.

The investigation focuses on two contentious posts in particular. The first is a meme from a page linked to Poland’s far-right Confederation party, featuring former prime minister Donald Tusk in a racially charged image that alludes to the EU’s immigration pact. The image uses language perceived as a racial slur in Poland, raising ethical concerns about its impact. The second case involves an AI-generated image posted on a German Facebook page opposing leftist and green parties. It portrays a woman with stereotypically Aryan features making a stop gesture, with accompanying text condemning immigrants as ‘gang-rape specialists,’ a narrative linked to perceived outcomes of the Green Party’s immigration policies. The portrayal not only uses inflammatory rhetoric but also touches on deeply sensitive cultural issues within Germany.

Thorning-Schmidt highlighted the importance of examining Meta’s current approach to managing ‘coded speech’—subtle language or imagery that carries derogatory implications while avoiding direct violations of community standards.

The board’s investigation will assess whether Meta’s policies on hate speech are robust enough to protect individuals and communities at risk of discrimination, while still allowing for critical discourse on immigration matters. Meta’s policy is designed to protect refugees, migrants, immigrants, and asylum seekers from severe attacks while allowing critique of immigration laws.

Why does it matter?

The outcome of this investigation could prompt significant changes in how Meta moderates content on sensitive topics like immigration, striking a balance between curbing hate speech and preserving freedom of expression. Moreover, the Oversight Board’s willingness to take on politically sensitive posts illustrates the broader challenge social media platforms face in moderating content that walks the fine line between free expression and incitement to division. It highlights the ongoing debate over the role these platforms should play in managing nuanced or politically sensitive content, and the board’s decision could set a precedent.

Intel faces scrutiny as China calls for security review over national security concerns

The Cybersecurity Association of China (CSAC) has urged a security review of Intel’s products in China, alleging that the US chipmaker poses a national security risk. Although CSAC is an industry group, it has strong connections to the Chinese government, and its claims may prompt action from the Cyberspace Administration of China (CAC).

CSAC’s post on WeChat accuses Intel’s chips, including its Xeon processors used for AI, of containing vulnerabilities and backdoors allegedly tied to the US NSA. The group warns that using Intel products threatens China’s national security and critical infrastructure.

This recommendation comes amid growing US-China tensions over technology and trade. Last year, the CAC banned Chinese infrastructure operators from using products from Micron Technology after a security review, raising concerns that Intel could face a similar outcome.

Intel’s China unit responded, emphasising its commitment to product safety and quality. The company stated on its WeChat account that it will cooperate with authorities to clarify concerns. If the CAC carries out a security review, it could hurt Intel’s sales in China, a significant market for the company. Intel’s shares recently dropped 2.7% in US premarket trading.

Ukraine accuses Russia of intensifying AI-driven disinformation

Russia is using generative AI to ramp up disinformation campaigns against Ukraine, warned Ukraine’s Deputy Foreign Minister, Anton Demokhin, during a cyber conference in Singapore. He explained that AI is enabling Russia to spread false narratives on a larger and more complex scale, making it increasingly difficult to detect and counter. The spread of disinformation is a growing focus for Russia, alongside ongoing cyberattacks targeting Ukraine.

Ukrainian officials have previously reported that Russia’s FSB and military intelligence agencies are behind many of these efforts, with the goal of undermining public trust and spreading confusion. Demokhin stressed that Russia’s disinformation efforts are global, calling for international cooperation to tackle this emerging threat. He also mentioned that Ukraine is using AI to track these campaigns but declined to comment on any offensive cyber operations.

Meanwhile, other Russian cyberattacks are targeting Ukraine’s critical infrastructure and supply chains, seeking to disrupt essential services. Ukraine continues to collaborate with the International Criminal Court on investigating Russian cyber activities as potential war crimes.

DOJ issues warning on trade association information exchanges

The US Department of Justice (DOJ) has released a significant Statement of Interest, urging scrutiny of surveys and information exchanges managed by trade associations. The DOJ expressed concerns that such exchanges may create unique risks to competition, particularly when competitors share sensitive information exclusively among themselves.

According to the DOJ, antitrust laws will evaluate the context of any information exchange to determine its potential impact on competition. Sharing competitively sensitive information could disproportionately benefit participating companies at the expense of consumers, workers, and other stakeholders. The department noted that advancements in AI technology have intensified these concerns, allowing large amounts of detailed information to be exchanged quickly, potentially heightening the risk of anticompetitive behaviour.

This guidance follows the DOJ’s withdrawal of long-standing rules that established “safety zones” for information exchanges, which previously indicated that certain types of sharing were presumed lawful. By retracting this guidance, the DOJ signals a shift toward a more cautious, case-by-case approach, urging businesses to prioritise proactive risk management.

The DOJ’s statement, made in relation to an antitrust case in the pork industry, has wider implications for various sectors, including real estate. It highlights the need for organisations, such as Multiple Listing Services (MLS) and trade associations, to evaluate their practices and avoid environments that could lead to price-fixing or other anticompetitive behaviours. The DOJ encourages trade association executives to review their information-sharing protocols, educate members on legal risks, and monitor practices to ensure compliance with antitrust laws.

SimpliSafe launches new outdoor monitoring solution

SimpliSafe has launched the Active Guard Outdoor Protection service, enhancing its security offerings with a combination of AI and human monitoring. Priced at $50 per month, this new tier builds on its $32 indoor monitoring plan, providing 24/7 protection for outdoor spaces through advanced surveillance.

The new service relies on the Outdoor Security Camera Series 2, which includes an ‘AI for the Familiar Face’ capability that minimises false alarms by identifying known visitors. If an unrecognised person is detected, a human agent is alerted and can intervene by activating lights, triggering a siren, or notifying the authorities.

Executives at SimpliSafe emphasise that human agents retain the final decision-making authority, using AI only as a support tool. Hooman Shahidi, SVP of Product, stated that the company prioritises human judgement and workforce diversity to ensure fair monitoring practices. CEO Christian Cerda noted that while the company explores generative AI, it remains cautious about implementing new technologies.

The Series 2 camera costs $200 and offers HD recording, a 140-degree field of view, and two-way communication. It can be powered by batteries or connected to a power source and is waterproof for outdoor use. SimpliSafe, founded in 2006, operates primarily in the US but has expanded to the UK since 2019.

Microsoft warns of rising cyber threats from nations

A recent Microsoft report claims that Russia, China, and Iran are increasingly collaborating with cybercriminals to conduct cyber espionage and hacking operations. This partnership blurs the lines between state-directed activities and the illicit financial pursuits typical of criminal networks. National security experts emphasise that this collaboration allows governments to amplify their cyber capabilities without incurring additional costs while offering criminals new profit avenues and the security of government protection.

The report, which analyses cyber threats from July 2023 to June 2024, highlights the significant increase in cyber incidents, with Microsoft reporting over 600 million attacks daily. Russia has focused its efforts primarily on Ukraine, attempting to infiltrate military and governmental systems while spreading disinformation to weaken international support. Meanwhile, as the US election approaches, both Russia and Iran are expected to intensify their cyber operations aimed at American voters.

Despite allegations, countries like China, Russia, and Iran have denied collaborating with cybercriminals. China’s embassy in Washington dismissed these claims as unfounded, asserting that the country actively opposes cyberattacks. Efforts to combat foreign disinformation are increasing, yet the fluid nature of the internet complicates these initiatives, as demonstrated by the rapid resurgence of websites previously seized by US authorities.

Overall, the evolving landscape of cyber threats underscores the growing interdependence between state actors and cybercriminals, posing significant risks to national security and public trust.

Kenya strengthens ICT sector through new regulatory framework and ICT Authority Bill 2024

The Communications Authority of Kenya (CA) has mandated that all dealers of ICT equipment, including manufacturers, vendors, importers, and service providers, undergo a type approval process before connecting devices to the Public Switched Telecommunication Network (PSTN).

The requirement applies to a wide range of devices, such as smartphones, routers, modems, tablets, vehicle trackers, and other networking equipment, ensuring that these products meet national and internationally recognised standards. The directive aims to safeguard consumer health, uphold the public interest, secure telecommunications networks within the country, and enforce compliance through legal penalties.

Specifically, non-compliance can lead to fines reaching up to Ksh5 million ($38,759) and prison sentences of up to three years for serious infractions, while lesser offences carry penalties of up to Ksh250,000 ($1,937). Furthermore, the CA’s regulations address cybercrime by equipping authorities with the means to detect, prevent, investigate, and prosecute computer-related offences, thereby contributing to a safer digital environment in Kenya.

Additionally, to boost revenue, the Kenyan government plans to bar devices imported without proper tax documentation from being activated on local networks, specifically targeting phones and other ICT equipment lacking tax records. The move strengthens regulatory control over ICT imports, promoting fair taxation and compliance with local laws.

Moreover, the proposed ICT Authority Bill 2024, introduced in May, will require ICT operators to secure operational licenses, further enhancing the quality, security, and efficiency of ICT services in Kenya. Ultimately, the bill aims to support Kenya’s digital economy and ensure that ICT infrastructure aligns with national development goals.

Thousands of users impacted by Facebook and Instagram outage

On Monday, Meta’s social media platforms Facebook and Instagram experienced a significant outage affecting thousands of users across the US. According to Downdetector, a website that tracks service interruptions, the outage peaked around 1:35 p.m. ET, with over 12,000 users reporting issues with Facebook and more than 5,000 with Instagram.

By 2:09 p.m. ET, the number of reported problems had decreased significantly to around 659 for Facebook and 450 for Instagram. Downdetector’s data is based on user-submitted reports, so the actual number of impacted users may differ.

Meta Platforms did not respond to requests for comment. Earlier this year, a similar issue disrupted services globally for more than two hours, affecting hundreds of thousands of users. That event saw 550,000 disruption reports for Facebook and around 92,000 for Instagram.