The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics denouncing the platform’s moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described these chatbots as a ‘reprehensible’ failure of moderation.
Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai after claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. When informed of the Russell and Ghey chatbots, it removed them from the platform, saying it strives to ensure user protection but acknowledges the challenges of regulating AI.
Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.
The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.
Clacton County High School in Essex, UK, has issued a warning to parents about a WhatsApp group called ‘Add Everyone,’ which reportedly exposes children to explicit and inappropriate material. In a Facebook post, the school advised parents to ensure their children avoid joining the group, urging them to block and report it if necessary. The warning comes amid rising concern about online safety for young people, though the school noted it had no reports of its students joining the group.
Parents have reacted strongly to the warning, with many sharing experiences of their children being added to groups containing inappropriate content. One parent described it as ‘absolutely disgusting’ and ‘scary’ that young users could be added so easily, while others expressed relief that their children left the group immediately. A similar alert was issued by Clacton Coastal Academy, which posted on social media about explicit content circulating in WhatsApp groups, though it clarified that no students at their academy had reported it.
Essex Police are also investigating reports from the region about unsolicited and potentially illegal content being shared via WhatsApp. The force emphasised that, while the app can be useful for staying connected, it can also become a channel for abusive material. Officers encouraged parents and students to flag harmful content through online reporting tools and reminded families to discuss online safety measures with their children.
Russia has slapped Google with an astronomical fine of $20 decillion, or 2 undecillion rubles, over the tech giant’s removal of Russian state-backed TV channels from YouTube. The penalty, a 2 followed by 36 zeros in rubles, has been mounting for four years since the initial court case in 2020; it far exceeds Google’s entire market value and dwarfs even global GDP, which stands at around $110 trillion.
Legal experts note that such an enormous fine is largely symbolic. Roman Yankovsky from the HSE Institute of Education explained that Russia has no real way to enforce this penalty internationally, as Google’s market cap sits at just over $2 trillion. The original case stemmed from YouTube’s ban of the Russian channel Tsargrad, following US sanctions imposed on the channel’s parent company.
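Press accounts describe the mechanism behind the compounding: a daily penalty, reported at 100,000 rubles, that doubles every week without cap. The toy calculation below treats those reported terms as assumptions and shows how such a schedule reaches undecillions within a couple of years:

```python
# Toy reconstruction of how a doubling fine reaches undecillions.
# Assumptions (from press reports, not court documents): an initial
# daily penalty of 100,000 rubles that doubles every week, uncapped.

INITIAL_DAILY_FINE = 100_000   # rubles per day (assumed)
TARGET = 2 * 10**36            # ~2 undecillion rubles

total, week = 0, 0
daily = INITIAL_DAILY_FINE
while total < TARGET:
    total += daily * 7         # one week at the current daily rate
    daily *= 2                 # weekly doubling
    week += 1

print(f"Target exceeded after {week} weeks (~{week / 52:.1f} years)")
print(f"Accumulated fine: {total:.3e} rubles")
# Under these assumptions the threshold falls after ~102 weeks.
```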
While Google hasn’t commented, analysts view the fine as part of Russia’s broader pushback against Western tech companies and their content policies.
ForceField is unveiling its new technology at TechCrunch Disrupt 2024, introducing tools aimed at fighting deepfakes and manipulated content. Unlike platforms that flag AI-generated media after the fact, ForceField authenticates content directly from devices, ensuring the integrity of digital evidence. Using its HashMarq API, the startup verifies the authenticity of data streams by generating a secure digital signature in real time.
The company uses blockchain-based smart contracts to safeguard content, without relying on cryptocurrencies or web3 products. The system authenticates data collected across platforms ranging from mobile apps to surveillance cameras. By recording metadata such as time, location, and surrounding signals, ForceField gives journalists, law enforcement, and organisations the context needed to verify the accuracy of submitted media.
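ForceField has not published HashMarq’s internals, but the capture-time signing it describes typically follows a familiar pattern: hash the media bytes together with their metadata at the moment of capture, sign the digest with a device-held key, and let any later verifier detect tampering. The sketch below is a minimal illustration of that pattern using Python’s standard library; the HMAC key, function names, and field values are all hypothetical stand-ins, not the real API.

```python
import hashlib
import hmac
import json

# Hypothetical device-held secret; a production system would use an
# asymmetric keypair in secure hardware, not a shared HMAC key.
DEVICE_KEY = b"demo-device-key"

def seal_capture(media: bytes, metadata: dict) -> dict:
    """Bind media bytes to their metadata with a capture-time signature."""
    meta_blob = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(media + meta_blob).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "digest": digest, "signature": signature}

def verify_capture(media: bytes, record: dict) -> bool:
    """Recompute the digest and check the signature; any edit breaks both."""
    meta_blob = json.dumps(record["metadata"], sort_keys=True).encode()
    digest = hashlib.sha256(media + meta_blob).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["digest"] and hmac.compare_digest(expected, record["signature"])

record = seal_capture(b"<video bytes>", {"time": "2024-10-30T10:00Z", "lat": 42.05, "lon": -80.09})
assert verify_capture(b"<video bytes>", record)        # untouched footage passes
assert not verify_capture(b"<edited bytes>", record)   # any alteration fails
```

Anchoring such a record in a smart contract would then require storing only the short digest on-chain, which is one way a system can lean on a blockchain without touching cryptocurrencies.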
ForceField was inspired by CEO MC Spano’s personal experience in 2018, when she struggled to submit video evidence following an assault. Her frustration with the justice system sparked the creation of technology that could simplify evidence submission and ensure its acceptance. Now the startup is working with clients such as Erie Insurance and plans to launch commercially by early 2025, focusing initially on the insurance sector but with applications in media and law enforcement.
The company, which is entirely woman-led, has gained financial backing from several angel investors and strategic partnerships. Spano aims to raise a seed round by year’s end, highlighting the importance of diversity in tech leadership. As AI-generated content continues to flood the internet, ForceField’s tools offer a new way to validate authenticity and restore trust in digital information.
On 21 and 24 October, DiploFoundation provided just-in-time reporting from the UN Security Council sessions on scientific development and on women, peace, and security. Supported by Switzerland, this initiative aims to enhance the work of the UN Security Council and the broader UN system.
At the core of this effort is DiploAI, an advanced platform shaped by years of training on UN materials, which played a crucial role in unlocking the knowledge generated by the Security Council’s deliberations. This knowledge, often trapped in video recordings and transcripts, is now more accessible, providing valuable insights for diplomacy and global peace.
Unlocking the power of AI for peace and security
AI-supported reporting from the UN Security Council (UNSC) demonstrates the potential of combining cutting-edge technology with deep expertise in peace and security. This effort is part of ongoing work by DiploAI, which has been providing detailed reports on Security Council sessions in 2023-2024 and has covered the UN General Assembly (UNGA) for eight consecutive years. DiploAI is actively contributing to expanding the UN’s knowledge ecosystem.
Seamless interplay between experts and AI
The success of this initiative lies in the seamless interplay between DiploAI and security experts well-versed in UNSC procedures. The collaboration began with tailoring the AI system to the unique needs of the Council, using input from experts and diplomats to build a relevant knowledge base. Experts supplied key documents and session materials, which enhanced the AI’s contextual understanding. Feedback loops on keywords, topics, and focus areas ensured the AI’s output remained both accurate and diplomatically relevant.
A pivotal moment in this collaboration was the analysis of the New Agenda for Peace, where Security Council experts helped DiploAI identify over 400 critical topics, laying the foundation for a comprehensive taxonomy on peace and security at the UN. This expertise, combined with DiploAI’s technical capabilities, has resulted in an AI system attuned to the subtleties of diplomatic language and priorities. Furthermore, the project introduced a Knowledge Graph, a visual tool for displaying sentiment and relational analysis between statements and topics, which adds new depth to the analysis of Council sessions.
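Diplo has not published the Knowledge Graph’s schema, but the underlying idea of linking statements to topics through sentiment-weighted edges can be sketched in a few lines. Everything below, from the node types to the scores, is invented for illustration:

```python
# Minimal sketch of a statement-topic knowledge graph with sentiment-
# weighted edges. Structure and scores are illustrative, not DiploAI's.

graph = {
    "nodes": {
        "stmt_1": {"type": "statement", "speaker": "Delegate A"},
        "stmt_2": {"type": "statement", "speaker": "Delegate B"},
        "climate change": {"type": "topic"},
        "peacekeeping": {"type": "topic"},
    },
    "edges": [
        # (statement, topic, sentiment in [-1, 1])
        ("stmt_1", "climate change", 0.6),
        ("stmt_1", "peacekeeping", 0.1),
        ("stmt_2", "climate change", -0.3),
    ],
}

def topic_sentiment(graph: dict, topic: str) -> float:
    """Average sentiment of all statements mentioning a topic."""
    scores = [s for _, t, s in graph["edges"] if t == topic]
    return sum(scores) / len(scores) if scores else 0.0

print(round(topic_sentiment(graph, "climate change"), 2))  # 0.15
```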
Building on this foundation, DiploAI developed a custom chatbot capable of moving beyond standard Q&A interactions. By integrating data from all 2024 sessions and associated documents, the chatbot allows users to interact conversationally with the content, providing in-depth answers and real-time insights. This evolution marks a significant leap forward in accessing and understanding diplomatic data—shifting from static reports to interactive exploration of session materials.
AI and diplomatic sensitivities
The development of DiploAI’s Q&A module, refined through approximately ten iterations with feedback from UNSC experts, underscores the value of human-AI collaboration. This module addresses essential diplomatic questions, with iterative refinements ensuring that responses meet the Council’s standards for accuracy and relevance. The result is an AI system capable of addressing critical inquiries while respecting the sensitivity required in diplomatic settings.
What’s new?
DiploAI’s suite of tools—including real-time meeting transcription and analysis—has transformed reporting and transparency at the UNSC. By integrating customised AI systems like retrieval-augmented generation (RAG) and knowledge graphs, DiploAI adds context, depth, and relevance to the extracted information. Trained on a vast corpus of diplomatic knowledge generated at Diplo over the last two decades, the AI system generates context-specific responses, providing comprehensive answers to questions about transcribed sessions.
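DiploAI’s code is not public, but the RAG pattern named above is well documented: retrieve the transcript passages most relevant to a question, then pass only those passages to a language model as grounding context. Below is a dependency-free sketch of the retrieval half, with bag-of-words cosine similarity standing in for a real embedding model:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k transcript passages most similar to the question."""
    q = embed(question)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

transcript = [
    "The delegate stressed that climate change multiplies food insecurity risks.",
    "Peacekeeping mandates must protect civilians, the representative argued.",
    "Several speakers linked development financing to long-term stability.",
]

context = retrieve("How was climate change discussed?", transcript)
prompt = "Answer from the context below.\n\n" + "\n".join(context) + \
         "\n\nQ: How was climate change discussed?"
# The assembled prompt would then be sent to a language model.
print(prompt)
```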
Here are some numbers from 10 UNSC meetings that took place between January 2023 and October 2024:
Number of speakers and speech length
Unique speakers: 185
Total time: 201,221 seconds, or 2 days, 7 hours, 53 minutes, 41 seconds (conversion checked in the sketch after this list)
Total speeches: 583
Total length: 396,172 words, or 0.67 ‘War and Peace’ books
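A quick arithmetic check on those totals; the only assumed figure is the length of an English translation of ‘War and Peace’, roughly 587,000 words:

```python
# Verify the time breakdown: 201,221 only adds up when read as seconds.
total_seconds = 201_221
days, rem = divmod(total_seconds, 86_400)
hours, rem = divmod(rem, 3_600)
minutes, seconds = divmod(rem, 60)
print(days, hours, minutes, seconds)             # 2 7 53 41

# Verify the book ratio (assumed ~587,000-word English translation).
WAR_AND_PEACE_WORDS = 587_000
print(round(396_172 / WAR_AND_PEACE_WORDS, 2))   # 0.67
```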
Frequency of selected topics (topic; total mentions; session that mentioned it most):

development: 1,665 mentions; most mentioned in ‘UNSC meeting: Peace and common development’ (919 mentions)
climate change: 451 mentions; most mentioned in ‘UNSC meeting: Climate change and food insecurity’ (329 mentions)
human rights: 360 mentions; most mentioned in ‘UNSC meeting: Peace and common development’ (93 mentions)
civilians: 136 mentions; most mentioned in ‘UNSC meeting: Peacekeeping’ (72 mentions)
international humanitarian law: 27 mentions; most mentioned in ‘UNSC meeting: Multilateral cooperation’ (6 mentions)
In conclusion…
DiploAI’s reporting from the Security Council, supported by Switzerland, shows how AI can enhance diplomacy while staying grounded in human expertise and practical needs. This blend of technical capability and domain-specific knowledge demonstrates how AI, when developed collaboratively, can contribute to more inclusive, informed, and impactful diplomacy.
Google has extended its AI Overviews in Search to more than 100 countries and territories. Initially launched in the US in May, the feature provides summarised snapshots at the top of search results. It now serves over one billion users globally each month.
The expanded rollout introduces more language options, including English, Hindi, Indonesian, Japanese, Portuguese, and Spanish. Google also aims to make the tool more useful with features like in-line links, which embed source citations directly within the summary text and can drive traffic back to the cited websites.
AI Overviews are also playing a role in the company’s advertising strategy. Ads will now appear within the AI-generated summaries for mobile users in the US, marking a new direction for Google’s ad business by integrating advertising more seamlessly.
Despite some challenges at launch, including incorrect information that raised concerns, Google has made significant improvements. Fine-tuning efforts are ongoing, and the feature has also been introduced to Google Shopping, further expanding its presence across the platform.
A new podcast titled Virtually Parkinson brings back the voice of Sir Michael Parkinson, using AI technology to simulate the late chat show host. Produced by Deep Fusion Films with support from Parkinson’s family, the series aims to recreate his interview style across eight episodes, featuring new conversations with prominent guests.
Mike Parkinson, son of the late broadcaster, explained that the family wanted listeners to know the voice is an AI creation, ensuring transparency. He noted the project was inspired by conversations he had with his father before he passed, saying Sir Michael would have found the concept intriguing, despite being a technophobe.
The release comes amid growing controversy around AI’s role in the creative arts, with many actors and presenters fearing it could undermine their careers. Though AI is often criticised for replacing real talent, Parkinson’s son argued that the podcast offers a unique way to extend his father’s legacy, without replacing a living presenter.
Co-creator Jamie Anderson clarified that the AI version acts as an autonomous host, conducting interviews in a way reflective of Sir Michael’s original style. The podcast seeks to introduce his legacy to younger audiences, while also raising ethical questions about the use of AI to recreate deceased individuals.
Meta Platforms has expressed concerns over Malaysia’s plan to require social media platforms to obtain regulatory licences by 1 January 2025. The Malaysian government’s new regulation aims to combat online threats like scams, cyberbullying, and sexual crimes. However, Meta’s director of public policy for Southeast Asia, Rafael Frankel, criticised the timeline, arguing it’s ‘exceptionally accelerated’ and lacks clear guidelines, potentially hindering digital innovation and economic growth.
Malaysia announced in July that any social media or messaging service with over eight million users would need to comply or face legal repercussions. The policy has sparked backlash from industry groups, including Meta, which asked the government in August to reconsider. Communications Minister Fahmi Fadzil reiterated that tech companies must align with local laws to continue operating in Malaysia, signalling no plans for delay.
Frankel emphasised that Meta has yet to decide whether to apply for the licence due to the vague regulatory framework, pointing out that similar regulations typically take years to finalise to avoid stifling innovation. While Malaysia’s communications ministry has yet to comment, Fahmi recently met with Meta representatives, thanking them for their cooperation but urging more action against harmful content, particularly regarding minors.
Meta has stated its shared commitment to online safety and is collaborating with Malaysian authorities to remove harmful content. Frankel argued that Meta already prioritises online safety and doesn’t require a licensing framework. Despite ongoing concerns, Meta hopes to work with the government to find a middle ground on the regulations before implementation.
A new app called Loops is aiming to be the TikTok of the fediverse, an open-source social network ecosystem. Loops, which just opened for signups, will feature short, looping videos similar to TikTok’s format. Although still in development, the platform plans to be open-source and integrate with ActivityPub, the protocol that powers other federated apps like Mastodon and Pixelfed.
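ActivityPub integration means a Loops video would travel between servers as an ActivityStreams object that Mastodon or Pixelfed instances can fetch and render. Loops has not published its object format, so the example below is a hypothetical sketch using only standard ActivityStreams vocabulary, with invented domains and IDs:

```python
import json

# Hypothetical ActivityStreams "Create" activity for a short looping video.
# The vocabulary (Create, Video, @context) is standard ActivityPub; the
# domain, IDs, and Loops-specific details are invented for illustration.
create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://loops.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Video",
        "id": "https://loops.example/videos/123",
        "name": "My first loop",
        "attributedTo": "https://loops.example/users/alice",
        "url": "https://loops.example/media/123.mp4",
        "duration": "PT15S",  # ISO 8601 duration: a 15-second loop
    },
}

print(json.dumps(create_activity, indent=2))
```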
Loops is the latest project from Daniel Supernault, creator of Pixelfed, and will operate under the Pixelfed umbrella. Unlike mainstream social media, Loops promises not to sell user data to advertisers, nor will it use content to train AI models. Users will retain full ownership of their videos, granting Loops only limited permissions for use.
Like other fediverse platforms, Loops will rely on user donations for funding rather than investor support, with plans to accept contributions through Patreon and similar platforms. The app will also allow users on other federated networks, like Mastodon, to interact with Loops content seamlessly. Loops is currently seeking community input on its policies and looking for moderators to guide the platform’s early stages.
Brazil’s Collective Defense Institute, a consumer rights organisation, has launched two lawsuits against the Brazilian divisions of TikTok, Kwai, and Meta Platforms, seeking damages of 3 billion reais ($525 million). The lawsuits accuse these companies of failing to implement adequate protections against excessive social media use by young users, which they say can harm children’s mental health.
The lawsuits highlight a growing debate over social media regulation in Brazil, especially after a high-profile legal dispute between Elon Musk’s X platform and a Brazilian Supreme Court justice led to significant fines. The consumer rights group is pushing for these platforms to establish clear data protection protocols and issue stronger warnings about the risks of social media addiction for minors.
Based on research into the effects of unregulated social media usage, particularly among teenagers, the lawsuits argue for urgent changes. Attorney Lillian Salgado, representing the plaintiffs, stressed the need for Brazil to adopt safety measures similar to those used in developed countries, including modifying algorithms, managing user data for those under 18, and enhancing account oversight for minors.
In response, Meta stated it has prioritised youth safety for over a decade, creating over 50 tools to protect teens. Meta also announced that a new ‘Teen Account’ feature on Instagram will soon launch in Brazil, automatically limiting what teenagers see and controlling who can contact them. TikTok said it had not received notice of the case, while Kwai emphasised that user safety, particularly for minors, is a primary focus.