Meta has announced that it will retrain its AI chatbots to prioritise the safety of teenage users, and that the bots will no longer engage with them on sensitive topics such as self-harm, suicide, or eating disorders.
These are described as interim measures, with more robust safety policies expected in the future. The company also plans to restrict teenagers’ access to certain AI characters that could lead to inappropriate conversations, limiting them to characters focused on education and creativity.
The move follows a Reuters report revealing that Meta’s AI had engaged in sexually explicit conversations with underage users, TechCrunch reports. Meta has since revised the internal document cited in the report, stating that it was inconsistent with the company’s broader policies.
The revelations have prompted significant political and legal backlash. Senator Josh Hawley has launched an official investigation into Meta’s AI practices.
At the same time, a coalition of 44 state attorneys general has written to several AI companies, including Meta, emphasising the need to protect children online.
The letter condemned the apparent disregard for young people’s emotional well-being and warned that the AI’s behaviour may breach criminal laws.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Much like checking your doors before bed, it is wise to review your Google account security to ensure only trusted devices have access. Periodic checks can prevent both hackers and acquaintances from spying on your personal data.
The fastest method is visiting google.com/devices, where you can see all logged-in devices. If one looks suspicious, remove it and immediately change your password to block further access.
You can also navigate manually via your profile settings, under the ‘Security’ tab, to view and manage connected devices. On mobile, the Google app provides the same functionality for reviewing and signing out unfamiliar logins.
Beyond devices, third-party services linked to your Google account pose another risk. Abandoned apps or forgotten integrations may be hijacked by attackers, providing a backdoor to your information.
Cleaning up both devices and linked apps significantly reduces exposure. Regular reviews keep your Google account safe and ensure your data remains under your control.
Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.
The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.
Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.
New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.
Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.
Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.
The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.
The probe also uncovered a chatbot impersonating 16-year-old actor Walker Scobell that produced inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen of the bots shortly before the report was published.
A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.
Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.
California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible’, and experts warned Meta could face legal challenges over intellectual property and publicity laws.
The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.
The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.
Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.
Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.
US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.
The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.
OpenAI is preparing to build a significant new data centre in India as part of its Stargate AI infrastructure initiative. The move will expand the company’s presence in Asia and strengthen its operations in its second-largest market by user base.
OpenAI has already registered as a legal entity in India and begun assembling a local team.
The company plans to open its first office in New Delhi later this year. Details regarding the exact location and timeline of the proposed data centre remain unclear, though CEO Sam Altman may provide further information during his upcoming visit to India.
The project represents a strategic step to support the company’s growing regional AI ambitions.
OpenAI’s Stargate initiative, announced by US President Donald Trump in January, involves private sector investment of up to $500 billion for AI infrastructure, backed by SoftBank, OpenAI, and Oracle.
The initiative seeks to develop large-scale AI capabilities across major markets worldwide, with the India data centre potentially playing a key role in the efforts.
The expansion highlights OpenAI’s focus on scaling its AI infrastructure while meeting regional demand. The company intends to strengthen operational efficiency, improve service reliability, and support its long-term growth in Asia by establishing local offices and a significant data centre.
SK Telecom has expanded its partnership with Schneider Electric to develop an AI Data Centre (AIDC) in Ulsan.
Under the deal, Schneider Electric will supply mechanical, electrical and plumbing equipment, such as switchgear, transformers, automated control systems and Uninterruptible Power Supply units.
The agreement builds on a partnership announced at Mobile World Congress 2025 and includes using Schneider’s Electrical Transient Analyser Program within SK Telecom’s data centre management system.
It will allow operations to be optimised through a digital twin model instead of relying only on traditional monitoring tools.
Both companies have also agreed on prefabricated solutions to shorten construction times, reference designs for new facilities, and joint efforts to grow the Energy-as-a-Service business.
A Memorandum of Understanding extends the partnership to other SK Group affiliates, combining battery technologies with Uninterruptible Power Supply and Energy Storage Systems.
Executives said the collaboration would help set new standards for AI data centres and create synergies across the SK Group. It is also expected to support SK Telecom’s broader AI strategy while contributing to sustainable and efficient infrastructure development.
Estonia’s government-backed AI teaching tool, developed under the €1 million TI-Leap programme, faces hurdles before reaching schools. Legal restrictions and waning student interest have delayed its planned September rollout.
Officials in Estonia stress that regulations to protect minors’ data remain incomplete. To ensure compliance, the Ministry of Education is drafting changes to the Basic Schools and Upper Secondary Schools Act.
Yet, engagement may prove to be the bigger challenge. Developers note students already use mainstream AI for homework, while the state model is designed to guide reasoning rather than supply direct answers.
Educators say success will depend on usefulness. The AI will be piloted in 10th and 11th grades, alongside teacher training; studies show that more than 60% of students already rely on AI tools.
Security researchers have warned Salesforce customers after hackers stole data by exploiting OAuth access tokens linked to the Salesloft Drift integration, highlighting critical cybersecurity flaws.
Google’s Threat Intelligence Group (GTIG) reported that the threat actor UNC6395 used the tokens to infiltrate hundreds of Salesforce environments, exporting large volumes of sensitive information. Stolen data included AWS keys, passwords, and Snowflake tokens.
Experts warn that compromised SaaS integrations pose a central blind spot, since attackers inherit the same permissions as trusted apps and can often bypass multifactor authentication. Investigations are ongoing to determine whether connected systems, such as AWS or VPNs, were also breached.
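The blind spot can be illustrated with a short sketch: once an attacker holds a valid OAuth bearer token, API requests carry the token itself as the entire credential, so no password or MFA challenge stands between the token holder and the data. The instance URL, token value, and query below are hypothetical placeholders, and the request is only constructed, never sent.

```python
from urllib.parse import urlencode
from urllib.request import Request


def build_export_request(instance_url: str, token: str, soql: str) -> Request:
    """Build (but do not send) a Salesforce-style REST query request.

    Whoever holds a valid bearer token can issue this call; the
    server sees only the token, not the user or their MFA state.
    """
    url = f"{instance_url}/services/data/v58.0/query?{urlencode({'q': soql})}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})


# Hypothetical values for illustration only.
req = build_export_request(
    "https://example.my.salesforce.com",
    "stolen-oauth-token",
    "SELECT Name FROM Case",
)
print(req.get_full_url())
print(req.headers["Authorization"])
```

This is why revoking the compromised tokens, not resetting passwords, is the effective response to an integration breach of this kind.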
A hacker exploited Anthropic’s Claude chatbot to automate one of the most extensive AI-driven cybercrime operations yet recorded, targeting at least 17 companies across multiple sectors, the firm revealed.
According to Anthropic’s report, the attacker used Claude Code to identify vulnerable organisations, generate malicious software, and extract sensitive files, including defence data, financial records, and patients’ medical information.
The chatbot then sorted the stolen material, identified leverage for extortion, calculated realistic bitcoin demands, and even drafted ransom notes and extortion emails on behalf of the hacker.
Victims included a defence contractor, a financial institution, and healthcare providers. Extortion demands reportedly ranged from $75,000 to over $500,000, although it remains unclear how much was actually paid.
Anthropic declined to disclose the companies affected but confirmed new safeguards are in place. The firm warned that AI lowers the barrier to entry for sophisticated cybercrime, making such misuse increasingly likely.