New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Explainable AI predicts cardiovascular events in hospitalised COVID-19 patients

In an article published in BMC Infectious Diseases, researchers developed machine learning models (LightGBM) to predict cardiovascular complications (such as arrhythmia, acute heart failure and myocardial infarction) in 10,700 hospitalised COVID-19 patients across Brazil.

The study reports moderate discriminatory performance, with AUROC values of 0.752 and 0.760 for the two models, and high overall accuracy (~94.5%) due to the large majority of non-event cases.

However, due to the rarity of cardiovascular events (~5.3% of cases), the F1-scores for detecting the event class remained very low (5.2% and 4.2%, respectively), signalling that the models struggle to reliably identify the minority class despite efforts to rebalance the data.
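
The accuracy–F1 gap the study reports can be made concrete with a small sketch. The confusion-matrix counts below are hypothetical, chosen only to roughly echo the magnitudes in the article (~5% prevalence, ~93–95% accuracy, ~5% F1), not the study's actual data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1) from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# 1,000 hypothetical patients, 53 cardiovascular events (~5.3% prevalence):
# the model catches only 2 events while raising 20 false alarms.
acc, prec, rec, f1 = classification_metrics(tp=2, fp=20, fn=51, tn=927)
print(f"accuracy={acc:.1%}  f1={f1:.1%}")  # high accuracy, very low F1
```

Because 94–95% of patients have no event, a model can score high accuracy while missing almost every event, which is exactly why the event-class F1 is the more honest metric here.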

Using SHAP (SHapley Additive exPlanations) values, the researchers identified the most influential predictors: age, urea level, platelet count and the SatO₂/FiO₂ ratio (oxygen saturation to fraction of inspired oxygen).
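
The principle behind SHAP can be illustrated in miniature: each feature's attribution is its marginal contribution to the prediction, averaged over all coalitions of the other features. The linear "risk score", weights and patient values below are invented for the sketch; they are not the study's LightGBM model or data:

```python
import itertools
import math

def shapley_values(model, instance, baseline):
    """Exact Shapley values by brute-force coalition enumeration."""
    n = len(instance)

    def v(coalition):
        # Coalition members take the instance's value; the rest the baseline's.
        x = [instance[j] if j in coalition else baseline[j] for j in range(n)]
        return model(x)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in itertools.combinations(others, size):
                weight = (math.factorial(size)
                          * math.factorial(n - size - 1) / math.factorial(n))
                phi += weight * (v(set(subset) | {i}) - v(set(subset)))
        phis.append(phi)
    return phis

# Hypothetical linear score over [age, urea, platelets, SatO2/FiO2].
weights = [0.04, 0.02, -0.001, -0.5]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))

patient = [78, 60, 150, 2.0]   # invented patient values
average = [55, 35, 220, 4.0]   # invented population baseline
phis = shapley_values(model, patient, average)

# Efficiency property: attributions sum to f(patient) - f(baseline).
print(phis, sum(phis), model(patient) - model(average))
```

Real SHAP implementations use much faster tree-specific algorithms for models like LightGBM, but the attributions they produce satisfy the same efficiency property shown here.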

The authors emphasise that while the approach shows promise for resource-constrained settings and contributes to risk stratification, the limitations around class imbalance and generalisability remain significant obstacles for clinical use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI platforms approved for Surrey Schools classrooms

Surrey Schools has approved MagicSchool, SchoolAI, and TeachAid for classroom use, giving teachers access through the ONE portal with parental consent. The district says the tools are intended to support instruction while maintaining strong privacy and safety safeguards.

Officials say each platform passes rigorous reviews covering educational value, data protection, and technical security before approval. Teachers receive structured guidance on appropriate use, supported by professional development aligned with wider standards for responsible AI in education.

A two-year digital literacy programme helps staff explore online identity, digital habits, and safe technology use as AI becomes more common in lessons. Students use AI to generate ideas, check code, and analyse scientific or mathematical problems, reinforcing critical reasoning.

Educators stress that pupils are taught to question AI outputs rather than accept them at face value. Leaders argue this approach builds judgment and confidence, preparing young people to navigate automated systems with greater agency beyond school settings.

Families and teachers can access AI safety resources through the ONE platform, including videos, podcasts and the ‘Navigating an AI Future’ series. Materials include recordings from earlier workshops and parent sessions, supporting shared understanding of AI’s benefits and risks across the community.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI credentials grow as AWS launches practical training pathway

AWS is launching four solutions to help close the AI skills gap as demand rises and job requirements shift. The company positions the new tools as a comprehensive learning journey, offering structured pathways that progress from foundational knowledge to hands-on practice and formal validation.

AWS Skill Builder now hosts over 220 free AI courses, ranging from beginner introductions to advanced topics in generative and agentic AI. The platform enables learners to build skills at their own pace, with flexible study options that accommodate work schedules.

Practical experience anchors the new suite. The Meeting Simulator helps learners explain AI concepts to realistic personas and refine communication with instant feedback. Cohorts Studio offers team-based training through study groups, boot camps, and game-based challenges.

AWS is expanding its credential portfolio with the AWS Certified Generative AI Developer – Professional certification. The exam helps cloud practitioners demonstrate proficiency in foundation models, RAG architectures, and responsible deployment, supported by practice tasks and simulated environments.

Learners can validate hands-on capability through new microcredentials that require troubleshooting and implementation in real AWS settings. Combined credentials signal both conceptual understanding and task-ready skills, with Skill Builder’s more expansive library offering a clear starting point for career progression.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI could cut two-thirds of UK retail jobs

Automation and AI could drastically reduce jobs at one of the UK’s largest online retailers. Buy It Direct, which employs over 800 staff, predicts more than 500 positions may be lost within three years, as AI and robotics take over office and warehouse roles.

Chief executive Nick Glynne cited the rising national living wage and higher National Insurance contributions as factors accelerating the company’s shift towards automation.

The firm has already started outsourcing senior roles overseas, including accountants, managers and IT specialists, in response to higher domestic costs.

HM Treasury defended its policies, highlighting business rates reforms and international trade deals, alongside corporation tax capped at 25%.

Meanwhile, concern is growing across the UK about AI replacing jobs, with graduates in fields such as graphic design and computer science facing mounting competition from automated tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Woman marries an AI persona in Japan

A Japanese woman has married an AI persona she created on ChatGPT in a ceremony hosted by a company specialising in virtual weddings. Ms Kano, 32, customised the AI, named Klaus, with a personality and voice that offered comfort after a three-year engagement ended.

The couple exchanged vows using augmented reality glasses to project Klaus’s digital image, followed by a honeymoon in Okayama’s Korakuen Garden, where Ms Kano shared photos and messages with her AI partner. She described the relationship as a source of emotional support and companionship, helping her cope with loneliness and the inability to have children.

Reaction on social media was divided, with some mocking the ceremony and others praising it as a sign of evolving human relationships. Experts suggest AI companions may become more common as people seek reliable and affirming connections in an increasingly isolated society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ElevenLabs recreates celebrity voices for digital content

Matthew McConaughey and Michael Caine have licensed their voices to ElevenLabs, an AI company, joining a growing number of celebrities who are embracing generative AI. McConaughey will allow his newsletter to be translated into Spanish using his voice, while Caine’s voice is available on ElevenLabs’ text-to-audio app and Iconic Marketplace. Both stressed that the technology is intended to amplify storytelling rather than replace human performers.

ElevenLabs offers a range of synthetic voices, including historical figures and performers like Liza Minnelli and Maya Angelou, while claiming a ‘performer-first’ approach focused on consent and creative authenticity. The move comes amid debate in Hollywood, with unions such as SAG-AFTRA warning AI could undermine human actors, and some artists, including Guillermo del Toro and Hayao Miyazaki, publicly rejecting AI-generated content.

Despite concerns, entertainment companies are investing heavily in AI. Netflix utilises it to enhance recommendations and content, while directors and CEOs argue that it fosters creativity and job opportunities. Critics, however, caution that early investments could form a volatile bubble and highlight risks of misuse, such as AI-generated endorsements or propaganda using celebrity likenesses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Coding meets creativity in Minecraft Education’s AI tutorial

Minecraft Education is introducing an AI-powered twist on the classic first night challenge with a new Hour of AI world. Players explore a puzzle-driven environment that turns early survival stress into a guided coding and learning experience.

The activity drops players into a familiar biome and tasks them with building shelter before sunset. Instead of panicking at distant rustles or looming shadows, learners work with an AI agent designed to support planning and problem-solving.

Using MakeCode programming, players teach their agent to recognise patterns, classify resources, and coordinate helper bots. The agent mimics real AI behaviour by learning from examples and occasionally making mistakes that require human correction to improve its decisions.

As the agent becomes more capable, it shifts from a simple tool to a partner that automates key tasks and reduces first-night pressure. The aim is to let players develop creative strategies rather than resort to frantic survival instincts.

Designed for ages seven and up, the experience is free to access through Minecraft Education. It introduces core AI literacy concepts, blending gameplay with lessons on how AI systems learn, adapt, and occasionally fail, all wrapped in a familiar, family-friendly setting.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hidden freeze controls uncovered across major blockchains

Bybit’s Lazarus Security Lab says 16 major blockchains embed fund-freezing mechanisms. An additional 19 could adopt them with modest protocol changes, according to the study. The review covered 166 networks using an AI-assisted scan plus manual validation.

The report describes three freeze models: hardcoded blacklists, configuration-based freezes, and on-chain system contracts. Examples cited include BNB Chain, Aptos, Sui, VeChain and HECO in different roles. Analysts argue that emergency tools can curb exploits yet concentrate control.
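
The blacklist model in particular is simple to sketch. The toy ledger below is an invented illustration of the pattern, not any chain's actual implementation: a governance-controlled frozen set is consulted before every transfer.

```python
# Minimal sketch of a blacklist-style freeze mechanism (invented, not any
# specific chain's code): transfers are rejected if either party appears
# on a list that governance or validators can update.

class FrozenAccountError(Exception):
    pass

class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.frozen = set()  # updated by governance / validator decision

    def freeze(self, account):
        self.frozen.add(account)

    def unfreeze(self, account):
        self.frozen.discard(account)

    def transfer(self, sender, recipient, amount):
        if sender in self.frozen or recipient in self.frozen:
            raise FrozenAccountError(f"account frozen: {sender!r} -> {recipient!r}")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

ledger = Ledger({"alice": 100, "exploiter": 500})
ledger.transfer("alice", "exploiter", 10)   # allowed
ledger.freeze("exploiter")                  # emergency response
try:
    ledger.transfer("exploiter", "alice", 500)
except FrozenAccountError as err:
    print("blocked:", err)
```

The same check placed in a chain's configuration layer or a privileged system contract yields the other two models the report describes; the governance question is who controls the list.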

Case studies show freezes after high-profile attacks and losses. Sui validators moved to restore about $162 million after the Cetus hack, while BNB Chain halted transfers after a $570 million bridge exploit. VeChain blocked $6.6 million in 2019.

Debates now centre on transparency, governance and user rights when freezes occur. Critics warn of centralisation risks and opaque validator decisions, while exchanges urge disclosure of intervention powers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission launches Culture Compass to strengthen the EU identity

The European Commission unveiled the Culture Compass for Europe, a framework designed to place culture at the heart of EU policies.

The initiative aims to foster EU identity, celebrate diversity, and support excellence across the continent’s cultural and creative sectors.

The Compass addresses the challenges facing cultural industries, including restrictions on artistic expression, precarious working conditions for artists, unequal access to culture, and the transformative impact of AI.

It provides guidance along four key directions: upholding European values and cultural rights, empowering artists and professionals, enhancing competitiveness and social cohesion, and strengthening international cultural partnerships.

Several initiatives will support the Compass, including the EU Artists Charter for fair working conditions, a European Prize for Performing Arts, a Youth Cultural Ambassadors Network, a cultural data hub, and an AI strategy for the cultural sector.

The Commission will track progress through a new report on the State of Culture in the EU and seeks a Joint Declaration with the European Parliament and Council to reinforce political commitment.

Commission officials emphasised that the Culture Compass connects culture to Europe’s future, placing artists and creativity at the centre of policy and ensuring the sector contributes to social, economic, and international engagement.

Culture is portrayed not as a side story, but as the story of the EU itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!