Cybercriminals abandon Kido extortion attempt amid public backlash

Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and say they have deleted it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, which Kido did not pay.

Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.

The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.

Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC that its own infrastructure was not compromised.

Kido confirmed the incident and said it is working with external specialists and the authorities. With no ransom paid and the extortion attempt abandoned, the hackers appear to have lost money on the operation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised a controversial historical figure and adopted his surname in its responses, prompting criticism of its safeguards. Such incidents have raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.


Portugal to bring AI into bureaucracy to save time

The Portuguese government is preparing to bring AI into public administration to accelerate licensing procedures and cut delays, according to State Reform Minister Gonçalo Matias.

Speaking at a World Tourism Day conference in Tróia, he said AI can play a key role in streamlining decision-making while maintaining human oversight at the final stage.

Matias explained that the reform will reallocate staff from routine tasks to work of higher value, while introducing a system of prior notifications.

Under the plan, citizens and businesses in Portugal will be allowed to begin most activities without a licence, with tacit approval granted if the administration fails to respond within set deadlines.
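The tacit-approval rule can be illustrated with a minimal sketch. The function name, deadline length, and sample dates below are hypothetical assumptions for illustration, not details from the Portuguese plan:

```python
from datetime import date, timedelta

def tacit_approval(submitted: date, today: date,
                   deadline_days: int, responded: bool) -> bool:
    """Approval is granted automatically when the administration
    has not responded within the statutory deadline."""
    if responded:
        return False  # an explicit decision was issued instead
    return today > submitted + timedelta(days=deadline_days)

# A request filed on 1 March with a hypothetical 30-day deadline
# and no response would be tacitly approved by 1 April.
print(tacit_approval(date(2025, 3, 1), date(2025, 4, 1), 30, False))
```

The key design point is that silence, rather than an explicit decision, becomes the default path to approval once the deadline lapses.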

The minister said the reforms will be tied to strict accountability measures, emphasising a ‘trust contract’ between citizens, businesses and the public administration. He argued the initiative will not only speed up processes but also foster greater efficiency and responsibility across government services.


AI tool detects tiny brain lesions, offering hope of epilepsy cure

Australian researchers have developed an AI tool that can identify tiny brain lesions in children with epilepsy, a breakthrough they say could enable faster diagnoses and pave the way for potential cures.

Scientists from the Murdoch Children’s Research Institute and The Royal Children’s Hospital designed the ‘AI epilepsy detective’ to detect lesions as small as a blueberry in up to 94 percent of cases. These cortical dysplasias are often invisible to doctors reviewing MRI scans, with around 80 percent of cases previously missed during human examination.

In a study published in Epilepsia, the team tested the tool on 71 children and 23 adults with focal epilepsy. Seventeen of the children were in the test group; 12 underwent surgery after the AI identified their lesions, and 11 are now seizure-free.

Lead researcher Dr Emma Macdonald-Laurs said earlier lesion identification can speed surgery referrals and improve outcomes. ‘Identifying the cause early lets us tailor treatment options and helps neurosurgeons plan and navigate surgery,’ she explained. ‘More accurate imaging allows neurosurgeons to develop a safer surgical roadmap and avoid removing healthy brain tissue.’

Brain lesions are one of the most common causes of drug-resistant seizures, yet they can be challenging to detect using conventional imaging techniques. The researchers now hope to expand the use of their AI tool across paediatric hospitals in Australia with additional funding.

One child, five-year-old Royal, experienced frequent seizures before doctors using the tool identified and removed the lesion responsible. His mother said he is seizure-free and has returned to his ‘calm, friendly, and patient’ self.


How OpenAI designs Sora’s recommendation feed for creativity and safety

OpenAI outlines the core principles behind Sora’s content feed in its Sora Feed Philosophy document. The company states that the feed is designed to spark creativity, foster connections, and maintain a safe user environment.

To achieve these goals, OpenAI says it prioritises creativity over passive consumption. Ranking is tuned not simply for engagement but to encourage active participation, and users can influence what they see via steerable ranking controls.

Another guiding principle is putting users in control. For instance, parental settings let caretakers turn off feed personalisation or continuous scroll for teen accounts.

OpenAI also emphasises connection. The feed is biased toward content from people you know or connect with, rather than purely global content, so the experience feels more communal.

In terms of safety and expression, OpenAI embeds guardrails at the content creation level. Because every post is generated within Sora, the system can block disallowed content before it appears.

The feed layers additional filtering, removing or deprioritising harmful or unsafe material (e.g. violent, sexual, hate, self-harm content). At the same time, the design aims not to over-censor, allowing space for genuine expression and experimentation.

On how the feed works, OpenAI says it considers signals like user activity (likes, comments, remixes), location data, ChatGPT history (unless turned off), engagement metrics, and author-level data (e.g. follower counts). Safety signals also weigh in to suppress or filter content flagged as inappropriate.
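OpenAI has not published its ranking formula, but a feed built on signals like these is often sketched as a weighted score with a safety gate. The weights, field names, and multiplier below are illustrative assumptions, not OpenAI’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    comments: int
    remixes: int
    author_followers: int   # author-level signal
    from_connection: bool   # author is someone the viewer knows
    safety_flagged: bool    # flagged by safety classifiers

def rank_score(post: Post) -> float:
    """Illustrative weighted score; the weights are made up."""
    if post.safety_flagged:
        return 0.0  # safety gate: suppress flagged content entirely
    score = (1.0 * post.likes
             + 2.0 * post.comments
             + 3.0 * post.remixes)          # remixes signal active participation
    if post.from_connection:
        score *= 1.5                        # bias toward people you know
    score += 0.001 * post.author_followers  # small author-level boost
    return score

posts = [
    Post(likes=10, comments=2, remixes=0, author_followers=100,
         from_connection=False, safety_flagged=False),
    Post(likes=5, comments=1, remixes=3, author_followers=50,
         from_connection=True, safety_flagged=False),
]
feed = sorted(posts, key=rank_score, reverse=True)
```

In this toy version, weighting remixes above likes and boosting posts from connections mirrors the stated preference for active participation and communal content over raw engagement.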

OpenAI describes the feed as a ‘living, breathing’ system. It expects to update and refine algorithms based on user behaviour and feedback while staying aligned with its founding principles.


Liverpool scientists develop low-cost AI blood test for Alzheimer’s

Scientists at the University of Liverpool have developed a low-cost blood test that could enable earlier detection of Alzheimer’s disease. The handheld devices, powered by AI and equipped with polymer-based biosensors, deliver results with accuracy comparable to hospital tests at a fraction of the cost.

Alzheimer’s affects more than 55 million people worldwide and remains the most common cause of dementia. Existing hospital tests are accurate but expensive and inaccessible in many clinics, delaying diagnosis and treatment, particularly in low- and middle-income countries.

One study used plastic antibodies on a porous gold surface to detect p-tau181, matching high-end laboratory methods. Another built a circuit-board device with a chemical coating that distinguished healthy samples from patient samples at lower cost.

The platform is linked to a low-cost reader and a web app that uses AI for instant analysis. Lead researcher Dr Sanjiv Sharma said the aim was to make Alzheimer’s testing ‘as accessible as checking blood pressure or blood sugar.’

The World Health Organisation has called for decentralised brain disease diagnostics. Researchers say these technologies bring that vision closer to reality, offering hope for earlier treatment and better care.


Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.


Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this autumn, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.


How AI is transforming healthcare and patient management

AI is moving from theory to practice in healthcare. Hospitals and clinics are adopting AI to improve diagnostics, automate routine tasks, support overworked staff, and cut costs. A recent GoodFirms survey shows strong confidence that AI will become essential to patient care and health management.

Survey findings reveal that nearly all respondents believe AI will transform healthcare. Robotic surgery, predictive analytics, and diagnostic imaging are gaining momentum, while digital consultations and wearable monitors are expanding patient access.

AI-driven tools are also helping reduce human errors, improve decision-making, and support clinicians with real-time insights.

Challenges remain, particularly around data privacy, transparency, and the risk of over-reliance on technology. Concerns about misdiagnosis, lack of human empathy, and job displacement highlight the need for responsible implementation.

Even so, the direction is clear: AI is set to be a defining force in healthcare’s future, enabling more efficient, accurate, and equitable systems worldwide.


Four new Echo devices debut with Amazon’s next-gen Alexa+

Amazon has unveiled four new Echo devices powered by Alexa+, its next-generation AI assistant. The lineup includes Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11, all designed for personalised, ambient AI-driven experiences. Buyers will automatically gain access to Alexa+.

At the core are the new AZ3 and AZ3 Pro chips, whose AI accelerators power advanced models for speech, vision, and ambient interaction. The Echo Dot Max ($99.99) has a two-speaker system with triple the bass, while the Echo Studio ($219.99) adds spatial audio and Dolby Atmos.

The Echo Show 8 and Echo Show 11 introduce HD displays, enhanced audio, and intelligent sensing capabilities. Both feature 13-megapixel cameras that adapt to lighting and personalise interactions. The Echo Show 8 will cost $179.99, while the Echo Show 11 is priced at $219.99.

Beyond hardware, Alexa+ brings deeper conversational skills and more intelligent daily support, spanning home organisation, entertainment, health, wellness, and shopping. Amazon also introduced the Alexa+ Store, a platform for discovering third-party services and integrations.

The Echo Dot Max and Echo Studio will launch on October 29, while the Echo Show 8 and Echo Show 11 arrive on November 12. Amazon positions the new portfolio as a leap toward making ambient AI experiences central to everyday living.
