AI supports doctors in spotting broken bones

Hospitals in Lincolnshire, UK, are introducing AI to assist doctors in identifying fractures and dislocations, with the aim of speeding up treatment and improving patient care. The Northern Lincolnshire and Goole NHS Foundation Trust will launch a two-year NHS England pilot later this month.

AI software will provide near-instant annotated X-rays alongside standard scans, highlighting potential issues for clinicians to review. Patients under the age of two, as well as those undergoing chest, spine, skull, facial or soft tissue imaging, will not be included in the pilot.

Consultants emphasise that AI is an additional tool, not a replacement, and clinicians will retain the final say on diagnosis and treatment. Early trials in northern Europe suggest the technology can help meet rising demand, and the trust is monitoring its impact closely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI Scientist Kosmos links every conclusion to code and citations

OpenAI chief Sam Altman has praised Future House’s new AI Scientist, Kosmos, calling it an exciting step toward automated discovery. The platform upgrades the earlier Robin system and is now operated by Edison Scientific, which plans a commercial tier alongside free access for academics.

Kosmos addresses a key limitation in traditional models: the inability to track long reasoning chains while processing scientific literature at scale. It uses structured world models to stay focused on a single research goal across tens of millions of tokens and hundreds of agent runs.
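Future House has not published Kosmos's internal design, but the stated idea can be sketched: a compact, structured summary of the research state persists outside any single context window, and each bounded agent run reads sources and writes back into it, so the goal survives tens of millions of tokens. A minimal Python illustration, with all names hypothetical rather than Kosmos's actual API:

```python
# Hypothetical sketch of a "structured world model": a persistent, typed
# record of the research state that outlives any single agent run.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    goal: str                                           # the single research question
    findings: list[str] = field(default_factory=list)   # distilled claims so far
    open_questions: list[str] = field(default_factory=list)

def agent_run(model: WorldModel, source: str) -> WorldModel:
    # A real system would call an LLM here; this stub just records the source.
    model.findings.append(f"evidence from {source} bearing on: {model.goal}")
    return model

model = WorldModel(goal="Which genes modulate ageing in model organisms?")
for paper in ["paper_001.pdf", "paper_002.pdf"]:  # ~1,500 papers in a real run
    model = agent_run(model, paper)
print(len(model.findings), "findings accumulated against one fixed goal")
```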

A single Kosmos run can analyse around 1,500 papers and more than 40,000 lines of code, with early users estimating that this replaces roughly six months of human work. Internal tests found that almost 80 per cent of its conclusions were correct.

Future House reported seven discoveries made during testing, including three that matched known results and four new hypotheses spanning genetics, ageing, and disease. Edison says several are now being validated in wet lab studies, reinforcing the system’s scientific utility.

Kosmos emphasises traceability, linking every conclusion to specific code or source passages to avoid black-box outputs. It is priced at $200 per run, with early pricing guarantees and free credits for academics, though multiple runs may still be required for complex questions.
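As an illustration of what such traceability could look like in practice, here is a minimal sketch of a conclusion record that cannot exist without pointers to code or source passages; the schema and all values are hypothetical, not Edison Scientific's actual format:

```python
# Illustrative only: storing every conclusion alongside the exact code and
# source passages that produced it, so no claim is a black-box output.
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    kind: str       # "code" or "citation"
    reference: str  # file and lines for code; DOI and passage for papers

@dataclass(frozen=True)
class Conclusion:
    claim: str
    evidence: tuple[Evidence, ...]

conclusion = Conclusion(
    claim="Gene X expression correlates with lifespan in cohort Y",  # placeholder
    evidence=(
        Evidence(kind="code", reference="analysis/correlate.py:42-58"),
        Evidence(kind="citation", reference="doi:10.0000/example, section 3.2"),
    ),
)
assert conclusion.evidence, "every claim must point at code or a source passage"
```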

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital accessibility drives revenue as AI adoption rises

Research highlights that digital accessibility is now viewed as a driver of business growth rather than a compliance requirement.

A survey of over 1,600 professionals across the US, UK, and Europe found that 75% of organisations link accessibility improvements to revenue gains, while 91% report enhanced user experience and 88% note brand reputation benefits.

AI is playing an increasingly central role in accessibility initiatives. More than 80% of organisations now use AI tools to support accessibility, particularly in mature programmes with formal policies, accountability structures, and dedicated budgets.

Leaders in these organisations view AI as a force multiplier, complementing human expertise rather than replacing it. Despite progress, many organisations still implement accessibility late in digital development processes. Only around 28% address accessibility during planning, and 27% during design stages.

Leadership support and effective training emerged as key success factors. Organisations with engaged executives and strong accessibility training were far more likely to achieve revenue and operational benefits while reducing perceived legal risk.

As AI adoption accelerates and regulatory frameworks expand, companies treating accessibility strategically are better positioned to gain competitive advantage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Eurofiber France reportedly hit by data breach

Eurofiber France has suffered a data breach affecting its internal ticket management system and ATE customer portal, reportedly discovered on 13 November. The incident allegedly involved unauthorised access via a software vulnerability, with the full extent still unclear.

Sources indicate that approximately 3,600 customers could be affected, including major French companies and public institutions. Reports suggest that some of the allegedly stolen data, ranging from documents to cloud configurations, may have appeared on the dark web for sale.

Eurofiber has emphasised that Dutch operations are not affected.

The company moved quickly to secure affected systems, increasing monitoring and collaborating with cybersecurity specialists to investigate the incident. The French privacy regulator, CNIL, has been informed, and Eurofiber states that it will continue to update customers as the investigation progresses.

Founded in 2000, Eurofiber provides fibre optic infrastructure across the Netherlands, Belgium, France, and Germany. Primarily owned by Antin Infrastructure Partners and partially by Dutch pension fund PGGM, the company remains operational while assessing the impact of the breach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to actively monitor their children’s online activity. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How neurotech is turning science fiction into lived reality

Some experts now say neurotechnology could be as revolutionary as AI, as devices advance rapidly from sci-fi tropes into practical reality. Researchers can already translate thoughts into words through brain implants, and spinal implants are helping people with paralysis regain movement.

King’s College London neuroscientist Anne Vanhoestenberghe told AFP, ‘People do not realise how much we’re already living in science fiction.’

Her lab works on implants for both brain and spinal systems, not just restoring function but also reimagining communication.

At the same time, the technology carries profound ethical risks. There is growing unease about privacy, data ownership and the potential misuse of neural data.

Some even warn that our ‘innermost thoughts are under threat.’ Institutions like UNESCO are already moving to establish global neurotech governance frameworks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category requiring disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.
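For developers, the practical pattern the rule implies can be sketched simply: any hand-off of personal data to an external AI service is gated behind explicit, recorded user consent. A minimal Python illustration, with all names hypothetical; Apple's guideline describes the requirement, not an API:

```python
# Hypothetical consent gate: personal data may reach an external AI provider
# only after the user has been told and has agreed.
def share_with_third_party_ai(user_consented: bool, personal_data: dict):
    # Without disclosure and recorded permission, data must not leave the app.
    if not user_consented:
        return None
    # Only after explicit consent may the app forward data to the AI provider.
    return {"payload": personal_data, "disclosed_recipient": "third-party AI"}

print(share_with_third_party_ai(user_consented=False, personal_data={"name": "A"}))
```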

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ohanian predicts AI-driven jobs growth despite economic jitters

Reddit co-founder Alexis Ohanian says AI remains a durable long-term trend despite growing investor concern that the sector has inflated a market bubble. He argues the technology is now too deeply embedded in workflows to be dismissed as hype.

Tech stocks fell sharply on Thursday as uncertainty over US interest rate cuts prompted investors to seek safer assets. The Nasdaq Composite slid more than two percent, and the AI-driven Magnificent Seven posted broad losses, with Nvidia among the hardest-hit names.

Ohanian says valuations are not his focus but insists the underlying innovations are meaningful, pointing to faster software development as an example of measurable progress. He maintains confidence in technology trends even amid short-term market swings.

He also believes AI will create more roles than it eliminates, despite estimates that widespread adoption could disrupt up to seven percent of the US workforce. He argues that major technological shifts consistently open new career paths.

Ohanian notes that jobs once unimaginable, such as full-time online content creation, are now mainstream aspirations. He expects AI-led change to follow a similar pattern, delivering overall gains while acknowledging that the transition may be uneven.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Google over potential Digital Markets Act breach

The European Commission has opened an investigation into whether Google may be breaching the Digital Markets Act by unfairly demoting news publishers in search results.

The inquiry centres on Google’s ‘site reputation abuse policy’, which appears to lower rankings for publishers that host content from commercial partners, even when those partnerships support legitimate ways of monetising online journalism.

The Commission is examining whether Alphabet’s approach restricts publishers from conducting business, innovating, and cooperating with third-party content providers. Officials highlighted concerns that such demotions may undermine revenue at a difficult moment for the media sector.

These proceedings do not imply a final decision; instead, they allow the EU to gather evidence and assess Google’s practices in detail.

If the Commission finds evidence of non-compliance, it will present preliminary findings and request corrective measures. The investigation is expected to conclude within 12 months.

Under the DMA, infringements can lead to fines of up to ten percent of a company’s worldwide turnover, rising to twenty percent for repeated violations, alongside possible structural remedies.

Senior Commissioners stressed that gatekeepers must offer fair and non-discriminatory access to their platforms. They argued that protecting publishers’ ability to reach audiences supports media pluralism, innovation, and democratic resilience.

Google Search, designated as a core platform service under the DMA, has been required to comply fully with the regulation since March 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.
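OpenAI has not described its de-identification method; as a generic stand-in for the idea, direct identifiers can be stripped from chat text before storage. A rough sketch, not the company's actual pipeline:

```python
# Generic de-identification stand-in: replace direct identifiers with tags.
import re

def deidentify(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)    # phone-like numbers
    return text

print(deidentify("Reach me at jane.doe@example.com or +1 (555) 010-2000."))
```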

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.
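Client-side encryption, in the sense the roadmap suggests, means the key never leaves the user's device, so the server holds only ciphertext it cannot read. A minimal sketch using the widely available cryptography package, offered as an illustration of the concept rather than OpenAI's implementation:

```python
# Minimal client-side encryption sketch: the key stays on the device,
# so the server only ever sees ciphertext.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()  # generated and kept on the client only
f = Fernet(device_key)

ciphertext = f.encrypt(b"a private chat message")  # all the server would store
assert f.decrypt(ciphertext) == b"a private chat message"
print("server-side view:", ciphertext[:16], "...")
```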

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!