MIT develops compact ultrasound system for frequent breast cancer screening

Massachusetts Institute of Technology researchers have developed a compact ultrasound system designed to make breast cancer screening more accessible and frequent, particularly for people at higher risk.

The portable device could be used in doctors’ offices or at home, helping detect tumours earlier than current screening schedules allow.

The system pairs a small ultrasound probe with a lightweight processing unit to deliver real-time 3D images via a laptop. Researchers say its portability and low power use could improve access in rural areas where traditional ultrasound machines are impractical.

Frequent monitoring is critical, as aggressive interval cancers can develop between routine mammograms and account for up to 30% of breast cancer cases.

By enabling regular ultrasound scans without specialised technicians or bulky equipment, the technology could increase early detection, when survival outcomes are significantly better.

Initial testing successfully produced clear, gap-free 3D images of breast tissue, and larger clinical trials are now underway at partner hospitals. The team is developing a smaller version that could connect to a smartphone and be integrated into a wearable device for home use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. Much of the platform's rapid growth, with a claimed 1.5 million users, was artificial: a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation, not the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why smaller AI models may be the smarter choice

Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog post ‘Do we really need frontier AI for everyday work?’. While frontier AI systems dominate headlines with ever-growing capabilities, their real-world value for routine professional tasks is often limited. For many people, much of daily work remains simple, repetitive, and predictable.

Kurbalija points out that large parts of professional life, from administration and law to healthcare and corporate management, operate within narrow linguistic and cognitive boundaries. Daily communication relies on a small working vocabulary, and most decision-making follows familiar mental patterns.

In this context, highly complex AI models are often unnecessary. Smaller, specialised systems can handle these tasks more efficiently, at lower cost and with fewer risks.

Using frontier AI for routine work, the author suggests, is like using a sledgehammer to crack a nut. These large models are designed to handle almost anything, but that breadth comes with higher costs, heavier governance requirements, and stronger dependence on major technology platforms.

In contrast, small language models tailored to specific tasks or organisations can be faster, cheaper, and easier to control, while still delivering strong results.

Kurbalija compares this to professional expertise itself. Most jobs never required having the Encyclopaedia Britannica open on the desk. Real expertise lives in procedures, institutions, and communities, not in massive collections of general knowledge.

Similarly, the most useful AI tools are often those designed to draft standard documents, summarise meetings, classify requests, or answer questions based on a defined body of organisational knowledge.

Diplomacy, an area Kurbalija knows well, illustrates both the strengths and limits of AI. Many diplomatic tasks are highly ritualised and can be automated using rules-based systems or smaller models. But core diplomatic skills, such as negotiation, persuasion, empathy, and trust-building, remain deeply human and resistant to automation. The lesson, he argues, is to automate routines while recognising where AI should stop.

The broader paradox is that large AI platforms may benefit more from users than users benefit from frontier AI. By sitting at the centre of workflows, these platforms collect valuable data and organisational knowledge, even when their advanced capabilities are not truly needed.

As Kurbalija concludes, a more common-sense approach would prioritise smaller, specialised models for everyday work, reserving frontier AI for genuinely complex tasks, and moving beyond the assumption that bigger AI is always better.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education and rights central to UN AI strategy

UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.

UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.

Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.

Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.

Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Roblox faces new Dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces that encourage prolonged play.

The concerns intensify pressure on the EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for very large online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eutelsat blocked from selling infrastructure as France tightens control

France has blocked the planned divestment of Eutelsat’s ground-station infrastructure, arguing that control over satellite facilities remains essential for national sovereignty.

The aborted sale to EQT Infrastructure VI had been announced as a significant transaction, yet the company revealed that the required conditions had not been met.

Officials in France say that the infrastructure forms part of a strategic system used for both civilian and military purposes.

The finance minister described Eutelsat as Europe’s only genuine competitor to Starlink, further strengthening the view that France must retain authority over ground-station operations rather than allow external ownership.

Eutelsat stressed that the proposed transfer concerned only passive facilities such as buildings and site management rather than active control systems. Even so, French authorities believe that end-to-end stewardship of satellite ground networks is essential to safeguard operational independence.

The company says the failed sale will not hinder its capital plans, including the deployment of hundreds of replacement satellites for the OneWeb constellation.

Investors had not commented by publication time, yet the decision highlights France’s growing assertiveness in satellite governance and broader European debates on technological autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea prepares for classroom phone ban amid disputes over rules

South Korea is preparing to enforce a nationwide ban on mobile phone use in classrooms, yet schools remain divided over how strictly the new rules should be applied.

The ban takes effect in March under the revised education law, and officials have already released guidance enabling principals to warn students and restrict smart devices during lessons.

These reforms will allow devices only for limited educational purposes, emergencies or support for pupils with disabilities.

Schools may also collect and store phones under their own rules, giving administrators the authority to prohibit possession rather than merely restricting use. The ministry has ordered every principal to establish formal regulations by late August, leaving interim decisions to each school leader.

Educators in South Korea warn that inconsistent approaches are creating uncertainty. Some schools intend to collect phones in bulk, others will require students to keep devices switched off, while several remain unsure how far to go in tightening their policies.

The Korean Federation of Teachers’ Associations argues that such differences will trigger complaints from parents and pupils unless the ministry provides a unified national standard.

Surveys show wide variation in current practice, with some schools banning possession during lessons while others allow use during breaks.

Many teachers say their institutions are ready for stricter rules, yet a substantial minority report inadequate preparation. The debate highlights the difficulty of imposing uniform digital discipline across a diverse education system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK survey shows fewer crypto investors but larger holdings

Financial Conduct Authority research shows UK crypto ownership has declined even as Bitcoin prices surged. Adult participation fell from 12% in 2024 to 8% in the latest survey, equal to about 4.6 million people, although levels remain double those recorded in 2021.

A closer look suggests consolidation rather than collapse. Investors who stayed in the market are committing more capital, with higher-value portfolios becoming more common as retail activity gives way to institutional demand and Bitcoin ETF inflows.

Participants’ knowledge levels are improving. The regulator notes that active investors are more risk-aware and better informed, with ownership skewed towards men aged 18–34 from higher-income demographics and ethnic minority backgrounds.

Bitcoin retains the strongest recognition at 79%, while 57% of current investors hold BTC, a gradual year-on-year increase. Ether ownership stands at 43%, Dogecoin appears in 20% of portfolios, and awareness of newer altcoins remains limited, according to CoinMarketCap.

Stablecoin recognition has risen to 53%, reflecting broader discussion around payments and regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Engineers at Anthropic rely on AI for most software creation

Anthropic engineers are increasingly relying on AI to write the code behind the company’s products, with senior staff now delegating nearly all programming tasks to AI systems.

Claude Code lead Boris Cherny said he has not written any software by hand for more than two months, with all recent updates generated by Anthropic’s own models. Similar practices are reportedly spreading across internal teams.

Company leadership has previously suggested AI could soon handle most software engineering work from start to finish, marking a shift in how digital products are built and maintained.

The adoption of AI coding tools has accelerated across the technology sector, with firms citing major productivity gains and faster development cycles as automation expands.

Industry observers note the transition may reshape hiring practices and entry-level engineering roles, as AI increasingly performs core implementation tasks previously handled by human developers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!