MIT develops AI model to speed up materials synthesis

Researchers at the Massachusetts Institute of Technology have developed a generative AI model to guide scientists through the complex process of materials synthesis, a significant bottleneck in materials discovery.

DiffSyn uses diffusion-based AI to suggest multiple synthesis routes for a material, factoring in temperature, reaction time, and precursor ratios. Unlike earlier tools tied to single recipes, DiffSyn reflects the laboratory reality in which multiple pathways can produce the same material.

The system achieved state-of-the-art accuracy on zeolites, a challenging material class used in catalysis and chemical processing. Using DiffSyn’s recommendations, the team synthesised a new zeolite with improved thermal stability, confirming the model’s practical value.

The researchers believe the approach could be extended beyond zeolites to other complex materials, eventually integrating with automated experiments to dramatically shorten the path from theoretical design to real-world application.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft expands software security lifecycle for AI-driven platforms

AI is widening the cyber risk landscape and forcing security teams to rethink established safeguards. Microsoft has updated its Secure Development Lifecycle to address AI-specific threats across design, deployment and monitoring.

The updated approach reflects how AI can blur trust boundaries by combining data, tools, APIs and agents in one workflow. New attack paths include prompts, plugins, retrieved content and model updates, raising risks such as prompt injection and data poisoning.

Microsoft says policy alone cannot manage non-deterministic systems and fast iteration cycles. Guidance now centres on practical engineering patterns, tight feedback loops and cross-team collaboration between research, governance and development.

Its SDL for AI is organised around six pillars: threat research, adaptive policy, shared standards, workforce enablement, cross-functional collaboration and continuous improvement. Microsoft says the aim is to embed security into every stage of AI development.

The company also highlights new safeguards, including AI-specific threat modelling, observability, memory protections and stronger identity controls for agent workflows. Microsoft says more detailed guidance will follow in the coming months.


Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator opened a formal inquiry to assess whether X took adequate steps to curb the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the UK Information Commissioner’s Office (ICO) and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement actions take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.


AI becomes optional in Firefox 148 as Mozilla launches new control system

Mozilla has confirmed that Firefox will include a built-in ‘AI kill switch’ from version 148, allowing users to disable all AI features across the browser. The update follows earlier commitments that AI tools would remain optional as Firefox evolves into what the company describes as an AI-enabled browser.

The new controls will appear in the desktop release scheduled to begin rolling out on 24 February. A dedicated AI Controls section will allow users to turn off every AI feature at once or manage each tool individually, reflecting Mozilla’s aim to balance innovation with user choice.

At launch, Firefox 148 will introduce AI-powered translations, automatic alt text for images in PDFs, tab grouping suggestions, link previews, and an optional sidebar chatbot supporting services such as ChatGPT, Claude, Copilot, Gemini, and Le Chat Mistral.

All of these tools can be disabled through a single ‘Block AI enhancements’ toggle, which removes prompts and prevents new AI features from appearing. Mozilla has said preferences will remain in place across updates, with users able to adjust settings at any time.

The organisation said the approach is intended to give people full control over how AI appears in their browsing experience, while continuing development for those who choose to use it. Early access to the controls will also be available through Firefox Nightly.


EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU as of October, far above the 45-million threshold that triggers the DSA’s more onerous obligations in place of lighter oversight.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and large search engines.

ChatGPT’s user numbers largely stem from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.


Porto summit highlights growing risks to undersea internet cables

The Second International Submarine Cable Resilience Summit opened this week in Porto, Portugal, bringing together senior officials from governments, international organisations, and industry to address the growing risks facing the underwater cables that carry most of the world’s internet traffic. The event highlighted how submarine cables have become critical infrastructure for the global digital economy, especially as societies grow more dependent on cloud services, AI, and cross-border data flows.

Opening the summit, Ambassador João Mira Gomes, Permanent Representative of Portugal to the United Nations Office at Geneva, explained that Portugal’s infrastructure minister was absent due to ongoing storm recovery efforts, underlining the real-world pressures facing critical infrastructure today. He recalled Portugal’s long history in global connectivity, noting that one of the earliest submarine cables linking Portugal and the United Kingdom was built to support the port wine trade, a reminder that communication networks and economic exchange have long evolved together.

Professor Sandra Maximiano, co-chair of the International Advisory Body for Submarine Cable Resilience, placed the discussions in a broader historical context. She pointed to the creation of the International Telecommunication Union in 1865 as the first global organisation dedicated to managing international communications, stressing that cooperation on submarine cables has always been a ‘positive-sum game’ in which all countries benefit from shared rules and coordination.

Maximiano also highlighted Portugal’s strategic role as a cable hub, citing its extensive coastline, large exclusive economic zone, and favourable landing conditions connecting Europe, the Americas, Africa, and Asia. She outlined key projects such as the Atlantic CAM system linking mainland Portugal with Madeira and the Azores using a resilient ring design and smart cable technology that combines telecommunications with seismic and oceanographic monitoring. Existing and planned systems, she said, are not just data pipelines but foundations for innovation, scientific cooperation, and strategic autonomy.

A major outcome of the summit was the adoption of the Porto Declaration on Submarine Cable Resilience, developed with input from more than 150 experts worldwide. The declaration sets out practical guidance to improve permitting and repair processes, strengthen legal frameworks, promote route diversity and risk mitigation, and enhance capacity-building, with special attention to the needs of small island states and developing countries.

ITU Secretary-General Doreen Bogdan-Martin framed these efforts within a rapidly changing digital landscape, announcing that 2026 will be designated the ‘year of resilience’. She warned that the scale of global digital dependence has transformed the impact of cable disruptions, as even minor outages can ripple across AI systems, cloud platforms, and autonomous services. Resilience, she argued, now depends as much on international coordination and preparedness as on cable design itself.

From the European Union perspective, European Commission Vice-President Henna Virkkunen outlined upcoming EU measures, including a submarine cable security toolbox and targeted funding through the Connecting Europe Facility. She stressed the importance of regional coordination and praised Portugal’s active role in aligning EU initiatives with global efforts led by the ITU.

Closing the opening session, Ambassador Gomes linked cable resilience to broader goals of development and peace, warning that digital divides fuel inequality and instability, and reaffirming Portugal’s commitment to international cooperation and capacity-building as the summit moves the global conversation from policy to action.


Austria and Poland eye social media limits for minors

Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.

Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that curbing children’s early exposure to screens is a matter for both parents and platform enforcement.

Austria and Poland form part of a broader European trend, as France moves to bar under-15s from social media and the UK debates similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.

Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.


Biodegradable sensors developed to cut e-waste and monitor air pollution

Researchers at Incheon National University have developed biodegradable gas sensors designed to reduce electronic waste while improving air quality monitoring. The technology targets nitrogen dioxide, a pollutant linked to fossil fuel combustion and respiratory diseases.

The sensors are built using organic field-effect transistors (OFETs), a lightweight and low-energy alternative suited to portable environmental monitoring devices. OFET-based systems are also easier to manufacture than traditional silicon electronics.

To create the sensing layer, the research team blended an organic semiconductor polymer, P3HT, with a biodegradable material, PBS. Each polymer was prepared separately in chloroform before being combined into a uniform solution.

Performance varied with solvent composition, with mixtures of chloroform and dichlorobenzene yielding the most consistent and sensitive sensor structures. High PBS concentrations remained effective without compromising detection accuracy.

Project lead Professor Park said the approach balances sustainability and performance, particularly for use in natural environments. The biodegradable design could contribute to long-term pollution monitoring and waste reduction.


AI skills are gaining momentum among college students

AI tools are already widely used in higher education, with more than half of surveyed students required to use them in coursework and nearly two-thirds using them for assignments. However, the survey suggests that students are largely learning to use AI on their own, relying mainly on informal experimentation rather than structured university-led training.

At the same time, awareness and participation in formal AI education remain limited. Only 31% of students said they were aware of AI-related courses offered by their college or university, and fewer than 20% had taken one, highlighting a gap between widespread use and institutional teaching.

Despite this, many students recognise AI’s growing importance for their careers. Around half believe proficiency with AI tools will be important in the future, reflecting expectations that AI skills will be increasingly valued in the workplace.

Overall, the findings point to an opportunity for universities to strengthen AI education by integrating practical, advanced, and ethical AI training into curricula, helping students move beyond basic use toward workplace-ready skills.


Chinese AI firms offer cash rewards to boost chatbot adoption

Technology firms in China are rolling out large cash incentive campaigns to attract users to their AI chatbots ahead of the expected launch of new AI models later this month.

Alibaba Group has earmarked CNY 3 billion for users of its Qwen AI app, with the promotion beginning on 6 February to coincide with Lunar New Year celebrations.

Tencent Holdings and Baidu have announced similar offers, together committing around CNY 1.5 billion in cash rewards and consumer electronics, including smartphones and televisions.

To qualify for prizes, users must register on the platforms and interact with the chatbots during the promotional period by asking questions or completing everyday planning tasks.

The incentives reflect intensifying competition with global developers such as Google and OpenAI, while also strengthening efforts to position China-based firms as potential local AI partners for Apple in the Chinese market.
