Firefox adds VPN and AI tools

Mozilla is preparing a major update to its Firefox browser, introducing a built-in VPN and new AI-powered tools. The company says the changes aim to strengthen privacy and give users greater control over browsing.

The integrated VPN will hide the user’s location and IP address while offering a limited monthly data allowance in selected regions. The feature replaces a previously separate paid service and will be built into the browser.

New AI tools will support tasks such as summarising content and comparing products without leaving a web page. Additional features include split-screen browsing and tools to organise notes across tabs.

The update also introduces redesigned settings and a refreshed interface to improve usability. Mozilla says the changes are intended to create a more personalised and modern browsing experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU digital wallet nears rollout

Interoperability tests for the European Digital Identity Wallet have marked a significant step towards deployment. In a major industry-wide exercise, systems were tested under real conditions to ensure compatibility across providers.

The initiative forms part of the EU’s plan to provide citizens with a secure digital wallet for identification and online services. The system will allow users to store identity data and access services, including electronic signatures.

Results showed that most test scenarios were successfully completed, confirming that independent systems can work together effectively. The exercise also highlighted areas requiring further refinement ahead of wider implementation.

EU officials and industry leaders said the progress supports the development of a unified digital ecosystem. The wallet is expected to simplify everyday services while strengthening security and trust in digital identity solutions.

UNESCO promotes safe AI use and gender equality in Caribbean workshop

UNESCO has organised a regional workshop in Kingston to explore the relationship between AI, gender equality and online safety, reflecting wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

‘Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them,’ said the Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica.

‘The pursuit of equality must extend into every space where women live, work, and where they connect and express themselves – including the digital world,’ said Eric Falt, Regional Director and Representative of UNESCO.

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.

New iPhone vulnerability raises concerns over advanced mobile cyber threats

A newly identified cyberattack known as ‘DarkSword’ is raising concerns about the security of iPhone devices, following reports that millions of users could be exposed to rapid data extraction techniques.

Cybersecurity researchers indicate that the attack targets specific iOS versions, exploiting vulnerabilities in the Safari browser and a graphics processing feature known as WebGPU.

Once access is gained, attackers can retrieve sensitive information, including messages, emails and location data, within minutes, while removing traces of the intrusion almost immediately.

Estimates suggest that a significant share of global iPhone users may be affected, with hundreds of millions of devices running vulnerable software versions.

The scale of exposure remains uncertain, particularly as experts continue to assess whether additional versions of iOS may also be impacted.

Researchers have associated the campaign with a threat actor previously identified by Google, with observed activity across multiple regions.

Such a development highlights growing concerns about the evolution of mobile cyber threats, where increasingly sophisticated techniques are being deployed beyond traditional state-level operations.

TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures rather than an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development.

While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.

Malaysia tightens rules on data centres

Malaysia has quietly restricted new data centre approvals to projects linked to AI, signalling a strategic shift in its digital economy. Authorities confirmed that approvals for non-AI data centre projects have been halted for nearly two years.

The policy reflects mounting pressure on energy and water resources as demand for data centres accelerates. Officials aim to ensure infrastructure supports high-value AI projects rather than lower-impact investments.

Rapid growth has positioned Malaysia as a key regional hub, attracting major global technology firms. Concerns remain over whether the country risks hosting infrastructure without building local innovation capacity.

Leaders say future efforts will focus on balancing investment with domestic benefits and energy sustainability. Plans include expanding power supply and strengthening national AI capabilities to secure long-term gains.

UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

Amazon upgrades Alexa with AI features

Amazon is rolling out an AI upgrade to its Alexa assistant, aiming to make interactions more conversational and responsive. The new version is designed to follow conversational context and respond more naturally.

The update comes as Amazon seeks to compete with advanced AI chatbots that have gained popularity in recent years. Critics have argued that smart speakers have fallen behind newer AI tools.

Users in the UK are expected to notice more personalised and proactive responses from the upgraded assistant, drawing on customers’ personal data. The service will be included with Prime subscriptions or offered as a standalone monthly option.

Analysts say the update could help Amazon gather even more user data and improve engagement by picking up on customers’ habits through conversations. However, questions remain about whether the changes will drive revenue or revive interest in smart speakers.

AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

Meta data processing ruled unlawful in Germany

A Berlin court has ruled that Meta unlawfully processed personal data through its Facebook platform, including information belonging to non-users. Judges found the ‘Find Friends’ feature lacked a valid legal basis for handling third-party data.

The court determined that Meta acted as a data controller and could not rely on consent, contract or legitimate interests to justify the processing. Non-users had no reasonable expectation that their data would be collected or stored.

The German judges also ruled that personalised advertising based on platform data breached GDPR rules. The processing was not considered necessary for providing a social media service and lacked a lawful basis.

However, the court accepted that sensitive personal data entered by users could be processed with explicit consent under the GDPR. The ruling is under appeal and may shape future enforcement of EU data protection law.
