AI tool could change marine forecasting methods

An AI-driven forecasting tool developed by the Met Office and the University of Exeter is poised to reshape how marine operations are planned. The low-cost model, MaLCOM, has successfully predicted ocean currents in the Gulf of Mexico.

Designed initially to forecast regional wave patterns around the UK, the framework’s adaptability is now helping model ocean currents in new environments.

The tool’s ability to run on a laptop makes it highly accessible, offering real-time insights that could aid offshore energy operations.

Researchers emphasise the importance of the model’s transparency, which allows users to inspect how it processes data and generates forecasts. This design supports trust in its outputs and offers a strong foundation for ongoing development.

The project began five years ago and has grown through collaboration between academia, government and industry.

Its recent recognition with the ASCE Offshore Technology Conference Best Paper Award underscores the value of partnerships in accelerating progress in AI-based weather and climate tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans new laws to tackle undersea cable sabotage

The UK government’s evolving defence and security policies aim to close legal gaps exposed by modern threats such as cyberattacks and sabotage of undersea cables. As set out in the recent Strategic Defence Review, ministers plan to introduce a new defence readiness bill to better protect critical subsea infrastructure and to prepare for hostile acts that fall outside traditional definitions of war.

The government is also considering revising the outdated Submarine Telegraph Act of 1885, whose penalties, last raised in 1982 to £1,000, are now recognised as inadequate. Instead of merely increasing fines, officials from the Ministry of Defence and the Department for Science, Innovation and Technology intend to draft comprehensive legislation that balances civil and military needs, clarifies how to prosecute sabotage, and updates the UK’s approach to national defence in the digital age.

These policy initiatives reflect growing concern about ‘grey zone’ threats—deliberate acts of sabotage or cyber aggression that stop short of open conflict yet pose serious national security risks. Recent suspected sabotage incidents, including damage to subsea cables connecting Sweden, Latvia, Finland, and Estonia, have highlighted how vulnerable undersea infrastructure remains.

Investigations have linked several of these operations to Russian and Chinese interests, emphasising the urgency of modernising UK law. By updating its legislative framework, the UK government aims to ensure it can respond effectively to attacks that blur the line between peace and conflict, safeguarding both national interests and critical international data flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s telecoms face a key choice between competition and investment

Canada is preparing to finalise a critical policy decision on internet affordability and competition. The core policy, reaffirmed by the Canadian Radio-television and Telecommunications Commission (CRTC), requires the country’s three major telecom providers (Bell, Telus and Rogers) to grant smaller internet service providers (ISPs) wholesale access to their fibre optic networks.

The ruling aims to increase consumer choice and stimulate competition by allowing smaller players to use existing infrastructure rather than building their own. The policy also notably expands Telus’s ability to enter new markets, such as Ontario and Quebec, without additional infrastructure investment.

Following concerns raised by major telecom companies, the federal government has been asked to review and potentially overturn the decision. The CRTC warns that reversing the policy could undo competition gains and limit future ISP options.

Meanwhile, Telus and other supporters argue that upholding the ruling protects regulatory independence and encourages further investment by creating market certainty. Other large carriers counter that the policy discourages investment and creates unfair competition, with Bell reporting significant cuts to planned infrastructure spending.

Smaller providers worry about losing market share as big players expand using shared networks. The decision will strongly influence Canada’s future internet competition and investment landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

These notes, a user-driven fact-checking system expanded under Elon Musk, are meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems, such as Grok or third-party large language models, to submit notes via API. Each AI-generated note will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.
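X has not published the technical details of the pilot, so the snippet below is only a rough sketch of how a third-party system might submit an AI-drafted note for human vetting. The endpoint URL, payload fields and NOTES_API_TOKEN variable are illustrative assumptions, not X’s actual API.

```python
import os
import requests

# Hypothetical endpoint and payload shape; X's real pilot API may differ.
NOTES_ENDPOINT = "https://api.x.com/notes/drafts"   # assumed URL
API_TOKEN = os.environ["NOTES_API_TOKEN"]           # assumed auth scheme

def submit_ai_note(post_id: str, note_text: str, sources: list[str]) -> bool:
    """Submit an AI-drafted Community Note so human raters can vet it."""
    payload = {
        "post_id": post_id,     # the post the note adds context to
        "note": note_text,      # the AI-drafted explanation
        "sources": sources,     # citations reviewers can check
        "author_type": "ai",    # enters the same rating queue as human notes
    }
    resp = requests.post(
        NOTES_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    # Submission only drafts the note; per the pilot design, human raters
    # still decide whether it is helpful enough to be shown publicly.
    return resp.ok
```

Under this design, the API changes only who drafts a note; the community rating step that determines whether it appears publicly remains human.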

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grammarly invests in email with Superhuman acquisition

Grammarly announced on Tuesday that it has acquired the email client Superhuman to expand the AI capabilities of its productivity suite.

Financial details of the deal were not disclosed by either company. Superhuman, founded by Rahul Vohra, Vivek Sodera and Conrad Irwin, has raised over $114 million from investors such as a16z and Tiger Global and was most recently valued at $825 million.

Grammarly CEO Shishir Mehrotra said the acquisition will enable the company to bring enhanced AI collaboration to millions more professionals, adding that email is not just another app but a crucial platform where users spend significant time.

Superhuman’s CEO Rahul Vohra and his team are joining Grammarly, promising to invest further in improving the Superhuman experience and building AI agents that collaborate across everyday communication tools.

Recently, Superhuman introduced AI-powered features like scheduling, replies and email categorisation. Grammarly aims to leverage the technology to build smarter AI agents for email, which remains a top use case for its customers.

The move follows Grammarly’s acquisition of the productivity software company Coda last year and the appointment of Coda’s Shishir Mehrotra as Grammarly’s CEO.

In May, Grammarly secured $1 billion from General Catalyst through a non-dilutive investment, to be repaid through a capped percentage of the revenue generated with the funds rather than with equity.
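The exact repayment terms were not disclosed; the toy calculation below uses entirely hypothetical figures simply to illustrate how a capped, revenue-based repayment differs from selling equity.

```python
# Toy illustration of a non-dilutive, revenue-share repayment.
# All figures are hypothetical; Grammarly's actual terms are undisclosed.
investment = 1_000_000_000       # $1bn provided by the investor
cap_multiple = 1.3               # assumed cap: repay at most 1.3x the investment
revenue_share = 0.10             # assumed 10% of attributable revenue per year
annual_revenue = 2_000_000_000   # assumed revenue generated using the funds

repaid, years = 0.0, 0
while repaid < investment * cap_multiple:
    years += 1
    owed = investment * cap_multiple - repaid
    repaid += min(annual_revenue * revenue_share, owed)  # never exceed the cap

print(f"Repaid ${repaid:,.0f} over {years} years; no equity changes hands")
```

With these made-up numbers, the investor is repaid $1.3 billion over seven years while the company’s ownership structure is untouched, which is what distinguishes the arrangement from a conventional equity round.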

The Superhuman deal further signals Grammarly’s commitment to integrating AI deeply into professional communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple may use ChatGPT or Claude to power Siri

Apple is reportedly in talks with OpenAI and Anthropic as it considers outsourcing AI technology for its voice assistant, Siri.

The discussions are said to include the possibility of training versions of ChatGPT or Claude to run on Apple’s cloud infrastructure. According to Bloomberg’s Mark Gurman, Apple is currently leaning towards Anthropic’s Claude as a better fit for Siri, although no final decision has been made.

While Apple already allows users to access ChatGPT through its Apple Intelligence platform, the integration is currently optional and user-driven.

What is now under consideration would mark a significant shift: choosing a third-party model to power Siri directly. The initiative comes as the company struggles to keep pace in a rapidly advancing AI market dominated by Google, OpenAI, and others.

Apple is still developing its own large language models under a project codenamed LLM Siri. However, these in-house systems are reportedly lagging behind the leading models already available.

Should Apple proceed with a third-party integration, it would signal a rare admission that its internal AI efforts are not enough to compete at the highest level.

Once celebrated for breakthrough innovations like the iPhone, Apple has faced growing criticism for a lack of fresh ideas. With rivals embedding generative AI into everyday tools, the pressure is mounting.

If Siri remains limited, still unable to answer basic questions, Apple risks alienating even its most loyal users. Whether through partnership or internal progress, the company now faces a narrowing window to prove that it still leads rather than follows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s Facebook uses phone photos for AI if users allow it

Meta has introduced a new feature that allows Facebook to access and analyse users’ photos stored on their phones, provided they give explicit permission.

The move is part of a broader push to improve the company’s AI tools, especially after the underwhelming reception of its Llama 4 model. Users who opt in agree to Meta’s AI Terms of Service, which grant the platform the right to retain and use their personal media for content suggestions.

The new feature, currently being tested in the US and Canada, is designed to offer Facebook users creative ideas for Stories by processing their photos and videos through cloud infrastructure.

When enabled, users may receive suggestions such as collages or travel highlights based on when and where images were captured, as well as who or what appears in them. However, participation is strictly optional and can be turned off at any time.

Facebook clarifies that the media analysed under the feature is not used to train AI models in the current test. Still, the system does upload selected media to Meta’s servers on an ongoing basis, raising privacy concerns.

The option to activate these suggestions can be found in the Facebook app’s settings, where users are asked whether they want camera roll data to inform sharing ideas.

Meta has been actively promoting its AI ambitions, with CEO Mark Zuckerberg pushing for the development of ‘superintelligence’. The company recently launched Meta Superintelligence Labs to lead these efforts.

Despite facing stiff competition from OpenAI, DeepSeek and Google, Meta appears determined to deepen its use of personal data to boost its AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Africa risks being left behind in global AI development

Africa is falling far behind in the global race to develop AI, according to a new report by Oxford University.

The study mapped the location of advanced AI infrastructure and revealed that only 32 countries, about 16% of the world’s nations, currently operate major AI data centres.

These facilities are essential for training and developing modern AI systems. In contrast, most African nations remain dependent on foreign technology providers, limiting their control over digital development.

Rather than building local capacity, Africa has essentially been treated as a market for AI products developed elsewhere. Regional leaders have often focused on distributing global tech tools instead of investing in infrastructure for homegrown innovation.

One notable exception is Strive Masiyiwa’s Cassava Technologies, which recently partnered with Nvidia to launch the continent’s first AI factory, located in South Africa. The project aims to expand across Egypt, Kenya, Morocco and Nigeria.

Unlike typical data centres, an AI factory is explicitly built to support the full AI lifecycle, from raw data to trained models. Nvidia’s GPUs will power the facility, enabling ‘AI as a service’ to be used by governments, businesses, and researchers across the continent.

Cassava’s model offers a more sustainable vision, where African data is used to create local solutions, instead of exporting value abroad.

Experts argue that Africa needs more such initiatives to reduce dependence and participate meaningfully in the AI economy. An AI Fund supported by leading African nations could help finance new factories and infrastructure.

With time running out, leaders must move beyond surface-level engagement and begin coordinated action to address the continent’s growing digital divide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!