EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, which can only be achieved through joint action and long-term commitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jobs and skills transform as AI changes the workplace

AI is transforming the job market as companies cut traditional roles and expand AI-driven positions. Major employers like Accenture, IBM and Amazon are investing heavily in training while reducing headcount, signalling a shift in what skills truly matter.

Research from Drexel University highlights a growing divide between organisations that adopt AI and workers who are prepared to use it effectively. Surveys show that while most companies rely on AI in daily operations, fewer than four in ten believe their employees are ready to work alongside intelligent systems.

Experts say the future belongs to those with ‘human-AI fluency’: people who can question, interpret and apply machine output to real business challenges. Firms that build trust, encourage learning and blend technical understanding with sound judgement are proving best equipped to thrive in the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jersey launches AI council to drive innovation

A new Artificial Intelligence Council has been launched in Jersey to strengthen collaboration and coordinate the island’s approach to AI adoption. Led by Digital Jersey, the council seeks to bring together public and private sector initiatives to ensure AI technologies are used responsibly and effectively.

The council’s mission is to facilitate cooperation and knowledge exchange among key organisations, including the government, Jersey Finance, and the Institute of Directors. It aims to create a unified plan that draws on members’ expertise to maximise benefits while reducing potential risks.

Tony Moretta, chief executive of Digital Jersey and chair of the AI Council, said the island was at a pivotal stage in its AI journey. He emphasised that collective action could accelerate progress far more than isolated efforts across individual organisations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI becomes fastest-growing business platform in history

OpenAI has surpassed 1 million business customers, becoming the fastest-growing business platform in history. Companies in healthcare, finance, retail, and tech use ChatGPT for Work or API access to enhance operations, customer experiences, and team workflows.

Consumer familiarity is driving enterprise adoption. With over 800 million weekly ChatGPT users, rollouts face less friction. ChatGPT for Work now has more than 7 million seats, growing 40% in two months, while ChatGPT Enterprise seats have increased ninefold year-over-year.

Businesses are reporting strong ROI, with 75% seeing positive results from AI deployment.

New tools and integrations are accelerating adoption. The company knowledge feature lets AI work across Slack, SharePoint, and GitHub. Codex accelerates engineering workflows, while AgentKit facilitates rapid enterprise agent deployment.

Multimodal models now support text, images, video, and audio, allowing richer workflows across industries.

Many companies are building applications directly on OpenAI’s platform. Brands like Canva, Spotify, and Shopify are integrating AI into apps, and the Agentic Commerce Protocol is bringing conversational commerce to everyday experiences.

OpenAI aims to continue expanding capabilities in 2026, reimagining enterprise workflows with AI at the core.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MAI-Image-1 arrives in Bing and Copilot with EU launch on the way

Microsoft’s in-house image generator, MAI-Image-1, now powers Bing Image Creator and Copilot Audio Expressions, with EU availability coming soon, according to Mustafa Suleyman. It’s optimised for speed and photorealism in food, landscapes, and stylised lighting.

In Copilot’s Story Mode, MAI-Image-1 pairs artwork with AI audio, linking text-to-image and text-to-speech. Microsoft pitches realism and fast iteration versus larger, slower models to shorten creative workflows.

The rollout follows August’s MAI-Voice-1 and MAI-1-preview. Copilot is shifting to OpenAI’s GPT-5 while continuing to offer Anthropic’s Claude, signalling a mixed-model strategy alongside homegrown systems.

Bing’s Image Creator lists three selectable models: MAI-Image-1, OpenAI’s DALL-E 3, and OpenAI’s GPT-4o. Microsoft says MAI-Image-1 enables faster ideation and hand-off to downstream tools for refinement.

Analysts see MAI-Image-1 as part of a broader effort to reduce dependence on third-party image systems while preserving user choice. Microsoft highlights safety tooling and copyright-aware practices across Copilot experiences as adoption widens.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Project Suncatcher to scale AI computing in space

Google has unveiled Project Suncatcher, a research initiative exploring how AI computation could be scaled in space. The project aims to create an interconnected constellation of solar-powered satellites equipped with Google’s Tensor Processing Unit (TPU) chips.

Researchers hope that off-Earth computation could unlock new possibilities for high-performance AI, powered directly by the Sun. Early research focuses on satellite design, communication systems and radiation testing to ensure the TPUs function in orbit.

The company plans a joint mission with Planet to launch two prototype satellites by early 2027. These trials will test the hardware in space and assess the feasibility of large-scale solar computation networks.

Project Suncatcher continues Google’s tradition of ambitious research ‘moonshots’, following advances in quantum computing and autonomous systems. If successful, it could redefine how energy and computing resources are harnessed for future AI breakthroughs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian government highlights geopolitical risks to critical infrastructure

According to the federal government’s latest Critical Infrastructure Annual Risk Review, Australia’s critical infrastructure is increasingly vulnerable amid global geopolitical uncertainty, supply chain weaknesses and rapid technological change.

The report, released by the Department of Home Affairs, states that geopolitical tensions and instability are affecting all sectors essential to national functioning, such as energy, healthcare, banking, aviation and the digital systems supporting them.

It notes that operational environments are becoming increasingly uncertain both domestically and internationally, requiring new approaches to risk management.

The review highlights a combination of pressures, including cyber threats, supply chain disruptions, climate-related risks and the potential for physical sabotage. It also points to challenges linked to “malicious insiders”, geostrategic shifts and declining public trust in institutions.

According to the report, Australia’s involvement in international policy discussions has, at times, exposed it to possible retaliation from foreign actors through activities ranging from grey zone operations to preparations for state-sponsored sabotage.

It further notes that the effects of overseas conflicts have influenced domestic sentiment and social cohesion, contributing to risks such as ideologically driven vandalism, politically motivated violence and lone-actor extremism.

To address these challenges, the government emphasises the need for adaptable risk management strategies that reflect shifting dependencies, short- and long-term supply chain issues and ongoing geopolitical tensions.

The report divides priority risks into two categories: those considered most plausible and those deemed most harmful. Among the most plausible are extreme-impact cyber incidents and geopolitically driven supply chain disruption.

The most damaging risks include disrupted fuel supplies, major cyber incidents and state-sponsored sabotage. The review notes that because critical sectors are increasingly interdependent, disruption in one area could have cascading impacts on others.

Australia currently imports 61 percent of its fuel from the Middle East, with shipments transiting maritime routes that are vulnerable to regional tensions. Many global shipping routes also pass through the Taiwan Strait, where conflict would significantly affect supply chains.

Home Affairs Minister Tony Burke said the review aims to increase understanding of the risks facing Australia’s essential services and inform efforts to enhance resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE invites public to design commemorative AI coin

The UAE has launched a pioneering initiative inviting the public to design a commemorative coin using AI. The competition, run by the AI Office and the Central Bank, coincides with National Code Day, which commemorates the launch of the UAE’s first electronic government in 2001.

Participants must use generative AI tools to create a circular coin design that adheres to ethical and legal standards and is suitable for minting. Officials emphasise that the initiative reflects the UAE’s ambition to reinforce its position as a global hub for technology and innovation.

Omar Sultan Al Olama, Minister of State for Artificial Intelligence, highlighted the project as part of the nation’s digital vision. Central Bank Governor Khaled Mohamed Balama added that the competition promotes public engagement and the development of innovative skills.

The winning design will feature on a commemorative coin issued by the UAE Central Bank, symbolising the country’s leadership in the digital era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity’s Comet hits Amazon’s policy wall

Amazon has moved to block Perplexity’s Comet after warning that the agent was shopping on its site without identifying itself. Perplexity argues that an agent simply inherits its user’s permissions. The dispute turns a header detail into a question of who gets to intermediate online buying.

Amazon likens agents to delivery or travel intermediaries that announce themselves, and hints at blocking non-compliant bots. Because Amazon runs its own assistant, Rufus, critics fear such rules could become competitive moats; Perplexity calls it gatekeeping.

Beneath this is a business-model clash. Retailers monetise discovery with ads and sponsored placement. Neutral agents promise price-first buying and fewer impulse ads. If bots dominate, incumbents lose margin and control of merchandising levers.

Interoperability likely requires standards, including explicit bot IDs, rate limits, purchase scopes, consented data access, and auditable logs. Stores could ship agent APIs for inventory, pricing, and returns, with 2FA and fraud checks for transactions.
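
To illustrate what ‘explicit bot IDs’ and consented purchase scopes might look like in practice, here is a minimal sketch of an agent making a disclosed request; the header names, scope values and endpoint are hypothetical illustrations, not drawn from any published Amazon or Perplexity specification.

```python
# Minimal sketch of a shopping agent identifying itself explicitly.
# All header names, scope values and the endpoint below are hypothetical.
import requests

response = requests.get(
    "https://store.example.com/agent-api/products",
    params={"q": "usb-c cable"},
    headers={
        # Explicit bot identification instead of mimicking a human browser
        "User-Agent": "ExampleShoppingAgent/1.0 (+https://agent.example.com/about)",
        # Purchase scope the user has consented to, declared up front
        "X-Agent-Scope": "browse,price-compare",
        # Token tying the request to an auditable, user-authorised session
        "Authorization": "Bearer <user-delegated-token>",
    },
    timeout=10,
)
print(response.status_code)
```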

In the near term, expect fragmentation as platforms favour native agents and restrictive terms, while regulators weigh transparency and competition. A workable truce: disclose the agent, honour robots.txt and store policies, and use clear opt-in data contracts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google AI roadmap aims to accelerate nature protection and restoration

Google and the World Resources Institute have co-authored a new paper on how to harness AI to support conservation efforts. The paper begins by highlighting successful applications of AI in nature conservation, such as near-real-time monitoring tools that track forests and oceans.

For instance, platforms like Global Fishing Watch scan billions of satellite signals to map human activity at sea and support sustainable fishing. Citizen-science apps such as iNaturalist use AI to identify plants and animals from a photo, turning observations into usable biodiversity data.

New multimodal approaches combine satellite imagery, audio recordings and field notes to help scientists understand whole ecosystems and decide where conservation efforts are needed most.

The report sets out three recommendations to scale the impact of AI. First, expand primary biodiversity data and shared infrastructure: collect more images, audio and field observations, and make them accessible through common standards and public repositories.

Second, invest in open, trustworthy models and platforms (for example, Wildlife Insights), with transparent methods, independent testing and governance so results can be reused and audited.

Third, strengthen two-way knowledge exchange between AI developers, practitioners, and indigenous and local communities through co-design, training and funding, ensuring tools match real needs on the ground.

Their message is that AI can act as a force multiplier, but only when paired with on-the-ground capacity, ethical safeguards and long-term funding, enabling communities and conservation agencies to use these tools to protect and restore ecosystems. However, Google has faced scrutiny in the past over meeting its climate goals, including its commitment to reduce carbon emissions by 2030.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!