Powerful Gemini update turns simple prompts into ready-to-use results

Gemini can now generate downloadable and ready-to-share files directly in chat across a wide range of formats, including PDF, Microsoft Word, Excel, Google Docs, Sheets, and Slides.

The new feature is meant to remove the extra steps that often follow AI-assisted brainstorming, such as copying content into other applications and reformatting it manually. Instead, users can ask Gemini to create a structured file that is already formatted and ready to download or export to Google Drive.

Supported formats include Google Workspace files, PDF, DOCX, XLSX, CSV, LaTeX, TXT, RTF, and Markdown. The company says the feature is now available globally to all Gemini app users.

Possible uses include turning budget plans into spreadsheets, organising rough ideas into structured documents, converting long discussions into concise reports, and generating PDF study guides from uploaded lecture notes.

Why does it matter?

What changes here is not simply that Gemini can create more file types, but that it moves AI one step closer to replacing part of the software workflow itself. Instead of using AI to generate rough text and then finishing the task manually in Word, Excel, or Google Docs, users can now get output in a format that is already structured for immediate use.

That may reduce friction between prompting and execution, making AI more useful in everyday work, study, and administration. In practical terms, the update pushes Gemini further from being just a conversational assistant towards becoming a tool that can produce finished digital outputs people can actually work with.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

United Nations warns AI-driven advertising could deepen information crisis

The United Nations has warned that the rapid adoption of AI in advertising could deepen a global information integrity crisis. With worldwide advertising spending now exceeding $1 trillion annually, concerns are growing over how automated systems influence what users see, trust, and engage with online.

A briefing by the Department of Global Communications and the Conscious Advertising Network places advertising at the centre of the digital information ecosystem. It argues that advertising helps fund and shape the systems that influence what people see and believe, while AI-driven tools are increasingly being used in media buying and content generation in ways that can amplify disinformation, hate speech, and opaque decision-making.

Transparency gaps in AI advertising systems are also raising concerns over fraud, inefficiency, and declining trust in digital platforms. The report warns that these pressures can weaken independent journalism and reduce advertising effectiveness as confidence in online environments continues to erode.

UN officials and industry representatives are calling for stronger governance, clearer oversight of AI supply chains, and closer cooperation between regulators, advertisers, and civil society. The core message is that without stronger guardrails, AI may accelerate the breakdown of information ecosystem integrity rather than simply improve commercial performance.

Why does it matter?

AI is becoming embedded in systems that shape the online flow of information, which means advertising is no longer only a commercial mechanism but also a force affecting public perception and trust. As automation expands without clear oversight, risks can spread beyond brand safety into disinformation, media sustainability, and democratic discourse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

China pushes AI self-reliance while expanding global cooperation

Chinese Vice Premier Ding Xuexiang has reiterated China’s emphasis on AI self-reliance while also calling for deeper international cooperation, underscoring a dual approach to technology policy amid rising global competition. Speaking at the opening of the 9th Digital China Summit, he presented AI as an important part of China’s wider modernisation agenda.

Ding said China should strengthen self-reliance and independent innovation in AI, arguing that the sector must be able to withstand external pressure and attempts at suppression. He also emphasised application-driven development, calling for faster integration of AI into the real economy to support productivity and industrial transformation.

Alongside those domestic priorities, he called for a more collaborative innovation ecosystem, including closer coordination across the AI industry chain. Internationally, he advocated open and mutually beneficial cooperation, with particular emphasis on computing power, data, and talent.

Regulation also featured prominently in the speech. Ding said AI development must remain safe and controllable, with stronger oversight to ensure the technology serves human interests and remains under human control. Taken together, the message reflects China’s broader effort to balance technological sovereignty with continued international engagement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New federated learning approach highlights shift towards decentralised and privacy-preserving AI

Researchers at MIT have developed a new method that significantly improves privacy-preserving AI training on everyday devices such as smartphones, sensors, and smartwatches.

The approach strengthens federated learning systems, where data remains on devices while models are trained collaboratively, supporting sensitive applications such as healthcare and finance.

The new framework, called FTTE (Federated Tiny Training Engine), addresses long-standing issues in federated learning networks with uneven device capabilities. Traditional systems struggle with delays from limited memory, weak connectivity and slow update cycles, reducing network efficiency and performance.

FTTE improves the process by sending smaller model segments to devices, introducing asynchronous updates and weighting contributions based on freshness. These changes reduce memory load and communication demands while maintaining stable training across heterogeneous devices.
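The freshness-weighting idea described above can be illustrated with a short sketch. This is not MIT's implementation; the function names, the exponential decay rule, and the decay constant are illustrative assumptions about how an asynchronous server might down-weight stale client updates.

```python
# Hedged sketch of freshness-weighted asynchronous aggregation,
# one mechanism the article attributes to FTTE. The exponential
# decay rule and all names here are assumptions for illustration.
import math

def staleness_weight(server_round: int, client_round: int, decay: float = 0.5) -> float:
    """Down-weight an update computed against an older global model."""
    staleness = server_round - client_round
    return math.exp(-decay * staleness)

def apply_update(global_model, client_update, server_round, client_round, lr=1.0):
    """Asynchronously merge one client's update into the global model,
    scaled by how fresh the client's base model was."""
    w = staleness_weight(server_round, client_round)
    return [g + lr * w * u for g, u in zip(global_model, client_update)]

# A fresh update (staleness 0) contributes at full weight;
# a stale one (staleness 3) contributes far less.
model = [0.0, 0.0]
model = apply_update(model, [1.0, 1.0], server_round=5, client_round=5)
model = apply_update(model, [1.0, 1.0], server_round=5, client_round=2)
```

Because the server no longer waits for the slowest device before aggregating, weak connectivity or limited memory on one client does not stall the whole round, which is the bottleneck the article says traditional synchronous federated systems suffer from.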

Testing across simulated and real device networks showed training speeds improved by around 81 percent, with major reductions in memory and data transfer requirements.

Researchers also highlighted the potential to expand AI access in regions with lower-end hardware, while future work will focus on further personalising models for individual devices.

Why does it matter? 

Decentralised AI training marks a shift away from dependence on centralised data centres towards distributed intelligence embedded in everyday devices.

That changes the architecture of AI itself, allowing sensitive data to remain local and reducing privacy risks. At the same time, computation is spread across billions of low-power devices rather than concentrated in a few powerful systems.

The researchers note that such approaches may enable AI training on devices with limited memory and connectivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scaling AI systems highlights growing importance of data governance and infrastructure

Deloitte has argued that the long-term success of AI will depend less on model performance than on the strength and adaptability of the data foundations beneath it, as organisations move from experimentation to operational deployment.

The piece says many AI initiatives still fail to progress beyond the pilot stage, even where controlled tests are successful. In Deloitte’s view, the main constraint is not the models’ capabilities, but whether the underlying data foundations are mature enough to support AI at scale.

That challenge reflects a mismatch between current AI demands and older data investment priorities, which have often focused on compliance, reporting, or technology modernisation rather than AI readiness. As a result, organisations may manage data effectively by traditional standards while still struggling to scale AI.

Deloitte argues that AI systems now consume and generate data with greater speed, scale, and autonomy than earlier enterprise systems. That creates new requirements for timeliness, consistency, explainability, traceability, security, compliance, and machine-readable business meaning, as well as more controlled access to both structured and unstructured data sources.

The piece also presents AI as a tool that can accelerate the operation of data foundations themselves. AI agents, it says, can help interpret business intent, identify and profile relevant data sources, detect quality issues, recommend remediation, and assist in building or adapting data pipelines, reducing tasks that once took weeks to hours.

At the same time, Deloitte stresses that AI does not remove the need for human oversight. Human expertise remains necessary, it argues, for defining intent, setting guardrails, resolving trade-offs, and ensuring accountability.

Deloitte concludes that organisations leading the next phase of AI adoption will be those whose data foundations can operate at the speed of AI, with continuous oversight, machine-readable semantics, AI-assisted operations, and quality embedded directly into data pipelines.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tax season phishing scams surge with fake government sites

Cybercriminal activity tends to intensify during tax-return season, as taxpayers face tighter deadlines and share sensitive financial information. A recent Kaspersky analysis highlights the growing use of fake tax authority websites, phishing emails, and malicious downloads designed to steal personal and banking data.

Attackers are impersonating official revenue services across multiple countries, creating convincing portals that mimic government branding and online tax services. Victims are often prompted to enter login credentials, payment details, or download files containing malware aimed at compromising devices or extracting sensitive information.

Crypto holders are also being targeted through fake compliance portals and fraudulent regulatory notices. These schemes try to trick users into revealing wallet recovery phrases or linking digital wallets, which can lead to full asset theft once access is granted.

AI adds another layer of risk. Kaspersky warns that users who upload tax documents or personal financial data to unverified AI platforms may expose confidential information to leakage, misuse, or further fraud. More broadly, AI is also making phishing and impersonation campaigns easier to scale and harder to detect.

Security experts recommend relying only on official tax channels, checking websites and email sources carefully, avoiding unsolicited downloads, and using secure storage and trusted protection tools when handling tax documents.

Why does it matter?

Tax-season phishing campaigns show how financial data is increasingly being treated as a high-value target for cybercrime. As tax systems, digital finance, crypto assets, and AI tools overlap more closely, a single successful scam can lead not only to immediate financial loss but also to identity theft, device compromise, and broader damage to trust in digital services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Rubrik launches Agent Cloud for enterprise AI governance

Rubrik has launched Rubrik Agent Cloud for the Gemini Enterprise Agent Platform, introducing new governance and operational controls for enterprise AI agents built on Google Cloud.

According to the company, the integration is intended to help organisations accelerate and secure the deployment of AI agents by adding semantic governance and operational resilience through real-time, intent-based guardrails. The company says the offering is powered by its Semantic AI Governance Engine, or SAGE, which is designed to monitor and control autonomous agent behaviour.

Google Cloud’s Satish Thomas, Vice President for Applied AI and Platform Ecosystem, said:

‘As enterprises move into the autonomous era with Gemini Enterprise, security and governance are top of mind. Rubrik helps to provide a unified control layer for agent deployment and security that is critical for AI success.’

Rubrik’s Devvret Rishi, General Manager for AI, stated:

‘Enterprises want the speed of Google Cloud’s AI technologies, but also require the safety of Rubrik’s cyber resilience. Through this collaboration, we will remove the governance bottleneck for customers developing with Gemini Enterprise Agent Platform. RAC provides real-time guardrails organizations need to speed AI agents into production, without the worry of compromising enterprise security or integrity.’

Rubrik says the integration includes automated discovery of agents running on Gemini Enterprise Agent Platform Runtime, visibility into risk, access permissions and policy violations, a unified control interface for AI security policies, and an ‘Agent Rewind’ capability intended to instantly and precisely undo an autonomous agent’s destructive action.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Latvia shows average AI tool adoption levels

Recent data from Eurostat and the Central Statistical Bureau of Latvia highlights that around one-third of people in Latvia use AI tools. Latvian Public Media reports that usage broadly matches the EU average.

In Latvia, 35.1 percent of internet users reported using AI in 2025, slightly above the EU figure of 33 percent. Adoption is highest among younger people, with nearly three-quarters of those aged 16 to 24 using such tools.

Usage varies across demographics, with higher rates among educated users and employed individuals. Men use AI slightly more than women, while regional differences show stronger uptake in the Riga area.

Many non-users say they see no need for AI, while others cite a lack of skills or awareness. The findings are based on official statistics from Latvia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyprus defence minister highlights role of AI and advanced technologies in defence

Cyprus Defence Minister Vasilis Palmas has said that AI and advanced technologies are transforming defence, requiring stronger domestic capabilities. His remarks were recently reported by the Cyprus Mail.

He highlighted the growing roles of AI, autonomous systems, cyberdefence and space technology, stressing the need to secure supply chains and meet the National Guard’s requirements.

Palmas said participation in the European defence innovation programmes is a strategic priority, supporting local technological development and integration into wider industry networks.

Cyprus is advancing several funded projects, strengthening research infrastructure, and preparing a national defence industry plan. Palmas made the remarks at a defence event in Cyprus.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital Dubai rolls out AI workforce programme across public sector

Digital Dubai has launched the AI Workforce Transformation Programme to train 50,000 government employees in AI skills. The initiative is being delivered with the Dubai Government Human Resources Department and the Dubai Centre for Artificial Intelligence.

The programme aims to equip staff with practical knowledge to apply AI in public services and internal processes. It includes tailored training tracks based on job roles, from leadership to general employees.

Officials say the initiative will improve productivity, support innovation and enable more efficient service delivery. It also forms part of wider efforts to strengthen AI adoption across government operations.

The programme is designed to build long-term institutional capabilities and support a technology-driven government model across Dubai's public sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!