Powerful Gemini update turns simple prompts into ready-to-use results

Gemini can now generate downloadable and ready-to-share files directly in chat across a wide range of formats, including PDF, Microsoft Word, Excel, Google Docs, Sheets, and Slides.

The new feature is meant to remove the extra steps that often follow AI-assisted brainstorming, such as copying content into other applications and reformatting it manually. Instead, users can ask Gemini to create a structured file that is already formatted and ready to download or export to Google Drive.

Supported formats include Google Workspace files, PDF, DOCX, XLSX, CSV, LaTeX, TXT, RTF, and Markdown. Google says the feature is now available globally to all Gemini app users.

Possible uses include turning budget plans into spreadsheets, organising rough ideas into structured documents, converting long discussions into concise reports, and generating PDF study guides from uploaded lecture notes.

Why does it matter?

What changes here is not simply that Gemini can create more file types, but that it moves AI one step closer to replacing part of the software workflow itself. Instead of using AI to generate rough text and then finishing the task manually in Word, Excel, or Google Docs, users can now get output in a format that is already structured for immediate use.

That may reduce friction between prompting and execution, making AI more useful in everyday work, study, and administration. In practical terms, the update pushes Gemini further from being just a conversational assistant towards becoming a tool that can produce finished digital outputs people can actually work with.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

United Nations warns AI-driven advertising could deepen information crisis

The United Nations has warned that the rapid adoption of AI in advertising could deepen a global information integrity crisis. With worldwide advertising spending now exceeding $1 trillion annually, concerns are growing over how automated systems influence what users see, trust, and engage with online.

A briefing by the Department of Global Communications and the Conscious Advertising Network places advertising at the centre of the digital information ecosystem. It argues that advertising helps fund and shape the systems that influence what people see and believe, while AI-driven tools are increasingly being used in media buying and content generation in ways that can amplify disinformation, hate speech, and opaque decision-making.

Transparency gaps in AI advertising systems are also raising concerns over fraud, inefficiency, and declining trust in digital platforms. The report warns that these pressures can weaken independent journalism and reduce advertising effectiveness as confidence in online environments continues to erode.

UN officials and industry representatives are calling for stronger governance, clearer oversight of AI supply chains, and closer cooperation between regulators, advertisers, and civil society. The core message is that without stronger guardrails, AI may accelerate the breakdown of information ecosystem integrity rather than simply improve commercial performance.

Why does it matter?

AI is becoming embedded in systems that shape the online flow of information, which means advertising is no longer only a commercial mechanism but also a force affecting public perception and trust. As automation expands without clear oversight, risks can spread beyond brand safety into disinformation, media sustainability, and democratic discourse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Kazakhstan advances digital economy with AI business assistant

Kazakhstan has introduced an AI-powered assistant designed to simplify the process of starting a business, according to Minister of Digital Development Zhaslan Madiyev. Developed in cooperation with the Ministry of Finance, the platform aims to provide data-driven guidance to early-stage entrepreneurs.

Built around a digital mapping system, the assistant evaluates factors such as nearby businesses, customer flow, and competition. Its recommendations aim to help users choose more viable locations and avoid opening duplicate businesses in already saturated areas.

Officials say the tool could reduce startup operating costs by up to half while improving long-term business sustainability. Alongside it, a second AI assistant already provides continuous guidance on tax reporting and regulatory compliance, translating complex requirements into clearer, more practical steps for users. According to Kazakhstani reporting, the tax assistant has already processed more than 5,000 requests.

The development forms part of Kazakhstan’s wider digital transformation agenda, which aims to modernise public services and strengthen the country’s digital economy through practical AI deployment. The government says more than 50 AI-powered services are now being developed to support citizens and businesses.

Why does it matter?

Kazakhstan’s AI assistant points to a shift from basic digital services towards more active, real-time decision support for entrepreneurs. Data-driven recommendations can help reduce startup risks, limit market oversaturation, and support more efficient resource allocation across local economies.

Simplified tax and compliance guidance also targets one of the main barriers facing early-stage businesses: administrative complexity. Placed within Kazakhstan’s broader AI-first digital strategy, the initiative signals a wider move towards a more competitive and operationally AI-driven digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

New Chinese rules restrict digital promotion of financial products

China has introduced new online marketing rules for financial products, further tightening its long-standing restrictions on cryptocurrency-related activity. The new framework limits the promotion of financial products to licensed entities and treats digital currency trading and issuance as illegal financial activity.

Issued by the People’s Bank of China and seven other regulators, the Administrative Measures for Online Marketing of Financial Products will take effect on 30 September 2026. The rules extend responsibility to platforms, intermediaries, and content creators who promote or facilitate financial products online.

Any assistance in promoting or facilitating prohibited financial activity may now be treated as participation in illegal finance, expanding enforcement beyond direct trading bans. In practice, that broadens the focus from financial products themselves to the wider digital promotion layer, including online displays, traffic generation, and other forms of internet-based marketing support.

Authorities say the measures are intended to protect consumers by limiting misleading or aggressive online promotion, including livestream marketing and viral investment content. In that sense, the rules are not only about crypto, but about tighter control over how financial products are marketed in digital environments.

The policy also reinforces China’s existing position, dating back to 2021, when regulators declared all cryptocurrency transactions illegal, while pushing enforcement deeper into the digital advertising and distribution layers of financial markets.

Why does it matter?

Stronger oversight of online financial promotion shows that crypto-related advertising is increasingly being treated as a regulatory risk category, not just a marketing issue. The Chinese move also points to a broader trend in which regulators are extending scrutiny beyond financial products themselves to the digital channels, influencers, and platforms that help distribute them.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Agentic AI to take over half of UAE public sector

The UAE has announced an ambitious government framework to integrate agentic AI across 50% of the public sector and services within two years. Revealed at a Cabinet meeting chaired by Sheikh Mohammed bin Rashid Al Maktoum, the initiative positions AI as an operational partner managing government functions autonomously.

Agentic AI systems will be deployed to monitor developments, analyse data, recommend actions and run operational workflows without human intervention. Authorities expect the shift to improve service speed and efficiency, cut costs, and enable real-time evaluation and continuous improvements across federal entities.

The programme will roll out in phases under a dedicated task force, with performance-based assessments for government entities and leadership. A parallel focus has been placed on workforce development, with training programmes designed to equip employees with advanced AI capabilities.

The framework builds on two decades of digital transformation in the UAE, including earlier national AI strategies and smart government initiatives, and expands the country’s push towards fully integrated, data-driven governance systems.

Why does it matter?

The project marks a shift from digital tools to autonomous governance, where AI can directly run and optimise public services in real time. That raises efficiency and responsiveness, but also makes strong oversight, governance, and workforce readiness essential to ensure safe and effective implementation. 

The approach could also serve as a global blueprint for large-scale government AI adoption, shaping how states modernise public services and integrate autonomous systems into core governance. 

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Wikipedia-based AI model identifies 100 emerging technologies to watch in 2026

A new analysis by Australian researchers reveals how AI is reshaping the way emerging technologies are identified and tracked.

Using a dataset derived from thousands of Wikipedia entries, the researchers mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, highlighting the fastest-growing technologies across science and industry.

The findings place reinforcement learning at the top, followed closely by blockchain and other rapidly advancing fields such as 3D printing, soft robotics and augmented reality.

These technologies reflect a broader shift towards data-driven innovation, where systems capable of learning, adapting and scaling are becoming central to both research and commercial applications.

Unlike traditional forecasts, which often rely on expert judgement, the model uses large-scale data analysis to detect patterns of growth and interconnection between technologies.

The approach offers a more dynamic and repeatable method, capturing early signals that might otherwise be overlooked in manual assessments.

Despite its advantages, researchers caution that predicting real-world impact remains difficult at early stages.

While AI-driven mapping provides valuable insights, policymakers and industry leaders still rely on hybrid approaches that combine data analysis with expert evaluation, as seen in frameworks developed by organisations such as the World Economic Forum.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.5 pushes AI deeper into agentic work

OpenAI has released GPT-5.5 as its latest push towards more capable agentic AI, presenting the model as better suited to complex, multi-step digital work across coding, research, analysis, and enterprise tasks.

The company frames it as a system designed to carry more of the work itself, moving beyond isolated prompt-response interactions towards fuller execution across digital workflows.

According to OpenAI, the model’s biggest gains are in software engineering, tool use, and knowledge work. GPT-5.5 improves performance on coding and workflow benchmarks, strengthens long-horizon reasoning, and handles complex digital tasks with greater efficiency while maintaining earlier latency standards.

OpenAI also says the model performs better across documents, spreadsheets, presentations, and data analysis, reflecting a broader effort to make AI more useful across full professional workflows rather than only as an assistant for isolated tasks.

The release also highlights stronger performance in scientific and technical research, alongside expanded safety testing and tighter safeguards for higher-risk capabilities.

The wider significance of GPT-5.5 lies in its reflection of the next phase of AI competition. The focus is shifting from better answers to more reliable execution across real-world digital work, with growing implications for productivity, oversight, and governance.

Why does it matter? 

GPT-5.5 signals a shift from AI as a passive tool to AI as an active digital operator that can complete full workflows across coding, research, and business systems with minimal human supervision.

Over time, such capability could reshape productivity, speed up development cycles, and shift competitive advantage toward those best integrating autonomous AI while managing safety and governance risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

World Economic Forum analysis explains what drives startup growth today

Findings from the World Economic Forum (WEF) highlight a shift in how early-stage ventures grow from pilot projects into fully operational businesses.

Evidence gathered from more than 200 start-ups by UpLink, the WEF's early-stage innovation initiative, alongside investors and policymakers, suggests that scaling no longer depends primarily on innovation itself, but on the conditions enabling deployment.

Core and emerging technologies already exist across sectors, yet barriers remain in market adoption, coordination, and institutional readiness.

Resilience has moved from a strategic ambition to an immediate operational requirement. Start-ups are increasingly built around urgent, clearly defined problems, allowing them to adapt quickly in volatile environments shaped by geopolitical tensions, supply chain disruption, and climate pressures.

Strong partnerships have emerged as a central priority, with a significant majority of ventures seeking collaboration with larger corporate actors to gain access to infrastructure, regulatory pathways, and credibility.

Collaboration at early stages is proving essential in reducing risk and accelerating adoption. Traditional scaling models, based on proving technology before securing buyers, are losing effectiveness in complex sectors with high institutional risk.

Shared responsibility across multiple stakeholders enables innovation to move beyond demonstration phases into real-world application, particularly when aligned with procurement systems and regulatory frameworks.

Commercial viability has also become central to scaling success. Impact alone is no longer sufficient, as investors and buyers increasingly prioritise measurable financial outcomes such as cost efficiency, risk reduction, and resilience.

Market signals, including early contracts and partnerships, now outweigh funding rounds as indicators of credibility.

Why does it matter?

The WEF analysis underscores that scalable growth depends less on innovation alone and more on coordinated ecosystems that turn pilots into real-world adoption.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Crypto derivatives rules face overhaul in Thailand consultation

Thailand is moving to simplify access to crypto derivatives markets through proposed regulatory changes aimed at reducing operational barriers for digital asset firms. The Securities and Exchange Commission of Thailand has opened a consultation on letting licensed crypto firms access derivatives without separate corporate entities. 

Current regulations require firms to operate distinct legal structures for derivatives activity, increasing compliance costs and limiting market expansion. The proposed framework consolidates licensing under a single regulatory umbrella while maintaining oversight through internal controls and conflict management rules. 

The reform reflects a broader international shift towards integrating crypto and traditional financial markets within unified trading environments. Similar momentum is visible in the United States, where discussions on crypto perpetual futures are advancing alongside increased institutional activity in derivatives infrastructure.

Market activity is already responding to anticipated changes, including acquisitions of regulated trading platforms to support expanded product offerings. These developments indicate growing alignment between regulatory evolution and industry expansion in digital asset derivatives markets.

Why does it matter? 

These changes represent a broader move toward integrating crypto and traditional markets under unified regulatory frameworks. Reducing structural barriers may improve efficiency and innovation while preserving oversight.

Parallel developments across key jurisdictions also point to growing global competition to set standards for crypto derivatives, with implications for liquidity, access, and institutional participation worldwide. 

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies, and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!