Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft has spent the past year pushing Copilot as a mainstream productivity tool, baking it into Windows 11 and promoting new hardware such as Copilot+ PCs, yet its own legal language urges caution. In Microsoft’s Copilot Terms of Use, updated in October last year, the company states Copilot is ‘for entertainment purposes only’, may ‘make mistakes’, and ‘may not work as intended’.

The terms warn users not to rely on Copilot for important advice and to ‘use Copilot at your own risk’, a caveat that sits uneasily alongside the product’s business-focused marketing.

The Tom’s Hardware article argues Microsoft is not unique in issuing such warnings. Similar disclaimers are common across the generative AI industry. It points to xAI’s guidance that AI is ‘probabilistic in nature’ and may produce ‘hallucinations’, generate offensive or objectionable content, or fail to reflect real people, places or facts.

While these limitations are well known to those familiar with large language models, the piece notes that many users still treat AI output as authoritative, even in professional settings where scepticism should be standard.

To underline the risks of overreliance, the text cites reports of Amazon-related incidents allegedly linked to ‘Gen-AI assisted changes’. It says some AWS outages reportedly occurred after engineers let an AI coding bot address an issue without sufficient oversight, and that Amazon’s website experienced ‘high blast radius’ problems that required senior engineers to step in. These examples illustrate how AI-generated errors can propagate quickly through complex systems when humans fail to verify the output.

Why does it matter?

Overall, the article acknowledges that generative AI can boost productivity, but stresses it remains a tool with no accountability for mistakes, making verification essential. It warns that automation bias (the tendency to trust machine outputs even over contradictory evidence) can be intensified by AI systems that produce plausible-sounding answers that pass casual inspection.

While such disclaimers help companies limit legal liability, the piece suggests aggressive marketing of AI as a productivity ‘hack’ may downplay real-world risks, particularly as firms seek returns on the billions invested in AI hardware and talent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft commits $10 billion to Japan’s AI future

Microsoft Corporation announced a $10 billion investment in Japan over four years to expand AI infrastructure and strengthen cybersecurity partnerships with the government. The investment aligns with Prime Minister Sanae Takaichi’s strategy for economic growth through advanced technologies.

The company will collaborate with Japanese firms SoftBank and Sakura Internet to develop domestically based AI computing capacity, allowing Japanese businesses and government agencies to store sensitive data locally whilst accessing Microsoft Azure services.

Why does it matter?

Microsoft plans to train 1 million engineers and developers by 2030 as part of the initiative to build Japan’s digital workforce in AI and emerging technologies. The investment addresses Japan’s growing demand for cloud and AI services as part of the company’s Asia-wide expansion strategy.

The announcement, made on 3 April, reflects Microsoft’s commitment to supporting Japanese technological advancement whilst maintaining data security. Sakura Internet’s share price jumped 20 percent following the news, signalling strong market confidence in the partnership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU delegation in China calls for sustainable e-commerce and safety standards

Members of the European Parliament (MEPs) completed a visit to Beijing and Shanghai to address pressing e-commerce challenges affecting the European single market.

The delegation studied local business models and market supervision frameworks, engaging with Chinese regulators, e-commerce platforms, and representatives of EU companies.

The discussions highlighted the surge of parcels from China, which now account for 91% of small shipments to Europe, and the resulting pressures on fair competition.

MEPs stressed that regulatory compliance must be consistent across all operators, ensuring consumer protection is not compromised by disparities in market practices or enforcement gaps.

The delegation urged representatives of e-commerce platforms to implement preventive measures, reinforcing accountability in areas such as product safety, customs compliance, and the removal of unsafe goods from the market.

MEPs underscored that these standards are essential to maintaining a sustainable and secure e-commerce environment for European citizens.

The visit, the first in eight years, demonstrated the EU’s commitment to safeguarding consumer rights, strengthening international cooperation, and ensuring digital commerce evolves in a manner that is fair, transparent, and safe for all citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.
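That separation can be sketched in outline. The sketch below is purely illustrative: the function names, the scoring scheme and the stubbed stand-in for the LLM-simulated stakeholder are assumptions for this example, not the actual SEED-SET implementation or API.

```python
# Illustrative sketch only: names and scoring are hypothetical,
# not the MIT framework's actual code.

def objective_score(scenario):
    # Objective performance metric, e.g. cost efficiency
    # (lower cost yields a higher score).
    return 1.0 / (1.0 + scenario["cost"])

def simulated_stakeholder_score(scenario):
    # Stand-in for an LLM simulating stakeholder preferences;
    # here, a simple penalty for unequal allocations across communities.
    allocations = scenario["allocations"]
    spread = max(allocations) - min(allocations)
    return 1.0 - spread  # 1.0 = perfectly even split

def ethical_divergence(scenario):
    # Large gaps flag scenarios that are efficient on the objective
    # metric but diverge from simulated ethical expectations.
    return abs(objective_score(scenario) - simulated_stakeholder_score(scenario))

scenarios = [
    {"name": "even-split", "cost": 1.0, "allocations": [0.5, 0.5]},
    {"name": "cheap-but-skewed", "cost": 0.1, "allocations": [0.9, 0.1]},
]

flagged = max(scenarios, key=ethical_divergence)
print(flagged["name"])  # the cheap but unevenly allocated scenario
```

The point of the separation is visible even in this toy: the cheaper scenario wins on the objective metric yet scores worst on the simulated preference axis, so it is the one surfaced for human review before deployment.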

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EIB highlights AI as key driver of Croatia’s economic growth

The European Investment Bank and the Croatian National Bank have emphasised the strategic importance of AI in strengthening Croatia’s economic competitiveness. Discussions at a joint conference focused on accelerating AI adoption through coordinated investment, policy development and skills enhancement.

Despite strong investment activity among firms in Croatia, the uptake of advanced technologies remains limited. Only a small share of companies systematically use generative AI, with applications largely confined to internal processes, highlighting significant untapped potential for productivity gains.

Participants identified key structural barriers, including limited access to finance, shortages of skilled workers and regulatory uncertainty.

Addressing these challenges requires a combined approach that mobilises private capital, improves access to funding for smaller firms and supports the development of a more robust innovation ecosystem.

The EIB continues to play a central role in Europe’s digital transformation, with major funding initiatives aimed at scaling AI technologies and strengthening strategic infrastructure.

By aligning financial instruments with policy priorities, the initiative seeks to enhance long-term growth, resilience and integration into global value chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New Oracle agentic AI tool streamlines CAD to procurement workflows

Oracle has launched a new agentic AI application designed to connect engineering and procurement into a single workflow. The Design-to-Source Workspace for product lifecycle management aims to reduce delays, improve traceability, and minimise compliance risks across sourcing processes.

Traditional design-to-source models often operate sequentially, with engineering and procurement working in separate stages. Oracle’s approach replaces that structure with a continuous, coordinated loop, where AI evaluates cost, supply, and risk in real time as designs evolve.

The platform translates CAD data directly into sourcing actions, eliminating manual input and reducing errors. Automated workflows handle supplier identification, risk assessment, and request-for-quote execution, while maintaining compliance and auditability throughout the process.
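A continuous loop of that kind might look like the following in outline. Every name, data shape and threshold here is a hypothetical illustration of the general pattern (re-scoring cost, supply and risk on each design change), not Oracle's actual product or API.

```python
# Hypothetical sketch of a continuous design-to-source loop;
# names and structures are illustrative, not Oracle's API.

def evaluate(revision, suppliers):
    """Re-score cost, supply and risk each time the design changes."""
    part = revision["part"]
    candidates = [s for s in suppliers if part in s["parts"]]
    if not candidates:
        # No viable source: surface the gap instead of waiting
        # for a later procurement handoff stage.
        return {"action": "flag_no_source", "part": part}
    # Pick the supplier with the best risk-adjusted cost.
    best = min(candidates, key=lambda s: s["cost"][part] * (1 + s["risk"]))
    return {"action": "issue_rfq", "part": part, "supplier": best["name"]}

suppliers = [
    {"name": "A", "parts": {"bracket"}, "cost": {"bracket": 2.0}, "risk": 0.1},
    {"name": "B", "parts": {"bracket"}, "cost": {"bracket": 1.5}, "risk": 0.5},
]

# Each design revision re-enters the loop as it happens, rather than
# engineering and procurement working in separate sequential stages.
for revision in [{"part": "bracket"}, {"part": "hinge"}]:
    print(evaluate(revision, suppliers))
```

The contrast with the sequential model is that sourcing decisions and risk flags are produced per revision, while the design is still in flux, rather than in a single batch after engineering hands off.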

Expected gains include up to 60% less manual work, significantly faster RFQ cycles, and a 20% to 30% reduction in overall sourcing timelines. Greater accuracy and improved decision-making allow teams to focus on higher-value tasks rather than repetitive coordination.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Call to scrap cookie banners gains traction

A new study argues that cookie consent banners should be scrapped, claiming they fail to protect user privacy and instead create frustration. The research highlights how repeated pop-ups have become a defining feature of the modern internet.

The paper suggests that cookie banners, originally introduced under data protection laws, have led to ‘performative compliance’ rather than meaningful consent. Users often click through notices without understanding them, weakening the purpose of privacy regulation.

Researchers say the system may even normalise data tracking by encouraging habitual acceptance. Instead of improving transparency, the approach risks obscuring how personal data is collected and used across digital platforms.

The study calls for regulators to move beyond banner-based consent towards more effective privacy protections. It argues that current rules may hinder the development of better solutions by giving the impression that the problem has already been addressed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter, accompanied by a petition, was sent to Google CEO Sundar Pichai and YouTube CEO Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after shows such as Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material, and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot