Albania names first AI-generated minister to fight corruption

Albanian Prime Minister Edi Rama has unveiled the world’s first AI-generated minister, a virtual figure named Diella, who will oversee public tenders in an effort to eradicate corruption. The announcement was made as Rama presented his new cabinet following a decisive election victory in May.

Diella, meaning ‘Sun’ in Albanian, has already been active on the government’s e-Albania portal, where it has issued more than 36,000 digital documents and helped citizens access around 1,000 services.

Now, it will formally take on a cabinet role, marking what Rama described as a radical shift in governance where technology acts as a participant instead of a tool.

The AI will gradually take over responsibility for awarding government tenders, removing decisions from ministries and ensuring assessments are objective. Rama said the system would help Albania become ‘100 per cent corruption-free’ in procurement, a key area of concern in the country’s bid to join the EU by 2030.

Public tenders have long been linked to corruption scandals in Albania, a nation often cited as a hub for money laundering and organised crime. Supporters view Diella’s appointment as a bold step towards transparency, with local media calling it a major transformation in how state power is exercised.

Rama emphasised that the AI minister would have a special mandate to break down bureaucratic barriers and strengthen public trust in administration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches CAF 4.0 for cybersecurity

The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.

The updated CAF provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.

Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.

AI-related cyber risks are also now covered more thoroughly throughout the framework.

The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.

Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.

Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and TikTok win court challenge over EU fee

Europe’s General Court has backed challenges by Meta Platforms and TikTok against an EU supervisory fee imposed under the Digital Services Act (DSA). The companies argued that the levy was calculated unfairly and imposed a disproportionate financial burden.

The supervisory fee, introduced in 2022, requires large platforms to pay up to 0.05% of their annual global net income to cover monitoring costs. Meta and TikTok said the methodology relied on flawed data, inflated their fees, and even double-counted users.
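
For a sense of scale, the short sketch below works through that 0.05% ceiling using purely hypothetical income figures; the actual fees are set in Commission decisions and depend on the very data the companies dispute.

```python
# Hypothetical illustration of the DSA supervisory fee ceiling:
# the annual levy charged to a platform is capped at 0.05% of its
# worldwide net income for the preceding financial year.
FEE_RATE = 0.0005  # 0.05%

def max_supervisory_fee(annual_net_income_eur: float) -> float:
    """Maximum annual fee for a given (hypothetical) net income figure."""
    return annual_net_income_eur * FEE_RATE

# Illustrative, made-up income figures (not the companies' real accounts).
for income in (10e9, 50e9, 100e9):
    print(f"Net income EUR {income / 1e9:.0f}bn "
          f"-> fee of up to EUR {max_supervisory_fee(income) / 1e6:.0f}m")
```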

Their lawyers told the court the process lacked transparency and produced ‘implausible’ results.

Lawyers for the European Commission defended the fee, arguing that group-wide financial resources justified the calculation method. They said the companies had adequate information about how the levy was determined.

The ruling reduces pressure on the two firms as they continue investing in the EU market, and it may shape how supervisory costs are applied to other major platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to be completed in nine months, is far cheaper and faster to make than a traditionally animated feature and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as co-author of a painting, a registration that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude AI gains powerful file editing tools for documents and spreadsheets

Anthropic’s Claude has expanded its role as a leading AI assistant by adding advanced tools for creating and editing files. Instead of manually working with different programs, users can now describe their needs in plain language and let the AI produce or update Word, Excel, PowerPoint, and PDF files.

The feature supports uploads of CSV and TSV data and can generate charts, graphs, or images where needed, with a 30MB size limit applying to both uploads and downloads.

The real breakthrough lies in editing. Instead of opening a document or spreadsheet, users can simply type instructions such as replacing text, changing currencies, or updating job titles. Claude processes the prompt and makes all the changes in one pass, preserving the original formatting.

The capability positions Claude as more efficient than rivals such as Gemini, which can export reports but cannot directly modify existing files.

The feature preview is available on web and desktop for subscribers on Max, Team, or Enterprise plans. Analysts suggest the update could reshape productivity tools, especially after reports that Microsoft has partnered with Anthropic to explore using Claude for Office 365 functions.

By removing repetitive tasks and making file handling conversational, Claude is pushing productivity software into a new phase of automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NotebookLM turns notes into flashcards, podcasts and quizzes

Google’s learning-focused AI tool NotebookLM has gained a major update, making studying and teaching more interactive.

Instead of offering only static summaries, it now generates flashcards that condense key information into easy-to-remember notes, helping users recall knowledge more effectively.

Reports can also be transformed into quizzes with customisable topics and difficulty, which can then be shared with friends or colleagues through a simple link.

The update extends to audio learning, where NotebookLM’s podcast-style Audio Overviews are evolving with new formats. Instead of a single style, users can now create Brief, Debate, or Critique episodes, giving greater flexibility in how material is explained or discussed.

Google is also strengthening its teaching tools. A new Blog Post format offers contextual suggestions such as strategy papers or explainers, while the ability to create custom report formats allows users to design study resources tailored to their needs.

The most significant addition, however, is the Learning Guide. Acting like a personal tutor, it promotes deeper understanding by asking open-ended questions, breaking problems into smaller steps, and adapting explanations to suit each learner.

With these features, NotebookLM is moving closer to becoming a comprehensive learning assistant, offering a mix of interactive study aids and adaptable teaching methods that go beyond simple note-taking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot mixes up Dutch political party policies

Dutch voters have been warned not to rely on AI chatbots for political advice after Google’s NotebookLM mixed up VVD and PVV policies.

When asked about Ukrainian refugees, the tool credited the VVD programme with a PVV proposal to send men back to Ukraine. Similar confusions reportedly occurred when others used the system.

Google acknowledged the mistake and said it would investigate whether the error was a hallucination, the term for incorrect AI-generated output.

Experts caution that language models predict patterns rather than facts, making errors unavoidable. Voting guide StemWijzer stressed that reliable political advice requires up-to-date and verified information.

Professor Claes de Vreese said chatbots might be helpful as supplementary tools but should never replace reading actual party programmes. He also urged stricter regulation to avoid undue influence on election choices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

2025 State of the Union: Tech sovereignty amid geopolitical pressure

The European Commission President, Ursula von der Leyen, delivered her 2025 State of the Union address to the European Parliament in Strasbourg. The speech set out priorities for the coming year and was framed by growing geopolitical tensions and the push for a more self-reliant Europe.

Von der Leyen highlighted that global dynamics have shifted.

‘Battlelines for a new world order based on power are being drawn right now,’ she said.

In this context, Europe must take a more assertive role in defending its own security and advancing the technologies that will underpin its economic future. The President characterised this moment as a turning point for European independence.

Digital policy appeared less prominently than expected in the address. Von der Leyen often used the term ‘technology sovereignty’ to encompass not only digital technologies but also other technologies needed for the green transition and for energy independence. Even so, some specific references to digital policy are worth highlighting.

  • Europe’s right to regulate. Von der Leyen defended Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to mounting pressure and tariff threats from US President Donald Trump’s administration.
  • Regulatory simplification. A dedicated regulatory package (omnibus) on digital was promised, inspired by the Draghi report on EU competitiveness.
  • Investment in digital technology. Startups in key areas such as quantum and AI could receive particular attention, with the aim of enhancing the availability of European capital and strengthening European sovereignty in these fields. According to her, the Commission ‘will partner with private investors on a multi-billion euro Scaleup Europe Fund’. No concrete figures were provided, however.
  • Artificial intelligence as key to European independence. To support the sector, von der Leyen highlighted the importance of initiatives such as the Cloud and AI Development Act and the European AI Gigafactories. She praised the commitment of CEOs from leading European companies to invest in digital in the recently launched AI and Tech Declaration.
  • Mainstreaming information integrity. According to von der Leyen, Europe’s democracy is under attack, with the rise of information manipulation and disinformation. She proposed to create a new European Centre for Democratic Resilience, which will bring together all the expertise and capacity across member states and neighbouring countries. A new Media Resilience Programme aimed at supporting independent journalism and media literacy was also announced.
  • Limits to the use of social media by young people. The President of the Commission raised concerns about the impact of social media on children’s mental health and safety. She committed to convening a panel of experts to consider restrictions on social media access, referencing measures already put in place in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!