South Korea sets ambition to become AI leader

South Korea has unveiled a national strategy to become one of the world’s top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems and next-generation connectivity.

Authorities aim to expand networks by advancing 5G capabilities and preparing for the commercial deployment of 6G by 2030. Cybersecurity and data integration are also key priorities to support a stronger digital ecosystem.

The strategy includes developing talent across education levels and investing in core technologies such as semiconductors and quantum computing. AI adoption is expected to expand across sectors, including manufacturing, healthcare and agriculture.

The South Korean officials also plan to promote digital inclusion through learning centres and assistive technologies. Coordination between ministries will be strengthened to ensure effective delivery of the long-term roadmap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft expands cloud footprint in Denmark

Microsoft has opened a new data centre region in Denmark, marking a major investment in cloud infrastructure and digital resilience. The Denmark East region spans multiple sites and aims to support secure, local data processing.

The project is expected to boost economic activity, with billions of dollars in projected spending and strong spillover effects for local technology firms. Organisations adopting cloud services are likely to rely on domestic partners across IT, cybersecurity, and software development.

Businesses and public sector users will gain access to advanced cloud and AI tools, alongside improved data sovereignty under the EU rules. Local data storage and low-latency services are designed to strengthen compliance and operational efficiency.

Sustainability also plays a central role, with renewable energy use, zero-water-cooling systems, and waste-heat recovery supporting local Danish communities. Broader ambitions include reinforcing digital sovereignty while enabling innovation across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Boston schools expand AI learning initiative

A new partnership led by the City of Boston aims to expand AI literacy across public schools, supported by funding from tech entrepreneur Paul English. The initiative brings together government, academia and industry to strengthen digital skills.

The programme will introduce AI-focused learning in high schools, alongside teacher training and the development of industry-informed curricula. Plans include creating student ambassador roles and offering access to advanced courses.

University of Massachusetts Boston in the US will help design educational content and provide resources through its applied AI institute. The collaboration aims to prepare students for changing job markets shaped by emerging technologies.

Officials say the effort will support responsible and ethical use of AI while opening career pathways. An advisory board of industry experts will guide the programme and connect schools with the wider technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshitification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Inshitification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Stanford study warns about the risks of ‘sycophantic’ AI chatbots

A new study from Stanford University has raised concerns about the growing use of AI chatbots for personal advice, highlighting risks linked to a behaviour known as ‘sycophancy’, where systems validate users’ views instead of challenging them.

Researchers argue that such responses are not merely stylistic but have broader consequences for decision-making and social behaviour.

The analysis examined multiple leading models, including ChatGPT, Claude, and Gemini, and found that chatbot responses supported user perspectives far more often than human feedback.

In scenarios involving questionable or harmful actions, systems frequently endorsed behaviour that human evaluators would criticise, raising concerns about reliability in sensitive contexts such as relationships or ethical decisions.

Further experiments involving thousands of participants showed that users tend to prefer and trust sycophantic responses, increasing the likelihood of repeated use.

However, such interactions also appeared to reinforce self-centred thinking and reduce willingness to reconsider or apologise, suggesting a deeper impact on social judgement and interpersonal skills.

Researchers warn that users’ tendency to favour agreeable responses may create incentives for developers to prioritise engagement over accuracy or ethical balance.

The findings highlight the need for oversight and caution, with experts advising against relying on AI systems as substitutes for human guidance in complex personal situations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

EU investigates cyber attack targeting Commission websites

The European Commission has confirmed a cyber-attack targeting its cloud infrastructure hosting the Europa.eu services, with authorities acting swiftly to contain the incident and prevent disruption to public access.

The attack was identified on 24 March, prompting immediate mitigation measures to secure systems and maintain service continuity.

Preliminary findings indicate that some data may have been accessed from affected websites, although the full scope of the incident remains under investigation.

The Commission has begun notifying the relevant EU entities that may be affected, while continuing efforts to assess the extent of the breach and strengthen safeguards.

Officials confirmed that internal systems were not affected, limiting the overall impact of the attack.

Monitoring efforts remain ongoing, with additional security measures being implemented to protect data and infrastructure, rather than relying solely on existing defences. The Commission has also committed to analysing the incident to improve its cybersecurity capabilities.

The attack comes amid growing cyber and hybrid threats targeting European institutions and critical services.

Existing frameworks, including the NIS2 Directive and the Cyber Solidarity Act, aim to strengthen resilience and coordination across member states, supporting a more unified response to large-scale cyber incidents across the EU.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists across sectors, including car sales, food delivery and funeral services.

CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Lille proposed as EU customs hub

France has submitted a bid to host the future EU Customs Authority in Lille, positioning itself at the centre of efforts to modernise the customs union. The proposal highlights national expertise and a leading role in shaping recent reforms.

Authorities argue the new body will strengthen internal market security, improve oversight of e-commerce and enhance cooperation between member states. France has supported initiatives to tackle illicit trade and improve risk management.

Officials also point to strong operational experience, including international customs networks and the use of AI tools to screen postal shipments. Such capabilities are presented as key to supporting the authority from its launch, but questions are raised concerning the use of AI and its biases.

Lille is promoted as a strategic logistics hub with strong transport links and access to skilled workers. Its location near major European trade routes is expected to support recruitment and coordination across the bloc.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Digital divide shapes AI job outcomes

A joint study by the International Labour Organization and the World Bank finds that AI will reshape labour markets unevenly across countries. Research covering 135 economies highlights growing risks for workers as automation expands.

Advanced economies show higher exposure to AI, particularly in clerical and professional roles. Lower-income regions face fewer direct impacts but lack the infrastructure and skills needed to capture productivity gains.

The digital divide plays a central role, with many vulnerable jobs already online and therefore exposed to automation. Workers in roles with potential benefits often lack reliable internet access, limiting opportunities.

The ILO’s findings suggest outcomes depend on infrastructure, skills and job design rather than technology alone. Policymakers are urged to improve connectivity, training and social protections to spread benefits more evenly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Human creativity outperforms AI in new research findings

New research challenges assumptions about AI creativity, concluding that human imagination remains significantly more advanced than generative systems.

The study, published in Advanced Science, examined how AI models perform in visual creative tasks compared with both professional artists and non-artists.

Researchers developed an experimental method to assess creativity using abstract visual tasks, comparing human and AI outputs under different conditions.

Results showed a clear hierarchy, with visual artists achieving the highest creativity scores, followed by the general population, while AI models ranked lower, especially when operating without human guidance.

These findings indicate that even when trained on human-created material, AI struggles to replicate originality and imaginative depth.

The study argues that creativity should be analysed as a process rather than judged solely by final outputs. By examining stages from idea generation to execution, researchers found that AI systems rely heavily on human input throughout development and use.

Removing human assistance significantly reduced the quality and originality of AI-generated results, reinforcing the limitations of current generative models.

Overall, the research highlights a persistent gap between human and artificial creativity, suggesting that AI operates more as a tool guided by human direction than as an independent creative agent.

The findings contribute to broader debates in cognitive science and AI, emphasising the continued importance of human involvement in creative processes.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!