Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.


The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keeps pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.


White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside these measures, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.


Taiwan invests NT$50 million to train AI-ready professionals

Taiwan’s Ministry of Economic Affairs has announced the launch of the first phase of its 2025 AI talent training programme, set to begin in August.

The initiative aims to develop 152 skilled professionals capable of supporting businesses in adopting AI technologies across a wide range of sectors.

Chiu Chiu-hui, Director-General of the Industrial Development Administration, said the programme has attracted over 60 domestic and international companies that will contribute instructors and offer internship placements.

Notable participating firms include Microsoft Taiwan, ASE Group, and Acer. Students will be selected from leading universities, such as National Taipei University, National Taipei University of Technology, National Formosa University, and National Cheng Kung University.

Structured as a one-year curriculum, the training is divided into three four-month phases. The initial stage will focus on theoretical foundations and current industry trends.

This will be followed by four months of practical application and, finally, four months of on-site corporate internships. Graduates of the programme are required to commit to working for one of the participating companies for a minimum of two years upon completion.

Participants will receive financial support throughout their training. A monthly stipend of NT$20,000 (approximately US$673) will be provided during the academic and practical stages, increasing to NT$30,000 during the internship period.

The government has earmarked NT$50 million for the first phase of the programme, and additional co-investment from private companies is being actively encouraged.

According to Chiu, some Taiwanese firms are struggling to find qualified talent to support their AI ambitions. In response, the ministry trained approximately 70,000 AI professionals last year and has set a lower target of more than 50,000 for 2025.

However, the long-term vision remains ambitious — to develop a total of 200,000 AI specialists within the next four years.

Registration for the second phase of the initiative is now open and will close in September. Training will expand to include universities and research institutions across Taiwan, with the next round of classes scheduled to start in October.

Industry leaders have praised the initiative as a timely response to the rapidly evolving technological landscape.

Lee Shu-hsia, Vice President of Human Resources at ASE Group, noted that AI is no longer confined to manufacturing but is increasingly being integrated into various functions such as procurement, human resources, and management.

The cross-departmental adoption is creating demand for AI-literate professionals who can bridge technical knowledge with operational needs.

Danny Chen, General Manager of Microsoft Taiwan’s public business group, added that the digital transformation underway in many companies has led to a significant increase in demand for AI-related talent.

Chen expressed optimism that the training programme will help companies not only recruit but also retain skilled personnel. The Ministry of Economic Affairs expects participation to grow in the coming years and plans to expand both the scope and scale of the training.

In addition to co-investment, the ministry is exploring partnerships with international institutions to further enhance the programme’s global relevance and ensure alignment with emerging industry standards.

While the government’s long-term goal is to future-proof Taiwan’s workforce, the immediate focus is on plugging the talent gap that threatens to slow industrial innovation.

By linking academic institutions with real-world corporate challenges, the programme aims to produce graduates who are not only technically proficient but also industry-ready from day one.

Observers say the initiative represents a proactive strategy in preparing Taiwan’s economy for the next wave of AI-driven transformation. With AI applications becoming increasingly prevalent in sectors ranging from logistics to administration, building a robust talent pipeline is now viewed as a national priority.


Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing from his work at Amazon and startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across multiple startups and established firms alike.

Still, original thinking remains a hurdle. A recent study reported by PsyPost found that human pairs often outperform AI tools in generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and more original thinking.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

That shift freed up time and budget for storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance. Some fear AI-generated content will eradicate originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles. Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands are building eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.


Survey finds developers value AI for ideas, not final answers

As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results is eroding. According to Stack Overflow’s 2025 Developer Survey, the use of AI has risen to 84%, up from 76% last year. However, trust in its output has dropped, especially among experienced professionals.

The survey found that 46% of developers now lack trust in AI-generated answers.

That figure marks a sharp increase from 31% in 2024, suggesting growing scepticism despite higher adoption. At the other end of the scale, only 3.1% of developers report high trust in AI responses.

Interestingly, trust varies with experience. Beginners were twice as likely to express high confidence in AI, with 6.1% reporting strong trust, compared with just 2.5% among seasoned developers. The results indicate a divide in how AI is perceived across the developer landscape.

Despite doubts, developers continue to use AI tools across various tasks. The vast majority – 78.5% – use AI on an infrequent basis, such as once a month. The pattern holds across experience levels, suggesting cautious and situational usage.

While trust is lacking, developers still see AI as a helpful starting point. Three in five respondents reported favourable views of AI tools overall, one in five viewed them negatively, and the remaining fifth were neutral.

However, that usefulness has limits. Developers were quick to seek human input when unsure about AI responses. Seventy-five percent said they would ask someone when they didn’t trust an AI-generated answer. Fifty-eight percent preferred human advice when they didn’t fully understand a solution.

Ethics and security were also areas where developers preferred human judgement. Again, 58% reported turning to colleagues or mentors to evaluate such risks. Such cases show a continued reliance on human expertise in high-stakes decisions.

Stack Overflow CEO Prashanth Chandrasekar acknowledged the limitations of AI in the development process. ‘AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,’ he said. He added that AI works best when paired with a ‘trusted human intelligence layer’.

The data also revealed that developers may not trust AI entirely but use it to support learning. Forty-four percent of respondents admitted using AI tools to learn how to code, up from 37% last year. A further 36% use it for work-related growth or career advancement.

The results highlight the role of AI as an educational companion rather than a coding authority. It can help users understand concepts or generate basic examples, but most still want human review. That distinction matters as teams consider how to integrate AI into production workflows.

Some developers are concerned that overreliance on AI could reduce the depth of their problem-solving skills. Others worry about hallucinations — AI-generated content that appears accurate but is misleading or incorrect. Such risks have led to a cautious, layered approach to using AI tools in real-life projects.

Stack Overflow’s findings align with broader industry trends in AI adoption and trust. Tech firms are exploring ways to integrate AI safely, with many prioritising transparency and human oversight. Chandrasekar believes developers are uniquely positioned to help shape the future of AI.

‘By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value,’ he said. ‘They’ll help build the AI technologies and products of tomorrow.’

As AI continues to expand into software development, one thing is clear: trust matters. Developers are open to using AI – but only when it supports, rather than replaces, human judgement. The challenge now is building systems that earn and maintain that trust.


UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck School of Medicine, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.


Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.


EU AI Act begins as tech firms push back

Europe’s AI crackdown officially begins soon, as the EU enforces the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU also distinguishes itself from the US, where a re-elected Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn developers not to count on future political deals and to take immediate steps toward compliance instead.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.


Australia reverses its stance and restricts YouTube for children under 16

Australia has announced that YouTube will be banned for children under 16 starting in December, reversing its earlier exemption from strict new social media age rules. The decision follows growing concerns about online harm to young users.

Platforms like Facebook, Instagram, Snapchat, TikTok, and X are already subject to the upcoming restrictions, and YouTube will now join the list of ‘age-restricted social media platforms’.

From 10 December, all such platforms will be required to ensure users are aged 16 or older or face fines of up to AU$50 million (£26 million) for not taking adequate steps to verify age. Although those steps remain undefined, users will not need to upload official documents like passports or licences.

The government has said platforms must find alternatives instead of relying on intrusive ID checks.

Communications Minister Anika Wells defended the policy, stating that four in ten Australian children reported recent harm on YouTube. She insisted the government would not back down under legal pressure from Alphabet Inc., YouTube’s US-based parent company.

Children can still view videos, but won’t be allowed to hold personal YouTube accounts.

YouTube criticised the move, claiming the platform is not social media but a video library often accessed through TVs. Prime Minister Anthony Albanese said Australia would campaign at a UN forum in September to promote global backing for social media age restrictions.

Exemptions will apply to apps used mainly for education, health, messaging, or gaming, which are considered less harmful.


Google adds narrated slide videos to NotebookLM

Google has added a new dimension to NotebookLM by introducing Video Overviews, a feature that transforms your content into narrated slide presentations.

Originally revealed at Google I/O, the tool builds on the popularity of Audio Overviews, which generated AI-hosted podcast-style summaries. Instead of relying solely on audio, users can now enjoy visual storytelling powered by the same AI.

The Video Overviews feature automatically pulls elements such as images, diagrams, quotes and statistics from documents to create slide-based summaries.

The tool supports professionals and students by simplifying complex reports or academic papers into engaging visual formats. Users can also customise the video output by defining learning goals, selecting key topics, or tailoring it to a specific audience.

For now, the rollout is limited to English-speaking users on desktops, but Google plans to expand the formats. Narrated slides are the first to launch, combining clear visuals with spoken summaries, helping visual learners engage with content more effectively instead of reading lengthy text.

Alongside the new feature, Google has redesigned the NotebookLM Studio interface. Users can now generate and store multiple outputs—Audio Overviews, Reports, Study Guides, or Mind Maps—all within a single notebook.

The update also allows users to interact with different tools simultaneously, such as listening to an AI podcast while reviewing a study guide, offering a more integrated and versatile learning experience.
