VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, firms are adopting cloud-first strategies that secure endpoints and prevent misconfigurations, rather than relying on legacy solutions.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

AI won’t replace coaches, but it will replace coaching without outcomes

Many coaches believe AI could never replace the human touch. They pride themselves on emotional intelligence — their empathy, intuition, and ability to read between the lines. They consider these traits irreplaceable. But that belief could be costing them their business.

The reason AI poses a real threat to coaching isn’t because machines are becoming more human. It’s because they’re becoming more effective. And clients aren’t hiring coaches for human connection — they’re hiring them for outcomes.

People seek coaches to overcome challenges, make decisions, or experience a transformation. They want results — and they want them as quickly and painlessly as possible. If AI can deliver those results faster and more conveniently, many clients will choose it without hesitation.

So what should coaches do? They shouldn’t ignore AI, fear it, or dismiss it as a passing fad. Instead, they should learn how to integrate it. Live, one-to-one sessions still matter. They provide the deepest insights and most lasting impact. But coaching must now extend beyond the session.

Coaching must be supported by systems that make success inevitable — and AI is the key to building those systems. Here lies a fundamental disconnect: coaches often believe their value lies in personal connections.

Clients, on the other hand, value results. The gap is where AI is stepping in — and where forward-thinking coaches are stepping up. Currently, most coaches are trapped in a model that trades time for money. More sessions, they assume, equals more transformation.

However, this model doesn’t scale. Many are burning out trying to serve everyone personally. Meanwhile, the most strategic among them are turning their coaching into scalable assets: digital products, automated workflows, and AI-trained tools that do their job around the clock.

They’re not being replaced by AI. They’re being amplified by it. These coaches are packaging their methods into online courses that clients can revisit between sessions. They’re building tools that track client progress automatically, offering midnight reassurance when doubts creep in.

They’re even training AI on their own frameworks, allowing clients to access support informed by the coach’s actual thinking — not generic chatbot responses. None of this is science fiction. It’s already happening.

AI can be trained on your transcripts, methodologies, and session notes. It can conduct initial assessments and reinforce your teachings between meetings. Your clients receive consistent, on-demand support — and you free up time for the deep, human work only you can do.
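
As a concrete illustration of that idea, here is a minimal sketch of grounding an assistant in a coach’s own material, assuming the OpenAI Python SDK. The file name, model choice, and prompt wording are all hypothetical, and strictly speaking this is prompt-grounding rather than model training; a production setup would likely use retrieval over many documents or fine-tuning instead of one flat file:

```python
# Minimal sketch: ground an assistant in a coach's own frameworks.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# "frameworks.txt" is a hypothetical export of the coach's methodology notes.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load the coach's methodology so replies reflect their actual thinking,
# not generic chatbot advice.
frameworks = Path("frameworks.txt").read_text()

def between_session_support(client_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a between-sessions coaching assistant. "
                        "Answer only using the coach's frameworks below, and "
                        "defer anything sensitive to the next live session.\n\n"
                        + frameworks},
            {"role": "user", "content": client_question},
        ],
    )
    return response.choices[0].message.content

print(between_session_support("I froze in today's board meeting. What now?"))
```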

Coaches who embrace this now will dominate their niches tomorrow. Even the content generated from coaching sessions is underutilised. Every call contains valuable insights — breakthroughs, reframes, moments of clarity.

Those insights shouldn’t stay confined to a single client. Strip away personal details, extract the universal truths, and turn them into content that attracts your next ideal client. AI can also help you uncover patterns across your coaching history.

Feed your notes into analysis tools, and you might find that 80% of your executive clients hit the same obstacle in month three. Or that a particular intervention consistently delivers rapid breakthroughs.
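
The mechanics of that kind of pattern-spotting are simple enough to sketch. In this illustrative example the session records and obstacle tags are invented; in practice an anonymisation and tagging pass (by hand or by an LLM) would come first:

```python
# Minimal sketch: surface recurring obstacles across coaching engagements.
# The records and tag vocabulary are invented for illustration; real notes
# would be anonymised and tagged before any analysis like this.
from collections import Counter

sessions = [
    {"client": "A", "month": 3, "obstacle": "delegation"},
    {"client": "B", "month": 3, "obstacle": "delegation"},
    {"client": "C", "month": 3, "obstacle": "imposter syndrome"},
    {"client": "D", "month": 3, "obstacle": "delegation"},
    {"client": "A", "month": 5, "obstacle": "succession planning"},
]

# Count which obstacles cluster in which month of an engagement.
by_month = Counter((s["month"], s["obstacle"]) for s in sessions)
for (month, obstacle), n in by_month.most_common():
    share = n / sum(1 for s in sessions if s["month"] == month)
    print(f"month {month}: {obstacle!r} in {share:.0%} of sessions")
```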

Such insights help you refine your practice and anticipate challenges before they arise — making your coaching more effective and less reactive. Then there’s the admin. Scheduling, invoicing, progress tracking — all of it can be automated.

Tools like Zapier or Make can automate these repetitive tasks, giving you back hours each week. That’s time better spent on transformation, not operations. Your clients don’t want tradition. They want transformation.

The coaches who succeed in this new era will be those who understand that human insight and AI systems are not in competition. They’re complementary. Choose one area where AI could support your work — a progress tracker, a digital guide, or a content workflow. Start there.

The future of coaching doesn’t belong to the ones who resist AI. It belongs to those who combine wisdom with scalability. Your enhanced coaching model is waiting to be built — and your future clients are waiting to experience it.

Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.

The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keeps pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Taiwan invests NT$50 million to train AI-ready professionals

Taiwan’s Ministry of Economic Affairs has announced the launch of the first phase of its 2025 AI talent training programme, set to begin in August.

The initiative aims to develop 152 skilled professionals capable of supporting businesses in adopting AI technologies across a wide range of sectors.

Chiu Chiu-hui, Director-General of the Industrial Development Administration, said the programme has attracted over 60 domestic and international companies that will contribute instructors and offer internship placements.

Notable participating firms include Microsoft Taiwan, ASE Group, and Acer. Students will be selected from leading universities, such as National Taipei University, National Taipei University of Technology, National Formosa University, and National Cheng Kung University.

Structured as a one-year curriculum, the training is divided into three four-month phases: theoretical foundations and current industry trends, followed by practical application, and finally on-site corporate internships.

Graduates of the programme are required to commit to working for one of the participating companies for a minimum of two years upon completion.

Participants will receive financial support throughout their training. A monthly stipend of NT$20,000 (approximately US$673) will be provided during the academic and practical stages, increasing to NT$30,000 during the internship period.

The government has earmarked NT$50 million for the first phase of the programme, and additional co-investment from private companies is being actively encouraged.

According to Chiu, some Taiwanese firms are struggling to find qualified talent to support their AI ambitions. In response, the ministry trained approximately 70,000 AI professionals last year and has set a slightly lower target of over 50,000 for 2025.

However, the long-term vision remains ambitious — to develop a total of 200,000 AI specialists within the next four years.

Registration for the second phase of the initiative is now open and will close in September. Training will expand to include universities and research institutions across Taiwan, with the next round of classes scheduled to start in October.

Industry leaders have praised the initiative as a timely response to the rapidly evolving technological landscape.

Lee Shu-hsia, Vice President of Human Resources at ASE Group, noted that AI is no longer confined to manufacturing but is increasingly being integrated into various functions such as procurement, human resources, and management.

This cross-departmental adoption is creating demand for AI-literate professionals who can bridge technical knowledge with operational needs.

Danny Chen, General Manager of Microsoft Taiwan’s public business group, added that the digital transformation underway in many companies has led to a significant increase in demand for AI-related talent.

Chen expressed optimism that the training programme will help companies not only recruit but also retain skilled personnel. The Ministry of Economic Affairs expects participation to grow in the coming years and plans to expand both the scope and scale of the training.

In addition to co-investment, the ministry is exploring partnerships with international institutions to further enhance the programme’s global relevance and ensure alignment with emerging industry standards.

While the government’s long-term goal is to future-proof Taiwan’s workforce, the immediate focus is on plugging the talent gap that threatens to slow industrial innovation.

By linking academic institutions with real-world corporate challenges, the programme aims to produce graduates who are not only technically proficient but also industry-ready from day one.

Observers say the initiative represents a proactive strategy in preparing Taiwan’s economy for the next wave of AI-driven transformation. With AI applications becoming increasingly prevalent in sectors ranging from logistics to administration, building a robust talent pipeline is now viewed as a national priority.

Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing from his work at Amazon and startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across multiple startups and established firms alike.

Still, original thinking remains a hurdle. A recent study reported by PsyPost found that human pairs often outperform AI tools in generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and more original output.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

That shift freed up time and budget for storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance. Some fear AI-generated content will eradicate originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real-time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.
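
For teams scripting this themselves rather than using a platform, here is a minimal sketch of the varied-starting-points idea, again assuming the OpenAI Python SDK; the personas, topic, and prompt wording are purely illustrative and not drawn from the guides cited above:

```python
# Minimal sketch: vary the starting point of each brainstorm pass to avoid
# homogeneous output, then feed the strongest ideas into a follow-up pass.
from openai import OpenAI

client = OpenAI()

# Illustrative personas; each one seeds a different angle on the same topic.
angles = ["a cost-cutting CFO", "a first-time user", "a sceptical regulator"]
topic = "launch ideas for a reusable-packaging service"

ideas = []
for angle in angles:
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Brainstorm 3 {topic}, from the view of {angle}."}],
    )
    ideas.append(r.choices[0].message.content)

# Human-in-the-loop step: a person picks the strongest ideas, then a second
# pass asks the model to combine and sharpen only those.
```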

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles. Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands build eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.

Survey finds developers value AI for ideas, not final answers

As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results is eroding. According to Stack Overflow’s 2025 Developer Survey, AI usage has increased to 84%, up from 76% last year. However, trust in its output has dropped, especially among experienced professionals.

The survey found that 46% of developers now lack trust in AI-generated answers.

That figure marks a sharp increase from 31% in 2024, suggesting growing scepticism despite higher adoption. Only 3.1% of developers report high trust in AI-generated answers.

Interestingly, trust varies with experience. Beginners were twice as likely to express high confidence in AI, with 6.1% reporting strong trust, compared with just 2.5% among seasoned developers. The results indicate a divide in how AI is perceived across the developer landscape.

Despite doubts, developers continue to use AI tools across various tasks. The vast majority – 78.5% – use AI on an infrequent basis, such as once a month. The pattern holds across experience levels, suggesting cautious and situational usage.

While trust is lacking, developers still see AI as a helpful starting point. Three in five respondents reported favourable views of AI tools overall, one in five viewed them negatively, and the remaining fifth were neutral.

However, that usefulness has limits. Developers were quick to seek human input when unsure about AI responses. Seventy-five percent said they would ask someone when they didn’t trust an AI-generated answer. Fifty-eight percent preferred human advice when they didn’t fully understand a solution.

Ethics and security were also areas where developers preferred human judgement. Again, 58% reported turning to colleagues or mentors to evaluate such risks. Such cases show a continued reliance on human expertise in high-stakes decisions.

Stack Overflow CEO Prashanth Chandrasekar acknowledged the limitations of AI in the development process. ‘AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,’ he said. He added that AI works best when paired with a ‘trusted human intelligence layer’.

The data also revealed that developers may not trust AI entirely but use it to support learning.

Forty-four percent of respondents admitted using AI tools to learn how to code, up from 37% last year.

A further 36% use it for work-related growth or career advancement.

The results highlight the role of AI as an educational companion rather than a coding authority.

It can help users understand concepts or generate basic examples, but most still want human review.

That distinction matters as teams consider how to integrate AI into production workflows.

Some developers are concerned that overreliance on AI could reduce the depth of their problem-solving skills. Others worry about hallucinations — AI-generated content that appears accurate but is misleading or incorrect. Such risks have led to a cautious, layered approach to using AI tools in real-life projects.

Stack Overflow’s findings align with broader industry trends in AI adoption and trust. Tech firms are exploring ways to integrate AI safely, with many prioritising transparency and human oversight. Chandrasekar believes developers are uniquely positioned to help shape the future of AI.

‘By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value,’ he said. ‘They’ll help build the AI technologies and products of tomorrow.’

As AI continues to expand into software development, one thing is clear: trust matters. Developers are open to using AI – but only when it supports, rather than replaces, human judgement. The challenge now is building systems that earn and maintain that trust.

UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck School of Medicine, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.

Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.
