Physicists remain split on what quantum theory really means

One hundred years after its birth, quantum mechanics continues to baffle physicists, despite underpinning many of today’s technologies. While its equations accurately describe the behaviour of subatomic particles, experts remain deeply divided on what those equations actually reveal about reality.

A recent survey by Nature, involving more than 1,100 physicists, highlighted the lack of consensus within the field. Just over a third supported the Copenhagen interpretation, which claims a particle only assumes a definite state once it is observed.

Others favour alternatives such as the many-worlds interpretation, which holds that every possible outcome plays out in a branching parallel universe rather than collapsing into a single reality. The concept challenges traditional notions of observation, space and causality.

Physicists also remain split on whether there is a boundary between classical and quantum systems. Only a quarter expressed confidence in their chosen interpretation, with most believing a better theory will eventually replace today’s understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.
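The penalty ceiling follows a "greater of" rule, which a minimal sketch can make concrete (the function name and figures are illustrative; the €35 million and 7% thresholds come from the article, and actual fines are set case by case by national regulators):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine for the most serious violations:
    the greater of EUR 35 million or 7% of worldwide annual turnover.
    Illustrative calculation only, not legal guidance."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A firm with EUR 1 billion turnover: 7% (EUR 70m) exceeds the flat cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# A firm with EUR 10 million turnover: the EUR 35m figure is the ceiling.
print(max_fine_eur(10_000_000))     # 35000000
```

As the second call shows, the flat amount dominates for smaller companies, which is why the article notes that smaller firms may in practice face reduced penalties under separate provisions.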

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.


Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
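Google has not published its model, but the idea of combining behavioural signals into a probabilistic age estimate can be sketched with a simple logistic scorer. Everything below is a hypothetical illustration: the feature names, weights, and threshold are invented for the example and are not Google's.

```python
import math

# Assumed feature weights for illustration only -- not Google's model.
WEIGHTS = {
    "school_related_searches": 1.4,   # frequency signal, scaled 0..1
    "gaming_video_share": 0.9,        # share of watch time, 0..1
    "account_age_years": -0.5,        # older accounts lower the score
}
BIAS = -1.0

def likely_minor_probability(signals: dict) -> float:
    """Combine behavioural signals into a logistic probability
    that the user is under 18."""
    z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_user(signals: dict, threshold: float = 0.5) -> bool:
    """Flag the account for minor protections when the estimated
    probability crosses the threshold."""
    return likely_minor_probability(signals) >= threshold
```

A flagged account would then trigger the restrictions described above, with the ID or selfie verification route acting as the correction mechanism for false positives.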

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.


AI won’t replace coaches, but it will replace coaching without outcomes

Many coaches believe AI could never replace the human touch. They pride themselves on emotional intelligence — their empathy, intuition, and ability to read between the lines. They consider these traits irreplaceable. But that belief could be costing them their business.

The reason AI poses a real threat to coaching isn’t because machines are becoming more human. It’s because they’re becoming more effective. And clients aren’t hiring coaches for human connection — they’re hiring them for outcomes.

People seek coaches to overcome challenges, make decisions, or experience a transformation. They want results — and they want them as quickly and painlessly as possible. If AI can deliver those results faster and more conveniently, many clients will choose it without hesitation.

So what should coaches do? They shouldn’t ignore AI, fear it, or dismiss it as a passing fad. Instead, they should learn how to integrate it. Live, one-to-one sessions still matter. They provide the deepest insights and most lasting impact. But coaching must now extend beyond the session.

Coaching must be supported by systems that make success inevitable — and AI is the key to building those systems. Here lies a fundamental disconnect: coaches often believe their value lies in personal connections.

Clients, on the other hand, value results. That gap is where AI is stepping in — and where forward-thinking coaches are stepping up. Currently, most coaches are trapped in a model that trades time for money. More sessions, they assume, equal more transformation.

However, this model doesn’t scale. Many are burning out trying to serve everyone personally. Meanwhile, the most strategic among them are turning their coaching into scalable assets: digital products, automated workflows, and AI-trained tools that do their job around the clock.

They’re not being replaced by AI. They’re being amplified by it. The coaches are packaging their methods into online courses that clients can revisit between sessions. They’re building tools that track client progress automatically, offering midnight reassurance when doubts creep in.

The coaches are even training AI on their own frameworks, allowing clients to access support informed by the coach’s actual thinking — not generic chatbot responses. The business model in question isn’t science fiction. It’s already happening.

AI can be trained on your transcripts, methodologies, and session notes. It can conduct initial assessments and reinforce your teachings between meetings. Your clients receive consistent, on-demand support — and you free up time for the deep, human work only you can do.

Coaches who embrace this now will dominate their niches tomorrow. Even the content generated from coaching sessions is underutilised. Every call contains valuable insights — breakthroughs, reframes, moments of clarity.

The insights shouldn’t stay confined to just one client. Strip away personal details, extract the universal truths, and turn those insights into content that attracts your next ideal client. AI can also help you uncover patterns across your coaching history.

Feed your notes into analysis tools, and you might find that 80% of your executive clients hit the same obstacle in month three. Or that a particular intervention consistently delivers rapid breakthroughs.
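That kind of pattern-mining is straightforward once session notes carry even minimal structure. A small sketch, with invented session records and obstacle labels purely for illustration:

```python
from collections import Counter

# Invented example data: each coaching session tagged with the month of
# the engagement and the main obstacle that surfaced.
sessions = [
    {"client": "A", "month": 3, "obstacle": "delegation"},
    {"client": "B", "month": 3, "obstacle": "delegation"},
    {"client": "C", "month": 3, "obstacle": "imposter syndrome"},
    {"client": "A", "month": 5, "obstacle": "burnout"},
]

def obstacles_by_month(records, month):
    """Count how often each obstacle appears in a given engagement month."""
    return Counter(r["obstacle"] for r in records if r["month"] == month)

print(obstacles_by_month(sessions, 3).most_common(1))
# [('delegation', 2)]
```

Scaled up across a real client history, the same counting logic is what surfaces findings like "most executive clients hit the same obstacle in month three".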

The insights help you refine your practice and anticipate challenges before they arise — making your coaching more effective and less reactive. Then there’s the admin. Scheduling, invoicing, progress tracking — all of it can be automated.

Tools like Zapier or Make can automate such repetitive tasks, giving you back hours each week. That’s time better spent on transformation, not operations. Your clients don’t want tradition. They want transformation.

The coaches who succeed in this new era will be those who understand that human insight and AI systems are not in competition. They’re complementary. Choose one area where AI could support your work — a progress tracker, a digital guide, or a content workflow. Start there.

The future of coaching doesn’t belong to the ones who resist AI. It belongs to those who combine wisdom with scalability. Your enhanced coaching model is waiting to be built — and your future clients are waiting to experience it.


Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.


The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keeps pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.


Scientists use quantum AI to solve chip design challenge

Scientists in Australia have used quantum machine learning to model semiconductor properties more accurately, potentially transforming how microchips are designed and manufactured.

The hybrid technique combines AI with quantum computing to solve a long-standing challenge in chip production: predicting electrical resistance where metal meets semiconductor.

The Australian researchers developed a new algorithm, the Quantum Kernel-Aligned Regressor (QKAR), which uses quantum methods to detect complex patterns in small, noisy datasets, a common issue in semiconductor research.

By improving how engineers predict Ohmic contact resistance, the approach could lead to faster, more energy-efficient chips. It also offers real-world compatibility, meaning it can eventually run on existing quantum machines as the hardware matures.
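The paper's quantum algorithm is not public in detail here, but the classical idea it builds on — mapping a handful of fabrication parameters into a richer feature space via a kernel and regressing resistance there — can be sketched with ordinary kernel ridge regression. This is an assumption-laden classical analogue, not the QKAR algorithm itself, and the training data below is synthetic:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(X_train, y_train, X_test, lam=1e-3):
    """Kernel ridge regression: solve (K + lam*I) alpha = y,
    then predict k(x, X_train) @ alpha."""
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 3))   # e.g. anneal temperature, doping, thickness
y = np.sin(X.sum(axis=1))       # synthetic stand-in for contact resistance
pred = fit_predict(X, y, X[:5])
print(np.round(pred - y[:5], 3))  # residuals on seen points stay small
```

In the quantum variant, the kernel matrix is computed from state overlaps on quantum hardware and its parameters are aligned to the data, which is where the claimed advantage on small, noisy datasets comes in.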

The findings highlight the growing role of quantum AI in hardware design and suggest the method could be adopted in commercial chip production in the near future.


Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing from his work at Amazon and startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across multiple startups and established firms alike.

Still, original thinking remains a hurdle. A recent study covered by PsyPost found human pairs often outperform AI tools in generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and originality.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

The strategy allowed more time and budget to focus on storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance. Some fear AI-generated content will erode originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real-time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles. Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands build eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.


UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck Medical School, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.


ECOSOC adopts CSTD draft resolution on WSIS outcomes implementation

On 29 July 2025, the UN Economic and Social Council (ECOSOC) adopted a resolution titled ‘Assessment of the progress made in the implementation of and follow-up to the outcomes of the World Summit on the Information Society’.

Prepared by the Commission on Science and Technology for Development (CSTD) and adopted as a draft at the Commission’s 28th meeting in April 2025, the resolution outlines several vital recommendations for possible outcomes of the ongoing process dedicated to the review of 20 years of implementation of outcomes of the World Summit on the Information Society (the so-called WSIS+20 review process):

  • A recommendation is that, as an outcome of the WSIS+20 process, commitments outlined in the Global Digital Compact (GDC) are integrated into the work of WSIS action lines by the action lines facilitators (para 131).
  • A recommendation regarding strengthening the UN Group on the Information Society (UNGIS), by including further UN offices with responsibilities in matters of digital cooperation, as well as multistakeholder advice on its work, as appropriate (para 132).
  • A recommendation that UNGIS is tasked with developing a joint implementation roadmap, to be presented to CSTD’s 29th session, to integrate GDC commitments into the WSIS architecture, ensuring a unified approach to digital cooperation that avoids duplication and maximises resource efficiency (para 133).
  • A call for strengthening the CSTD in its role as an intergovernmental platform for discussions on the impact and opportunities of technologies to achieve sustainable development goals (para 134).

The resolution also emphasises the role of CSTD in the GDC’s follow-up and review process and the need to ensure the strongest possible convergences between the implementation of WSIS outcomes and the Compact to avoid duplication and enhance synergies, efficiencies, and impact (para 135).

ECOSOC adopted the resolution without discussion and by consensus. When discussed at CSTD in April, the draft resolution was adopted by a vote of 33 in favour and one against; the USA, which voted against, explained its vote.


Trust in human doctors remains despite AI advancements

OpenAI CEO Sam Altman has stated that AI, especially ChatGPT, now surpasses many doctors in diagnosing illnesses. However, he pointed out that individuals still prefer human doctors because of the trust and emotional connection they provide.

Altman also expressed concerns about the potential misuse of AI, such as using voice cloning for fraud and identity theft. He emphasised the need for stronger privacy protections for sensitive conversations with AI tools like ChatGPT, noting that current standards are inadequate and should align with those for therapists.
