Science removes concern from Microsoft quantum paper

The journal Science will replace an editorial expression of concern (EEoC) on a 2020 Microsoft quantum computing paper with a correction. The update notes incomplete explanations of device tuning and partial data disclosure, but no misconduct.

Co-author Charles Marcus welcomed the decision but lamented the four-year dispute.

Sergey Frolov, who raised concerns about data selection, disagrees with the correction and believes the paper should be retracted. The debate centres on Microsoft’s claims about topological superconductors using Majorana particles, a critical step for quantum computing.

Several Microsoft-backed papers on Majoranas have faced scrutiny, including retractions. Critics accuse Microsoft of cherry-picking data, while supporters stress the research’s complexity and pioneering nature.

The controversy reveals challenges in peer review and verifying claims in a competitive field.

Microsoft defends the integrity of its research and values open scientific debate. Critics warn that selective reporting risks misleading the community. The dispute highlights the difficulty of confirming breakthrough quantum computing claims in an emerging industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Online Safety Act under fire amid free speech and privacy concerns

The UK’s Online Safety Act, aimed at protecting children and eliminating illegal content online, is stirring a strong debate due to its stringent requirements on social media platforms and websites hosting adult content.

Critics argue that the act’s broad application could unintentionally suppress free speech, as highlighted by social media platform X.

X claims the act results in the censorship of lawful content, reflecting concerns shared by politicians, free-speech campaigners, and content creators.

Moreover, public unease is evident, with over 468,000 individuals signing a petition for the act’s repeal, citing privacy concerns over mandatory age checks requiring personal data on adult content sites.

Despite mounting criticism, the UK government is resolute in its commitment to the legislation. Technology Secretary Peter Kyle equates opposition to siding with online predators, emphasising child protection.

The government asserts that the act also mandates platforms to uphold freedom of expression alongside child safety obligations.

X criticises both the broad scope and the tight compliance timelines of the act, warning of pressure towards over-censorship, and calls for significant statutory revisions to protect personal freedoms while safeguarding children.

The government rebuffs claims that the Online Safety Act compromises free speech, with assurances that the law equally protects freedom of expression.

Meanwhile, Ofcom, the UK’s communications regulator, has opened investigations into whether several companies managing pornography sites are complying with the act, underscoring how rigorously it is being enforced.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prisons trial AI to forecast conflict and self‑harm risk

UK Justice Secretary Shabana Mahmood has rolled out an AI-driven violence prediction tool across prisons and probation services. One system evaluates inmates’ profiles, factoring in age, past behaviour, and gang ties, to flag those likely to become violent. Matching flagged prisoners to tighter supervision, or relocating them, is intended to reduce attacks on staff and fellow inmates.
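
For illustration only, here is a minimal sketch of how such profile-based flagging could work in principle; the features, weights, and threshold are hypothetical assumptions and bear no relation to the Ministry of Justice’s actual tool.

```python
# Illustrative only: hypothetical features and made-up weights, not the real system.
from dataclasses import dataclass

@dataclass
class InmateProfile:
    age: int
    prior_violent_incidents: int
    gang_affiliation: bool

def risk_score(p: InmateProfile) -> float:
    """Combine a few weighted signals into a 0-1 score (weights are invented)."""
    score = 0.0
    score += 0.3 if p.age < 25 else 0.0               # younger inmates weighted higher
    score += min(p.prior_violent_incidents, 5) * 0.1  # capped contribution from past incidents
    score += 0.2 if p.gang_affiliation else 0.0
    return min(score, 1.0)

def needs_review(p: InmateProfile, threshold: float = 0.5) -> bool:
    """Flag profiles above the threshold for tighter supervision or relocation."""
    return risk_score(p) >= threshold

print(needs_review(InmateProfile(age=22, prior_violent_incidents=3, gang_affiliation=True)))  # True
```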

Another feature actively scans content from seized mobile phones. AI algorithms sift through over 33,000 devices and 8.6 million messages, detecting coded language tied to contraband, violence, or escape plans. When suspicious content is flagged, staff receive alerts for preventive action.
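
Purely as a sketch, a keyword-based triage along the lines below could surface messages for staff review; the watch-list and alerting logic are invented for demonstration and are not drawn from the prison service’s system.

```python
# Illustrative keyword triage over seized-device messages; terms are hypothetical.
import re

WATCH_TERMS = {"package", "drop", "burner"}  # invented watch-list of coded terms

def flag_messages(messages: list[str]) -> list[str]:
    """Return messages containing any watched term, for staff review."""
    pattern = re.compile("|".join(re.escape(t) for t in WATCH_TERMS), re.IGNORECASE)
    return [m for m in messages if pattern.search(m)]

seized = ["meet at the usual spot", "the package comes over the wall tonight"]
for alert in flag_messages(seized):
    print("ALERT:", alert)
```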

Rising prison violence and self-harm underscore the urgency of such interventions. Assaults on staff recently reached over 10,500 a year, the highest on record, while self-harm incidents reached nearly 78,000. Overcrowding and drug infiltration have intensified operational challenges.

Analysts compare the approach to ‘pre‑crime’ models, drawing parallels with sci-fi narratives and raising concerns about civil liberties. Without robust governance, predictive tools may replicate biases or punish potential rather than actual behaviour. Transparency, independent audit, and appeals processes are essential to uphold inmate rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond this significant deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon plans to bring ads to Alexa+ chats

Amazon is exploring ways to insert ads into conversations with its AI assistant Alexa+, according to CEO Andy Jassy. Speaking during the company’s latest earnings call, he described the feature as a potential tool for product discovery and future revenue.

Alexa+ is Amazon’s upgraded digital assistant designed to support more natural, multi-step conversations using generative AI. It is already available to millions of users through Prime subscriptions or as a standalone service.

Jassy said longer interactions open the door for embedded advertising, although the approach has not yet been fully developed. Industry observers see this as part of a wider trend, with companies like Google and OpenAI also weighing ad-based business models.

Alexa+ has received mixed reviews so far, with delays in feature delivery and technical challenges like hallucinations raising concerns. Privacy advocates have warned that ad targeting within personal conversations may worry users, given the data involved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments have issued AI action plans, such as the Biden administration’s in the US, experts say they lack the strength to keep up. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Physicists remain split on what quantum theory really means

One hundred years after its birth, quantum mechanics continues to baffle physicists, despite underpinning many of today’s technologies. While its equations accurately describe the behaviour of subatomic particles, experts remain deeply divided on what those equations actually reveal about reality.

A recent survey by Nature, involving more than 1,100 physicists, highlighted the lack of consensus within the field. Just over a third supported the Copenhagen interpretation, which claims a particle only assumes a definite state once it is observed.

Others favour alternatives like the many-worlds interpretation, which suggests every possible outcome exists in parallel universes rather than collapsing into a single reality. The concept challenges traditional notions of observation, space and causality.

Physicists also remain split on whether there is a boundary between classical and quantum systems. Only a quarter expressed confidence in their chosen interpretation, with most believing a better theory will eventually replace today’s understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.
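
As a rough worked example, the headline cap can be expressed as a simple maximum, assuming, as the Act provides for its most serious violations, that the higher of the two figures applies; actual fines depend on the tier of breach and on national enforcement practice.

```python
def eu_ai_act_fine_cap_eur(global_turnover_eur: float) -> float:
    """Headline cap: EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A firm with EUR 2 bn global turnover: 7% is EUR 140 million, which exceeds EUR 35 million.
print(f"{eu_ai_act_fine_cap_eur(2_000_000_000):,.0f}")  # 140,000,000
```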

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
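
As a purely illustrative sketch of what estimating age from behavioural signals can look like, the toy scorer below combines a few made-up features with a logistic function; the feature names, weights, and threshold are assumptions and have nothing to do with Google’s actual model.

```python
# Toy behavioural age estimator: every feature and weight here is invented.
import math

def likely_under_18(signals: dict[str, float]) -> bool:
    """Score hypothetical behavioural signals and flag the account as likely a minor's."""
    weights = {
        "share_of_gaming_watchtime": 2.0,    # fraction of video watch time on gaming content
        "share_of_homework_searches": 1.5,   # fraction of searches that resemble schoolwork
        "account_age_years": -0.4,           # long-standing accounts lower the score
    }
    z = sum(weights[k] * signals.get(k, 0.0) for k in weights) - 1.0  # fixed bias term
    return 1 / (1 + math.exp(-z)) > 0.5     # logistic probability above 0.5 triggers the flag

print(likely_under_18({"share_of_gaming_watchtime": 0.7,
                       "share_of_homework_searches": 0.5,
                       "account_age_years": 1.0}))  # True in this toy example
```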

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI won’t replace coaches, but it will replace coaching without outcomes

Many coaches believe AI could never replace the human touch. They pride themselves on emotional intelligence — their empathy, intuition, and ability to read between the lines. They consider these traits irreplaceable. But that belief could be costing them their business.

The reason AI poses a real threat to coaching isn’t because machines are becoming more human. It’s because they’re becoming more effective. And clients aren’t hiring coaches for human connection — they’re hiring them for outcomes.

People seek coaches to overcome challenges, make decisions, or experience a transformation. They want results — and they want them as quickly and painlessly as possible. If AI can deliver those results faster and more conveniently, many clients will choose it without hesitation.

So what should coaches do? They shouldn’t ignore AI, fear it, or dismiss it as a passing fad. Instead, they should learn how to integrate it. Live, one-to-one sessions still matter. They provide the deepest insights and most lasting impact. But coaching must now extend beyond the session.

Coaching must be supported by systems that make success inevitable — and AI is the key to building those systems. Here lies a fundamental disconnect: coaches often believe their value lies in personal connections.

Clients, on the other hand, value results. The gap is where AI is stepping in — and where forward-thinking coaches are stepping up. Currently, most coaches are trapped in a model that trades time for money. More sessions, they assume, equals more transformation.

However, this model doesn’t scale. Many are burning out trying to serve everyone personally. Meanwhile, the most strategic among them are turning their coaching into scalable assets: digital products, automated workflows, and AI-trained tools that do their job around the clock.

They’re not being replaced by AI. They’re being amplified by it. These coaches are packaging their methods into online courses that clients can revisit between sessions. They’re building tools that track client progress automatically, offering midnight reassurance when doubts creep in.

They’re even training AI on their own frameworks, allowing clients to access support informed by the coach’s actual thinking — not generic chatbot responses. This business model isn’t science fiction. It’s already happening.

AI can be trained on your transcripts, methodologies, and session notes. It can conduct initial assessments and reinforce your teachings between meetings. Your clients receive consistent, on-demand support — and you free up time for the deep, human work only you can do.
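
A hedged sketch of the simplest version of this idea: ground on-demand answers in the coach’s own material by ranking note snippets against a client’s question. A real setup would likely put embeddings and a language model on top; everything below is an invented example.

```python
def best_snippet(question: str, snippets: list[str]) -> str:
    """Pick the note snippet sharing the most words with the question (crude retrieval)."""
    q_words = set(question.lower().split())
    return max(snippets, key=lambda s: len(q_words & set(s.lower().split())))

coach_notes = [
    "When doubt creeps in, revisit the wins you logged in week one.",
    "Pricing objections usually mean the offer's outcome isn't stated clearly enough.",
]
print(best_snippet("client asks how to respond to pricing objections", coach_notes))
```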

Coaches who embrace this now will dominate their niches tomorrow. Even the content generated from coaching sessions is underutilised. Every call contains valuable insights — breakthroughs, reframes, moments of clarity.

The insights shouldn’t stay confined to just one client. Strip away personal details, extract the universal truths, and turn those insights into content that attracts your next ideal client. AI can also help you uncover patterns across your coaching history.

Feed your notes into analysis tools, and you might find that 80% of your executive clients hit the same obstacle in month three. Or that a particular intervention consistently delivers rapid breakthroughs.
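
For instance, a few lines of analysis over tagged session notes can already surface that kind of pattern; the records and obstacle labels below are invented purely for illustration.

```python
from collections import Counter

# (client_id, month_of_engagement, obstacle_tag) drawn from hypothetical session notes
records = [
    ("a", 3, "delegation"), ("b", 3, "delegation"), ("c", 3, "imposter syndrome"),
    ("a", 1, "goal clarity"), ("b", 5, "delegation"),
]

# Tally which obstacles recur in which month of an engagement.
by_month = Counter((month, tag) for _, month, tag in records)
for (month, tag), count in by_month.most_common(3):
    print(f"month {month}: '{tag}' came up {count} time(s)")
```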

These insights help you refine your practice and anticipate challenges before they arise — making your coaching more effective and less reactive. Then there’s the admin. Scheduling, invoicing, progress tracking — all of it can be automated.

Tools like Zapier or Make can automate such repetitive tasks, giving you back hours each week. That’s time better spent on transformation, not operations. Your clients don’t want tradition. They want transformation.

The coaches who succeed in this new era will be those who understand that human insight and AI systems are not in competition. They’re complementary. Choose one area where AI could support your work — a progress tracker, a digital guide, or a content workflow. Start there.

The future of coaching doesn’t belong to the ones who resist AI. It belongs to those who combine wisdom with scalability. Your enhanced coaching model is waiting to be built — and your future clients are waiting to experience it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!