OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for numerous OpenClaw users, citing violations of its terms of service. Developers had used OpenClaw’s OAuth plugin to access subsidised Gemini model tokens, triggering backend strain and service degradation.

OpenClaw, launched in November 2025, gained more than 219,000 GitHub stars by enabling local AI agents for tasks such as email management and web browsing. Users authenticated through Antigravity to access advanced Gemini models at reduced cost, bypassing official distribution channels.

Google said the third-party integration powered unauthorised products on Antigravity infrastructure, generating usage patterns that were flagged as malicious. In February 2026, AI Ultra subscribers reported 403 errors and account restrictions, with some citing temporary disruptions to Gmail and Workspace.

Varun Mohan of Google DeepMind said the surge had degraded service quality and that enforcement prioritised legitimate users. Limited reinstatement options were offered to those unaware of the violations, with capacity constraints cited as the reason for the broad crackdown.

The move follows similar restrictions by Anthropic on third-party OAuth usage. Developers are shifting to alternative forks, as debate intensifies over open tooling, platform control, and the risks of agentic AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IQM puts Finland on Europe’s quantum computing map

Finland is emerging as a key hub in Europe’s quantum computing landscape as startup IQM prepares to become one of the continent’s first publicly listed quantum firms.

The company is developing full-stack, open-architecture quantum systems designed for on-premise deployment or cloud access. It aims to advance the practical use of quantum computing across research and industry.

Founded in 2018, IQM has already delivered 21 quantum systems to 13 customers, highlighting growing European interest in commercial quantum technologies.

Analysts note that while challenges remain, meaningful breakthroughs are now occurring, signalling that quantum computing is shifting from purely experimental science to an operational industry.

IQM’s technology could support advances in medicine, science, and computational research, enabling solutions to complex problems far beyond the reach of classical computers.

The firm exemplifies Europe’s ambition to build quantum capabilities independently of larger players in the US and China, positioning Finland as a strategic hub for next-generation computing.

The company’s work aligns with broader European efforts to foster innovation in quantum technologies.

By combining domestic expertise with open-access systems, IQM demonstrates how Finland is contributing to the continent’s emerging quantum ecosystem, bridging academic research and industrial application.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk rather than a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions of real people, which undermine the privacy protections individuals are owed under frameworks such as the General Data Protection Regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight, rather than deployed rapidly without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. Underrepresented age groups, regions or social classes can distort outcomes and create systematic errors.

Turkey’s national health database e-Nabiz provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second set of eyes in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS warns of AI-powered cybercrime

Amazon Web Services has revealed that a Russian-speaking threat actor used commercial AI tools to compromise more than 600 FortiGate firewalls across 55 countries. AWS described the campaign as an AI-powered assembly line for cybercrime.

According to AWS, the attacker relied on exposed management ports and weak single-factor credentials rather than exploiting software vulnerabilities. The campaign targeted FortiGate devices globally and focused on harvesting credentials and configuration data.

AWS said the group, believed to be Russian-speaking, appeared unsophisticated but achieved scale through AI-assisted mass scanning and automation. When it encountered stronger defences, the attackers reportedly shifted to easier targets rather than persisting.

The company advised organisations using FortiGate appliances to secure management interfaces, change default credentials and enforce complex passwords. Amazon said it was not compromised during the campaign.
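The hardening steps AWS recommends can be illustrated with a brief FortiOS-style CLI sketch. This is a hypothetical fragment for illustration only, not taken from the AWS advisory: the interface name "wan1" and the specific password-policy values are assumptions to be adapted to each environment.

```
# Hypothetical FortiOS CLI sketch (illustrative, not from the AWS advisory).
# 1) Remove HTTPS/SSH management access from the internet-facing interface,
#    so the management plane is no longer reachable from exposed ports.
config system interface
    edit "wan1"                      # assumed WAN interface name
        set allowaccess ping         # management protocols removed from WAN
    next
end

# 2) Enforce a password policy so weak single-factor credentials
#    cannot be set on local admin accounts.
config system password-policy
    set status enable
    set minimum-length 12
    set min-upper-case-letter 1
    set min-non-alphanumeric 1
end
```

Pairing these device-side controls with multi-factor authentication and non-default admin accounts addresses the two weaknesses AWS identified: exposed management ports and weak single-factor credentials.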

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI model revises proof claim

OpenAI has published its attempts to solve all 10 problems in the First Proof challenge, a research-level maths test designed to assess whether AI can produce checkable, domain-specific proofs. The problems were created by leading experts and require extended reasoning rather than short answers.

The company said at least five of its proof attempts are likely correct following expert feedback, although one submission it had previously presented with confidence has now been judged incorrect. Several other attempts remain under review as specialists continue to assess the arguments.

According to OpenAI, the evaluation involved limited human supervision, with researchers sometimes prompting the model to refine or clarify reasoning. The process included exchanges between an internal model and ChatGPT for verification, formatting and style adjustments.

OpenAI described frontier research challenges, such as First Proof, as crucial for testing next-generation AI systems. The company said it plans to deepen its engagement with academics to develop more rigorous evaluation frameworks for research-grade reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

University of Bristol opens free online course on AI

The University of Bristol has launched a free online course called AI Fundamentals, designed to increase public understanding of AI. Many people use AI regularly but feel unsure about how to engage with it effectively, creating a gap that the course aims to address.

AI Fundamentals explores the technology’s complexities, societal impact, and environmental implications. The curriculum emphasises critical thinking about AI, its risks, and its potential, making it relevant for both enthusiasts and the curious general public.

The course runs entirely online over four weeks, requiring about 3 hours of self-paced work per week. No coding or advanced mathematics is needed, allowing learners from all backgrounds to participate and explore AI in a digestible format.

Led by Professors Genevieve Liveley and Seth Bullock, the course draws on expertise across fields including computer science, law, medicine, humanities, and neuroscience. Supported by a £50,000 alumni donation and UKRI funding, it is now open for enrolment via FutureLearn.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pension savers increasingly rely on AI for retirement planning

AI is becoming a preferred tool for those beginning their retirement planning. Data on searches and website traffic suggests AI is meeting early-stage needs for pension guidance.

Platforms offering general financial information, such as MoneyHelper, have seen traffic fall by 10% over the past six months. At the same time, AI-generated overviews of pension content are on the rise.

AI tools are mainly used to sense-check retirement decisions, model ‘what-if’ scenarios, simplify pension jargon, and assist with tax planning. Users view AI as a thinking partner rather than a replacement for regulated advice.

Despite the rise of AI, bespoke advisory services, such as Pension Wise, have remained relevant, providing personalised guidance that AI cannot fully replace. PensionBee highlights that AI is helpful for basic guidance, but such services remain essential for more complex planning.

Experts warn that the retirement sector faces a challenge in maintaining trust and relevance as AI continues to improve. Savers increasingly rely on technology for guidance, signalling a shift in how pensions are researched and managed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Majority of college students use or must use AI in classwork, but institutions lag in AI education

Research from Honorlock indicates a substantial shift in how students engage with generative AI in higher education: 56% of surveyed US college students report being required to use AI tools in coursework, and 63% use AI for at least some assignments.

The most common uses include grammar and editing support (59%) and text generation (57%), with students also using AI to brainstorm ideas and clarify concepts.

Despite widespread AI use, there remains a significant gap in formal AI education: only 31% of students are aware of AI-focused courses at their institutions, and fewer than 20% have taken them.

Students themselves often learn AI skills independently rather than through a structured curriculum, potentially leaving them unprepared for workplaces where AI fluency is expected.

The survey also highlights academic integrity risks: more than one-third of students admitted to using AI assistance on quizzes or exams, underlining the need for clear AI use policies, responsible-use training and ethical frameworks within higher education.

Researchers and advocates argue that colleges should integrate AI literacy, including ethics, governance, real-world applications and responsible use, into coursework to better equip graduates for AI-enabled careers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfake content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!