Strict ban on crypto references introduced by OpenClaw

OpenClaw has introduced a firm community rule prohibiting any reference to Bitcoin or other cryptocurrencies on its Discord server, according to its creator, Peter Steinberger.

Enforcement drew attention after a user was removed for mentioning Bitcoin block height as a timing method in a benchmark, with the developer later offering to restore access.

The policy follows a rebrand scare in which scammers hijacked old accounts to promote a fake Solana token. The token's value spiked, then plunged after Steinberger denied any involvement and warned that no official token would be issued.

Rapid growth of the open-source project, which has attracted a large developer base within weeks of launch, contrasts with wider industry momentum linking AI agents and digital assets.

Leaders such as Jeremy Allaire of Circle argue stablecoins could become default payment rails for autonomous software, while Coinbase is already rolling out infrastructure enabling agents to transact on-chain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake Google Forms phishing campaign targets job seekers

A phishing campaign is targeting job seekers with fake Google Forms pages designed to harvest account credentials. Attackers are using a spoofed domain, forms.google.ss-o[.]com, to mimic the legitimate Google Forms service and trick victims into signing in.

The fraudulent pages advertise a Customer Support Executive role and prompt applicants to enter personal details before clicking a ‘Sign in’ button. Victims are then redirected to id-v4[.]com/generation.php, a domain previously linked to credential harvesting campaigns.

Researchers identified the operation as part of a broader wave of job-themed phishing attacks. The attackers used a script called generation_form.php to create personalised tracking links and implemented redirects to evade security analysis by sending suspicious visitors to local Google search pages.

Security experts warn that the campaign relies on domain impersonation techniques, including the use of ‘ss-o’ to resemble ‘single sign-on’. The fake site reproduces Google branding elements and standard disclaimers to increase credibility.
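The 'ss-o' trick exploits how people read hostnames: the word 'google' appears as a subdomain label, while the domain that actually receives the credentials is ss-o.com. A minimal Python sketch of that distinction is below (the two-label heuristic is an assumption for illustration; production code should consult the Public Suffix List, for example via the tldextract package, and the malicious domain from the campaign is shown re-fanged):

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naively treat the last two labels of the hostname as the
    registrable domain. This heuristic is a sketch only: it is wrong
    for multi-label suffixes such as .co.uk, where the Public Suffix
    List is needed."""
    host = urlparse(url).hostname or ""
    labels = host.lower().split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# The phishing host embeds "google" as a subdomain label, but the
# domain actually serving the page is ss-o.com, not google.com.
print(registrable_domain("https://forms.google.ss-o.com/login"))
print(registrable_domain("https://forms.google.com/d/e/example"))
```

Everything left of the registrable domain is controlled by whoever owns that domain, which is why branding in a subdomain proves nothing about the site's operator.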

Users are advised to avoid clicking unsolicited job links, verify opportunities through official channels, and enable multi-factor authentication. Password managers and real-time anti-malware tools can also reduce exposure to credential theft.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within forty-eight hours instead of allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines amounting to ten percent of their global revenue or risk having their services blocked in the UK.

The move reflects ministers’ commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government now intends victims to report an image once instead of repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools.
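The hash-matching mentioned above can be sketched as a fingerprint lookup: once an image is flagged, its fingerprint joins a blocklist checked at upload time. The sketch below uses an exact cryptographic hash for simplicity (function and variable names are illustrative); real systems use perceptual hashes such as PDQ or PhotoDNA, which also match resized or re-encoded copies:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match fingerprint. Production systems use perceptual
    # hashes so that trivially altered copies still match.
    return hashlib.sha256(image_bytes).hexdigest()

blocklist: set[str] = set()

def flag(image_bytes: bytes) -> None:
    # Called once when a victim reports the image.
    blocklist.add(fingerprint(image_bytes))

def is_blocked(image_bytes: bytes) -> bool:
    # Called on every future upload, across participating platforms.
    return fingerprint(image_bytes) in blocklist

flag(b"reported-image-bytes")
print(is_blocked(b"reported-image-bytes"))  # blocked on re-upload
print(is_blocked(b"other-image-bytes"))     # unrelated content passes
```

Sharing only the fingerprints, not the images themselves, is what lets a single report propagate across services without redistributing the abusive content.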

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reload launches Epic to bring shared memory and structure to AI agents

Founders of the Reload platform say AI is moving from simple automation toward something closer to teamwork.

Newton Asare and Kiran Das noticed that AI agents were completing tasks normally handled by employees, which pushed them to design a system that treats digital workers as part of a company’s structure instead of disposable tools.

Their platform, Reload, offers a way for organisations to manage these agents across departments, assign responsibilities and monitor performance. The firm has secured $2.275 million in new funding, led by Anthemis, with several other investors joining the round.

The shift toward agent-driven development exposed a recurring limitation. Most agents retain only short-term memory, which means they often lose context about a product or forget why a task matters.

Reload’s answer is Epic, a new product built on its platform that acts as an architect alongside coding agents. Epic defines requirements and constraints at the start of a project, then continuously preserves the shared understanding that agents need as software evolves.

Epic integrates with popular AI-assisted code editors such as Cursor and Windsurf, allowing developers to keep a consistent system memory without changing their workflow.

The tool generates key project artefacts from the outset, including data models and technical decisions, then carries them forward even when teams switch agents. It creates a single source of truth so that engineers and digital workers develop against the same structure.

Competing systems such as LangChain and CrewAI also offer support for managing agents, but Reload argues that Epic’s ability to maintain project-level context sets it apart.

Asare and Das, who already built and sold a previous company together, plan to use the fresh capital to grow their team and expand the infrastructure needed for a future in which human workers manage AI employees instead of the other way around.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reliance and OpenAI bring AI search to JioHotstar

OpenAI has joined forces with Reliance Industries to introduce conversational search into JioHotstar.

The integration uses OpenAI’s API so viewers can look for films, series, and live sports through multilingual text or voice prompts, receiving recommendations shaped by their viewing patterns instead of basic keyword results.

The collaboration extends beyond the platform itself, with plans to surface JioHotstar suggestions directly inside ChatGPT.

The approach presents a two-way discovery layer that links entertainment browsing with conversational queries, pointing toward a new model for how audiences engage with streaming catalogues.

OpenAI is strengthening its footprint in India, where more than 100 million people now use ChatGPT weekly. The company intends to open offices in Mumbai and Bengaluru to support the expansion, adding to its site in New Delhi.

The partnership was announced at the India AI Impact Summit, where Sam Altman appeared alongside industry figures such as Dario Amodei and Sundar Pichai.

The move aligns with a broader ‘OpenAI for India’ strategy that includes work on data centres with the Tata Group and further collaborations with companies such as Pine Labs, Eternal, and MakeMyTrip.

Executives from both sides said conversational interfaces will reshape how people find and follow programming, helping users navigate entertainment in a more natural way instead of relying on conventional menus.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agent autonomy rises as users gain trust in Anthropic’s Claude Code

A new study from Anthropic offers an early picture of how people allow AI agents to work independently in real conditions.

By examining millions of interactions across its public API and its coding agent Claude Code, the company explored how long agents operate without supervision and how users change their behaviour as they gain experience.

The analysis shows a sharp rise in the longest autonomous sessions, with top users permitting the agent to work for more than forty minutes instead of cutting tasks short.

Experienced users appear more comfortable letting the AI agent proceed on its own, shifting towards auto-approve instead of checking each action.

At the same time, these users interrupt more often when something seems unusual, which suggests that trust develops alongside a more refined sense of when oversight is required.

The agent also shows its own form of caution: as tasks become more complex, it pauses to ask for clarification more often than humans interrupt it.

The research identifies a broad spread of domains that rely on agents, with software engineering dominating usage but early signs of adoption emerging in healthcare, cybersecurity and finance.

Most actions remain low-risk and reversible, supported by safeguards such as restricted permissions or human involvement instead of fully automated execution. Only a tiny fraction of actions reveal irreversible consequences such as sending messages to external recipients.

Anthropic notes that real-world autonomy remains far below the potential suggested by external capability evaluations, including those by METR.

The company argues that safer deployment will depend on stronger post-deployment monitoring systems and better design for human-AI cooperation so that autonomy is managed jointly rather than granted blindly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic seeks deeper AI cooperation with India

The chief executive of Anthropic, Dario Amodei, has said India can play a central role in guiding global responses to the security and economic risks linked to AI.

Speaking at the India AI Impact Summit in New Delhi, he argued that the world’s largest democracy is well placed to become a partner and leader in shaping the responsible development of advanced systems.

Amodei explained that Anthropic hopes to work with India on the testing and evaluation of models for safety and security. He stressed growing concern over autonomous behaviours that may emerge in advanced systems and noted the possibility of misuse by individuals or governments.

He pointed to the work of international and national AI safety institutes as a foundation for joint efforts and added that the economic effect of AI will be significant and that India and the wider Global South could benefit if policymakers prepare early.

Through its Economic Futures programme and Economic Index, Anthropic studies how AI reshapes jobs and labour markets.

He said the company intends to expand information sharing with Indian authorities and bring economists, labour groups, and officials into regular discussions to guide evidence-based policy instead of relying on assumptions.

Amodei said AI is set to increase economic output and that India is positioned to influence emerging global frameworks. He signalled a strong interest in long-term cooperation that supports safety, security, and sustainable growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU turns to AI tools to strengthen defences against disinformation

Institutions, researchers, and media organisations in the EU are intensifying efforts to use AI to counter disinformation, even as concerns grow about the wider impact on media freedom and public trust.

Confidence in journalism has fallen sharply across the EU, a trend made more severe by the rapid deployment of AI systems that reshape how information circulates online.

Brussels is attempting to respond with a mix of regulation and strategic investment. The EU’s AI Act is entering its implementation phase, supported by the AI Continent Action Plan and the Apply AI Strategy, both introduced in 2025 to improve competitiveness while protecting rights.

Yet manipulation campaigns continue to spread false narratives across platforms in multiple languages, placing pressure on journalists, fact-checkers and regulators to act with greater speed and precision.

Within such an environment, AI4TRUST has emerged as a prominent Horizon Europe initiative. The consortium is developing an integrated platform that detects disinformation signals, verifies content, and maps information flows for professionals who need real-time insight.

Partners stress the need for tools that strengthen human judgment instead of replacing it, particularly as synthetic media accelerates and shared realities become more fragile.

Experts speaking in Brussels warned that traditional fact-checking cannot absorb the scale of modern manipulation. They highlighted the geopolitical risks created by automated messaging and deepfakes, and argued for transparent, accountable systems tailored to user needs.

European officials emphasised that multiple tools will be required, supported by collaboration across institutions and sustained regulatory frameworks that defend democratic resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital procurement strengthens compliance and prepares governments for AI oversight

AI is reshaping the expectations placed on organisations, yet many local governments in the US continue to rely on procurement systems designed for a paper-first era.

Sealed envelopes, manual logging and physical storage remain standard practice, even though these steps slow essential services and increase operational pressure on staff and vendors.

The persistence of paper is linked to long-standing compliance requirements, which are vital for public accountability. Over time, however, processes intended to safeguard fairness have created significant inefficiencies.

Smaller businesses frequently struggle with printing, delivery, and rigid submission windows, and the administrative burden on procurement teams expands as records accumulate.

The author’s experience leading a modernisation effort in Somerville, Massachusetts, showed how deeply embedded such practices had become.

Gradual adoption of digital submission reduced logistical barriers while strengthening compliance. Electronic bids could be time-stamped, access monitored, and records centrally managed, allowing staff to focus on evaluation rather than handling binders and storage boxes.
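One consequence of server-side time-stamping is that a compliance check that once required a clerk and a logbook becomes a mechanical comparison. As a minimal illustration (the vendors, timestamps and deadline below are invented), flagging late submissions reduces to this:

```python
from datetime import datetime, timezone

# Hypothetical bid records: (vendor, server-recorded submission time).
bids = [
    ("Acme Paving",  datetime(2024, 3, 1, 13, 59, tzinfo=timezone.utc)),
    ("Roadworks Co", datetime(2024, 3, 1, 14, 2,  tzinfo=timezone.utc)),
]
deadline = datetime(2024, 3, 1, 14, 0, tzinfo=timezone.utc)

# Any bid stamped after the deadline is flagged automatically,
# with the timestamp itself serving as the audit record.
late = [vendor for vendor, ts in bids if ts > deadline]
print(late)
```

Because the timestamp is recorded by the system rather than asserted by the bidder, the same record supports both fairness to vendors and the later anomaly analysis the author describes.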

Vendor participation increased once geographical and physical constraints were removed. The shift also improved resilience, as municipalities that had already embraced digital procurement were better equipped to maintain continuity during pandemic disruptions.

Electronic records now provide a basis for responsible use of AI. Digital documents can be analysed for anomalies, metadata inconsistencies, or signs of manipulation that are difficult to detect in paper files.

Rather than replacing human judgment, such tools support stronger oversight and more transparent public administration. Modernising procurement aligns government operations with present-day realities and prepares them for future accountability and technological change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India unveils MANAV Vision as new global pathway for ethical AI

Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-centred direction for AI.

He described the framework as rooted in moral guidance, transparent oversight, national control of data, inclusive access and lawful verification. He argued that the approach is intended to guide global AI governance for the benefit of humanity.

The Prime Minister of India warned that rapid technological change requires stronger safeguards and drew attention to the need to protect children. He also said societies are entering a period where people and intelligent systems co-create and evolve together instead of functioning in separate spheres.

He pointed to India’s confidence in its talent and policy clarity as evidence of a growing AI future.

Modi announced that three domestic companies introduced new AI models and applications during the summit, saying the launches reflect the energy and capability of India’s young innovators.

He invited technology leaders from around the world to collaborate by designing and developing in India instead of limiting innovation to established hubs elsewhere.

The summit brought together policymakers, academics, technologists and civil society representatives to encourage cooperation on the societal impact of artificial intelligence.

As the first global AI summit held in the Global South, the gathering aligned with India’s national commitment to welfare for all and the wider aspiration to advance AI for humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!