The decision follows an internal review of over 700,000 Claude interactions, where researchers identified thousands of values shaping how the system responds in real-world scenarios.
By enabling Claude to exit problematic exchanges, Anthropic hopes to improve trustworthiness while protecting its models from situations that might degrade performance over time.
Industry reaction has been mixed: many researchers praised the step as a blueprint for responsible AI design, while others worried that allowing models to self-terminate conversations could limit user engagement or introduce unintended biases.
Critics also warned that the concept of model welfare risks over-anthropomorphising AI, potentially shifting focus away from human safety.
The update arrives alongside other recent Anthropic innovations, including memory features that allow users to maintain conversation history. Together, these changes highlight the company’s balanced approach: enhancing usability where beneficial, while ensuring safeguards are in place when interactions become potentially harmful.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI pioneer Geoffrey Hinton has warned that AI could one day wipe out humanity if its growth is unchecked.
Speaking at the Ai4 conference in Las Vegas, the former Google executive estimated a 10 to 20 percent chance of such an outcome and criticised the approach taken by technology leaders.
He argued that efforts to keep humans ‘dominant’ over AI will fail once systems become more intelligent than their creators. According to Hinton, powerful AI will inevitably develop goals such as survival and control, making it increasingly difficult for people to restrain its influence.
In an interview with CNN, Hinton compared the potential future to a parent-child relationship, noting that AI systems may manipulate humans just as easily as an adult can bribe a child.
To prevent disaster, he suggested giving AI ‘maternal instincts’ so that the technology genuinely cares about human well-being.
Hinton, often called the ‘Godfather of AI’ for his pioneering work in neural networks, cautioned that without such safeguards, society risks creating beings that will ultimately outsmart and overpower us.
UK property agents are increasingly leveraging AI and automation to tackle a growing skills shortage in the sector, according to an analysis by PropTech provider Reapit.
Reapit’s Property Outlook Report 2025 shows that although agencies continue hiring, most face recruitment difficulties: more than half receive fewer than five qualified applicants per vacancy. Growth in payrolled employees is minimal, the slowest year-on-year rise since May 2021, reflecting wider labour market tightness.
In response, agencies are turning to time-saving technologies. A majority report that automation is more cost-effective than expanding headcount, with nearly 80 percent citing increased productivity from these tools.
This shift towards PropTech and AI reflects deeper structural pressures in the UK real estate sector: high employment costs, slower workforce growth, and increasing demands for efficiency are reshaping the role of technology in agency operations.
AI can assist with refining cover letters, improving structure, and articulating motivations. It can also support interview preparation through mock question practice and help candidates deepen their understanding of legal issues.
However, authenticity is paramount. Taylor Wessing encourages applicants to ensure their work reflects their voice. Using AI to complete online assessments is explicitly discouraged, as these are designed to evaluate natural ability and personal fit.
According to the firm, while AI can bolster readiness for training schemes, over-reliance or misuse may backfire. They advise transparency about any AI assistance and underscore the importance of integrity throughout the process.
Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.
Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.
The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.
Its design includes quantisation-aware training, allowing it to run in low-precision formats such as INT4, which reduces memory use and improves speed on mobile processors without requiring extensive computational power.
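Quantisation-aware training prepares a model to tolerate the rounding error that low-precision storage introduces. As a rough illustration of the storage side only (a hypothetical sketch, not Gemma’s actual scheme), symmetric INT4 quantisation maps each float weight to an integer in the range [-8, 7] plus one shared scale factor:

```python
# Illustrative symmetric INT4 quantisation sketch.
# NOT Gemma's actual implementation; shown only to explain the idea.

def quantize_int4(weights):
    """Map float weights to integer codes in [-8, 7] with a per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 7 if max_abs else 1.0  # 7 = largest positive INT4 value
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int4(codes, scale):
    """Recover approximate float weights from INT4 codes."""
    return [c * scale for c in codes]

weights = [0.42, -1.3, 0.07, 0.9]
codes, scale = quantize_int4(weights)
approx = dequantize_int4(codes, scale)
```

Each code fits in 4 bits, so two weights pack into a byte, roughly an 8x saving over 32-bit floats; the dequantised values differ from the originals by at most half the scale, which is the error the training process learns to absorb.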
Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.
Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.
Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.
Chinese physicist Pan Jianwei’s team created the world’s largest atom array, arranging over 2,000 rubidium atoms for quantum computing. The breakthrough at the University of Science and Technology of China could enable atom-based quantum computers to scale to tens of thousands of qubits.
Researchers used AI and optical tweezers to position all atoms simultaneously, completing the array in 60 milliseconds. The system achieved 99.97 percent accuracy for single-qubit operations and 99.5 percent for two-qubit operations, with 99.92 percent accuracy in qubit state detection.
Atom-based quantum computing is considered more promising than superconducting circuits or trapped ions for its stability and control. Until now, arrays had been limited to a few hundred atoms, as moving each atom into position individually was slow and challenging.
Future work aims to expand array sizes further using stronger lasers and faster light modulators. Researchers hope that perfectly arranging tens of thousands of atoms will lead to fully reliable and scalable quantum computers.
Chinese AI company DeepSeek has postponed the launch of its R2 model after repeated technical problems using Huawei’s Ascend processors for training. The delay highlights Beijing’s ongoing struggle to replace US-made chips with domestic alternatives.
Authorities had encouraged DeepSeek to shift from Nvidia hardware to Huawei’s chips after the release of its R1 model in January. However, training failures, slower inter-chip connections, stability issues, and weaker software performance led the start-up to revert to Nvidia chips for training, while continuing to explore Ascend for inference tasks.
Despite Huawei deploying engineers to assist on-site, DeepSeek was unable to complete a successful training run using Ascend processors. The company is also contending with extended data-labelling timelines for its updated model, adding to the delays.
The situation underscores how far Chinese chip technology lags behind Nvidia for advanced AI development, even as Beijing pressures domestic firms to use local products. Industry observers say Huawei is facing “growing pains” but could close the gap over time. Meanwhile, competitors like Alibaba’s Qwen3 have integrated elements of DeepSeek’s design more efficiently, intensifying market pressure.
A new study has revealed that managers who use AI to write emails are often viewed as less sincere by their staff. Acceptance improved for emails focused on factual information, where employees were more forgiving of AI involvement.
Researchers found employees were more critical of AI use by their supervisors than of their own use, even when the level of assistance was the same.
Only 40 percent of respondents rated managers as sincere when their emails involved high AI input, compared to 83 percent for lighter use.
Professionals did consider AI-assisted emails efficient and polished, but trust declined when messages were relationship-driven or motivational.
Researchers highlighted that managers’ heavier reliance on AI may undermine perceptions of trust, care, and authenticity.
India’s central bank has proposed a national framework to guide the ethical and responsible use of AI in the financial sector.
The committee, set up by the Reserve Bank of India in December 2024, has made 26 recommendations across six focus areas, including infrastructure, governance, and assurance.
It advised establishing a digital backbone to support homegrown AI models and forming a multi-stakeholder body to evaluate risks.
A dedicated fund to boost domestic AI development tailored for finance was also proposed, alongside audit guidelines and policy frameworks.
The committee recommended integrating AI into platforms such as UPI while preserving public trust and ensuring security.
Led by IIT Bombay’s Pushpak Bhattacharyya, the panel noted the need to balance innovation with risk mitigation in regulatory design.
Google has announced a $9 billion investment in Oklahoma over the next two years to expand cloud and AI infrastructure.
The funds will support a new data centre campus in Stillwater and an expansion of the existing facility in Pryor, and come alongside a broader $1 billion commitment to American education and competitiveness.
The announcement was made alongside Governor Kevin Stitt, Alphabet and Google executives, and community leaders.
Alongside the infrastructure projects, Google funds education and workforce initiatives with the University of Oklahoma and Oklahoma State University through the Google AI for Education Accelerator.
Students will gain no-cost access to Career Certificates and AI training courses, helping them acquire critical AI and job-ready skills beyond standard curricula.
Additional funding will support ALLIANCE’s electrical training to expand Oklahoma’s electrical workforce by 135%, creating the talent needed to power AI-driven energy infrastructure.
Google described the investment as part of an ‘extraordinary time for American innovation’ and a step towards maintaining US leadership in AI.
The move also addresses national security concerns, ensuring the country has the infrastructure and expertise to compete with domestic rivals like OpenAI and Anthropic, as well as international competitors such as China’s DeepSeek.