More users are exploring how to switch from ChatGPT to Claude while preserving their existing chat history and preferences. Rather than starting over with a new AI assistant, many want to migrate context and maintain continuity.
The first step is gathering your data from ChatGPT. In Settings, open Personalisation, then review the Memory section to copy any stored preferences you want to retain. You can also export your full chat history through Data Controls by selecting ‘Export Data’.
ChatGPT will generate downloadable files containing your conversations. If you prefer a lighter approach, manually copy key discussions or ask ChatGPT to summarise your main preferences, frequently discussed topics, and custom instructions.
Once your information is ready, open Claude and enable Memory under Settings and Capabilities. Start a new conversation and paste your summaries using a prompt such as ‘Here is important context about me. Please update your memory accordingly.’
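If you opt for the full data export, the archive can also be mined programmatically to build a pasteable summary. Below is a minimal sketch that lists conversation titles from the export; the filename `conversations.json` and the `title` field are assumptions about the export's layout, so adjust to whatever your downloaded archive actually contains.

```python
import json

def list_conversation_titles(path="conversations.json"):
    """Read a ChatGPT data export and return conversation titles.

    Assumes the export is a JSON array of conversation objects,
    each carrying a 'title' field -- check your actual archive,
    as the format may differ.
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    return [c.get("title", "(untitled)") for c in conversations]
```

A list like this makes it easy to spot which discussions are worth summarising and carrying over to Claude.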
After transferring the data, verify that Claude has stored the information accurately. If you plan to leave ChatGPT entirely, review and delete saved memory entries before removing your account to ensure your data is cleared.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.
The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.
The attack exploited OpenClaw’s local WebSocket gateway, which assumed that traffic from localhost could be trusted. A malicious website could open a WebSocket connection to the gateway and, because no rate limiting was applied to local connections, brute-force the password at hundreds of guesses per second before silently registering as a trusted device without any user prompt.
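The enabling weakness was the lack of rate limiting on local authentication attempts. A minimal sketch of the kind of per-client throttle a gateway could apply, even to localhost, is below; the class name and limits are illustrative, not OpenClaw's actual patch.

```python
import time
from collections import defaultdict

class AuthRateLimiter:
    """Sliding-window limiter: allow a handful of password attempts
    per client, then refuse further tries until the window expires --
    applied to local connections as well as remote ones."""

    def __init__(self, max_attempts=5, window_seconds=60.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(list)  # client_id -> attempt timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        # Keep only attempts still inside the window.
        recent = [t for t in self.attempts[client_id] if now - t < self.window]
        self.attempts[client_id] = recent
        if len(recent) >= self.max_attempts:
            return False  # over budget: reject before checking the password
        recent.append(now)
        return True
```

With a budget of a few attempts per minute, a brute-force run drops from hundreds of guesses per second to a handful, which is the point of treating localhost traffic as untrusted.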
Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.
Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.
Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.
Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.
Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.
According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.
Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.
More than 40 million people use ChatGPT alone for health information every day, and both ChatGPT and Claude have recently launched services specifically designed to give consumers health advice.
Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.
Gupta notes that AI chatbots are deliberately designed to feel personal, trained to use pronouns like ‘you’ and ‘I’, which makes users more likely to treat them as authoritative voices rather than information tools.
She cautions against the ‘three C’s’: chatbots that are too competent, too cogent, or too concrete, as these are the most likely to lead patients into harmful health decisions.
Human clinicians, Gupta argues, remain difficult to replace not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.
Qualcomm expects robotics to become a significant business opportunity within two years, according to chief executive Cristiano Amon. The company is increasingly expanding beyond smartphones as it searches for new long-term growth markets.
Earlier this year, Qualcomm introduced its Dragonwing processor designed specifically for robotics applications. The chipset aims to operate across multiple robotic platforms using a scalable approach similar to its successful mobile processor strategy.
Industry enthusiasm for robotics has grown alongside rapid advances in AI technologies. Often described as ‘physical AI’, these systems allow robots to interpret surroundings and perform complex tasks more effectively.
Market forecasts suggest strong future demand, with analysts predicting robotics could develop into a multi-trillion-dollar global industry. Technology leaders across the semiconductor sector increasingly view intelligent machines as a major next computing platform.
Robotics innovation featured prominently at Mobile World Congress in Barcelona, where companies showcased emerging autonomous machines. Growing investment highlights intensifying competition to shape the future of AI-powered automation worldwide.
A data breach at British game studio Cloud Imperium has angered players worldwide after the company quietly announced the incident. Users criticised the slow disclosure and the minimal information provided about what was accessed.
The breach, which occurred on 21 January, exposed names, contact details and dates of birth from backup systems. Cloud Imperium insists no passwords, financial information or game data were compromised.
Players have expressed frustration over the company’s reassurances, arguing that even basic personal details could be used in phishing campaigns. Forums and social media quickly filled with criticism, describing the announcement as buried and inadequate.
Cloud Imperium said it acted quickly to contain the breach, refresh security settings, and monitor systems for further incidents. The studio maintains that the issue should not affect gameplay or user safety, but some users remain sceptical.
The company’s flagship game, Star Citizen, is crowdfunded and boasts millions of players. However, the studio has not disclosed the total number of accounts affected, leaving the community uneasy about the transparency of the response.
Ocado has announced plans to cut 1,000 jobs from its 20,000-strong global workforce, with the cuts falling mainly on technology and support roles. The company, headquartered in Hatfield, Hertfordshire, said the move would save £150m and follows major investment in robotics and automation.
Chief executive Tim Steiner said Ocado had completed a significant phase of investment in automation, but the company declined to confirm that AI directly led to the redundancies. At its Luton warehouse, opened in 2023, human staff continue to work alongside AI-powered robots.
Analysts suggested that competition has intensified as retailers in the UK, the US and Canada adopt similar AI-driven systems. Some former clients in the US and Canada have invested in their own technology, reducing reliance on Ocado’s platform.
Retail experts argued that deeper structural challenges, including changing consumer expectations and cost pressures in Hertfordshire and beyond, are also at play. Local leaders in Welwyn Hatfield have requested urgent talks as the company reshapes its operating model.
Samsung has secured an agreement with Rakuten Mobile to deliver Open RAN-compliant 5G radios supporting a nationwide mobile network upgrade across Japan. Commercial deployment is expected to begin in 2026 following extensive testing of the cloud-native infrastructure.
Rakuten Mobile continues to expand its fully virtualised network architecture, designed to improve flexibility, performance, and vendor interoperability. The integration of Samsung equipment demonstrates growing industry confidence in Open RAN technology for large-scale commercial deployments.
Equipment supplied includes low-band and mid-band radios, alongside energy-efficient Massive MIMO systems operating in the 3.8 GHz spectrum. Compact hardware enables easier installation on buildings and street infrastructure while improving capacity in dense urban areas.
Executives from both companies highlighted ambitions to accelerate AI-enabled networks and global Open RAN adoption. Samsung also positioned the partnership as a step toward future 6G innovation and broader next-generation connectivity services.
A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.
Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.
In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.
Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.
Google has outlined a plan to strengthen Chrome’s HTTPS security against future quantum-computing threats. Rather than expanding traditional X.509 certificate chains in Chrome with post-quantum cryptography, the company is developing a new model based on Merkle Tree Certificates (MTCs).
The proposal from the PLANTS working group seeks to modernise the web public key infrastructure. Under the MTC model, a Certification Authority signs a single ‘Tree Head’ covering many certificates. Browsers receive a lightweight proof instead of a full certificate chain.
Google said this structure reduces authentication data exchanged during TLS handshakes while supporting post-quantum algorithms. By decoupling cryptographic strength from certificate size, the approach seeks to preserve performance as stronger security standards are adopted.
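The size saving comes from Merkle inclusion proofs: to prove one certificate is covered by a signed Tree Head, a server sends only the sibling hashes on the path to the root, roughly log(n) hashes for n certificates, rather than a full chain. Below is a toy illustration of that mechanism in Python; it is not the MTC wire format, just the underlying Merkle-tree idea.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build Merkle levels bottom-up: levels[0] holds the hashed
    leaves, levels[-1] holds the single root (the 'Tree Head')."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf to root -- the lightweight proof a
    browser would check instead of a full certificate chain."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from the leaf and the proof path."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root
```

A CA would sign only the root; each TLS handshake then carries a short proof, which is how the scheme decouples post-quantum signature sizes from per-certificate overhead.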
The company is already testing MTCs with real internet traffic. Phase one involves feasibility studies with Cloudflare, while phase two, in early 2027, will invite selected Certificate Transparency log operators to support initial public deployment.
By the third quarter of 2027, Google plans to establish requirements for onboarding certificate authorities to the quantum-resistant Chrome Root Store, which exclusively supports MTCs. The company described the initiative as foundational to maintaining long-term web security resilience.