Anthropic has enhanced its Claude AI chatbot to make switching from other platforms easier. Users on the free plan can now activate Claude’s memory feature, which allows them to import data from other AI platforms using a new dedicated tool.
The update ensures that users don’t have to start over when transferring context and history from competitors like OpenAI’s ChatGPT or Google’s Gemini.
The memory import option, first introduced in October for paid subscribers, now appears under ‘Settings’ → ‘Capabilities’ for all users. The tool lets users copy a prompt into their previous AI assistant and paste its output into Claude, seamlessly transferring past interactions.
The recent popularity of Claude has been driven by tools such as Claude Code and Claude Cowork, as well as the launch of the Opus 4.6 and Sonnet 4.6 models. These upgrades enhance Claude’s coding, spreadsheet, and complex-task capabilities, boosting its appeal to new users.
Anthropic’s visibility has also increased amid debates with the Pentagon, as the company refuses to loosen AI safeguards for military use, drawing ‘red lines’ around mass surveillance and autonomous weapons.
When Hayao Miyazaki dismissed early AI-generated animation as ‘an insult to life itself’ in 2016, the technology felt distant from mainstream creative work. Less than a decade later, generative AI tools produce images and text in seconds, reviving debate over authorship, copyright, and artistic identity.
In Japan, the debate reflects both anxiety and ambition. Illustrators question the use of their work in training data, while policymakers and corporations see AI as vital to easing a projected labour shortfall by 2040. Legal provisions that allow copyrighted works to be used for data analysis have intensified calls for safeguards.
Public sentiment in Japan remains broadly favourable toward AI adoption. Surveys indicate relatively high levels of trust, with many viewing AI as part of long-term structural adjustment rather than an immediate threat. Economic expectations often outweigh concerns about disruption.
Workplace implementation, however, remains limited. OECD research shows only a small share of employees actively use AI tools, citing skills shortages and cautious corporate culture. Analysts describe a paradox: AI could ease labour pressures, yet adoption is constrained by limited expertise.
Creative professionals report more immediate effects. Surveys highlight income pressures and uncertainty among illustrators and freelancers. As deployment expands, Japan faces the task of balancing economic necessity with cultural preservation and fair access to emerging technologies.
Twenty-five years after its launch, SharePoint has grown into one of Microsoft’s largest collaboration platforms, serving more than one billion users annually. The service now underpins vast volumes of enterprise content, with billions of files and millions of sites created each day.
Microsoft positions the platform as a foundational knowledge layer for Microsoft 365 Copilot. As the primary grounding source for Copilot, it contributes to the Work IQ intelligence layer, enabling AI tools to operate within an organisational context.
New agentic capabilities allow teams to build solutions using natural language prompts within governed Microsoft 365 environments. Custom AI skills package organisational standards, terminology, and business logic, helping ensure outputs align with internal policies and workflows.
AI-driven publishing features are now embedded across SharePoint’s web authoring tools. Organisations can plan, refine, and distribute content at scale while maintaining governance controls and consistent communication standards.
Content stored in SharePoint also powers semantic indexing and retrieval systems that support contextual discovery across Microsoft 365 applications. Microsoft says these capabilities enable more proactive knowledge surfacing and strengthen Copilot’s ability to deliver grounded responses.
Thailand has published a draft public guidance document to help citizens use AI safely and responsibly. The ‘AI Guide for Citizens’ outlines key AI concepts, benefits, limitations, and practical examples for users engaging with generative AI tools.
Data safety is a central focus, with officials warning against entering personal identifiers, financial data, confidential information, or government secrets into public AI platforms.
The guide also details technical risks such as AI ‘hallucinations’, prompt injection, and data poisoning, advising users to verify outputs and treat AI as a support tool rather than a decision-maker.
The guidance addresses ethical and legal responsibilities, warning against using AI to generate misinformation, deepfakes, or harmful content. It emphasises fairness and bias, noting AI systems can inherit human prejudices from training data.
Citizens encountering AI-related scams or harmful content are advised to collect evidence, report incidents to cybercrime authorities, and contact Thailand’s personal data protection agency if privacy is compromised.
The draft aligns Thailand’s AI policies with national rules and international standards, including ISO governance principles and the EU AI Act. The initiative aims to boost AI literacy and safeguards as AI becomes more integrated into daily life.
Hundreds of academics have urged governments to halt plans for mandatory age checks on social media rather than accelerate deployment without assessing the risks.
Researchers argue that current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules instead of offering meaningful protection.
They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.
The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.
Academics believe such infrastructure would be complex to build globally and would create so much friction that many providers may refuse to adopt it.
Concern escalated after early deployments in Italy and France, where verification is already mandatory.
Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.
The social media platform X has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.
The update improves transparency for followers while meeting rules set by the US Federal Trade Commission, which expects sponsored material to be disclosed clearly.
Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.
X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.
X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.
The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.
The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.
The change seeks to limit LLM-generated spam, preventing automated responses from distorting discussions or appearing as fake engagement beneath sponsored content.
X hopes these combined measures will enhance authenticity around commercial posts.
Anthropic’s AI chatbot, Claude, experienced a global outage, leaving users unable to access the platform. Visitors reported error messages indicating the system had broken down, while the company said it was working to resolve the issue.
The Claude API, used by other websites to integrate the chatbot, remained operational. Anthropic confirmed that the outage was limited to the Claude web interface and did not affect other integrations, emphasising that engineers were actively resolving the issue.
The outage, tracked by Down Detector, began around noon in the UK and affected users worldwide. Messages on the platform reassured users that Claude would return soon and that the problem had been identified and was being fixed.
The interruption comes at a sensitive time for Anthropic, as the company navigates heightened attention surrounding access to its Claude AI system. The situation unfolds amid broader discussions about the role of advanced AI tools in defence contexts, with industry players facing increasing scrutiny over their policies and partnerships.
Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.
Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.
The reports indicate that personal data from EU users was sent to Sama, a third-party contractor in Kenya, for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.
They added that Meta’s attempts to blur faces or apply other safeguards failed often enough to expose identifiable material instead of ensuring proper anonymisation.
EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.
Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.
The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.
Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.
Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.
Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.
Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.
The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.
Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.
Breakthroughs in AI and neuroscience are bringing researchers closer to translating human thoughts into words, offering new communication tools for people living with paralysis or severe speech disorders. Experiments with implanted brain electrodes have enabled patients to produce sentences simply by imagining speech.
Machine learning systems analyse neural signals captured from small electrode arrays placed in speech-related brain regions, converting activity into text at increasing speed and accuracy. Recent trials achieved communication rates approaching practical conversation while also capturing tone, rhythm and emotional expression.
Scientists have begun detecting ‘inner speech’, identifying silent counting or imagined phrases without physical attempts to speak. Findings suggest thinking and speaking rely on overlapping neural networks, although spontaneous thoughts remain difficult to decode reliably.
Beyond language, researchers are reconstructing images, music and sensory experiences from brain scans using generative AI models. Studies analysing visual and auditory processing reveal how different brain regions encode perception, opening possibilities for studying hallucinations, dreams and animal cognition.
Technology companies, including Neuralink, are pushing brain-computer interfaces toward commercial use, though current systems sample only a tiny fraction of the brain’s billions of neurons. Experts believe widespread applications such as natural speech restoration or even brain-to-brain communication may emerge within the next two decades, alongside growing ethical debates around privacy and mental autonomy.