ByteDance cuts use of Claude after Anthropic blocks China access

The split marks a deepening tech war centring on AI services and control.

ByteDance has ended its use of Anthropic’s Claude AI model amid access disputes.

A tech clash between ByteDance and Anthropic is escalating over AI access and service restrictions. ByteDance has halted use of Anthropic’s Claude model on its infrastructure after the US firm imposed access limitations on Chinese users.

The suspension follows Anthropic’s move to restrict China-linked deployments and reflects broader geopolitical tensions in the AI sector. ByteDance reportedly said it would now rely on domestic alternatives, signalling a strategic pivot away from Western AI models.

Industry watchers view the dispute as a marker of how major tech firms are navigating export controls, national security concerns and questions of sovereignty in AI. Observers warn the rift may accelerate investment in home-grown AI ecosystems by Chinese companies.

While neither company has detailed the full operational impact, the episode highlights AI’s fraught position at the intersection of technology and geopolitics. Market reaction may hinge on whether other firms follow suit or whether partnerships are redefined around regional access.