Negative narratives follow XRP price rallies

Search behaviour around XRP increasingly reflects the psychological side of the crypto market. Negative narratives spread quickly online, shaping sentiment and fuelling volatility. Data shows that ‘XRP scam’ search spikes often appear during strong price rallies.

Crypto analyst Leonidas compared Google Trends data for ‘Ripple scam’ and ‘XRP scam’ with XRP’s price chart. The results show that surges in negative searches typically align with bullish moves and sometimes precede pullbacks, suggesting that perception pressure builds during peak momentum.
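
That kind of comparison can be sketched in a few lines of Python. The example below is only an illustration of the general approach, not the analyst’s actual workflow: it assumes the third-party pytrends package for Google Trends data and a hypothetical local file, xrp_prices.csv, containing daily XRP closing prices with ‘date’ and ‘close’ columns, then correlates weekly search interest in ‘XRP scam’ with weekly price returns.

```python
import pandas as pd
from pytrends.request import TrendReq

# Fetch weekly Google Trends interest for the negative query over five years.
pytrends = TrendReq(hl='en-US', tz=0)
pytrends.build_payload(['XRP scam'], timeframe='today 5-y')
trends = pytrends.interest_over_time().drop(columns='isPartial')

# Load daily XRP prices (hypothetical CSV with 'date' and 'close' columns)
# and resample to the same weekly frequency as the Trends series.
prices = pd.read_csv('xrp_prices.csv', parse_dates=['date'], index_col='date')
weekly_close = prices['close'].resample('W').last()

# Align the two series and measure whether search spikes track price rallies:
# correlate search interest with the weekly percentage change in price.
combined = pd.concat(
    [trends['XRP scam'], weekly_close.pct_change()], axis=1, join='inner'
).dropna()
combined.columns = ['search_interest', 'weekly_return']
print(combined.corr().loc['search_interest', 'weekly_return'])
```

A positive correlation during rally periods would be consistent with the pattern described above, although correlation alone does not show which way the influence runs.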

Rapid price growth tends to trigger retail curiosity and concern, particularly when sensational claims circulate widely. Search spikes often coincide with heightened mainstream and social media exposure, indicating that sentiment reacts to price action rather than to fundamentals.

Despite recurring allegations and past regulatory scrutiny, institutional partnerships and XRP Ledger adoption remain intact. Analysts stress that sentiment spikes rarely signal structural weakness, urging investors to prioritise utility and adoption metrics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

User activity stabilises as TikTok recovers from transition disruption

TikTok has largely recovered from a brief decline in daily active users following its US ownership change, when a group of American investors assumed control of domestic operations. Usage fell temporarily as uncertainty spread among users. Competing video apps saw short-term gains during the disruption.

Data from Similarweb shows TikTok’s US daily active users dropped to between 86 and 88 million after the transition, compared with a typical average of around 92 million. Activity has since rebounded to more than 90 million. Many users who experimented with alternatives have returned.

Platforms rivalling TikTok, including UpScrolled and Skylight Social, experienced rapid but limited growth. UpScrolled peaked at 138,500 daily users before falling back to roughly 68,000. Skylight Social reached 81,200 daily users, then declined to around 56,300.

User concerns were driven less by ownership itself and more by fears around platform changes. An updated privacy policy allowing precise GPS tracking triggered backlash, alongside confusion over language referencing sensitive personal data. Some interpreted the changes as increased surveillance.

A multi-day data centre outage disrupted search, likes, and in-app messaging, resulting in user frustration. Some users attributed the glitches to possible censorship or platform instability. Once services were restored, activity stabilised, and concerns eased.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US security process delays Nvidia chip sales

Nvidia’s plans to export its H200 AI chips to China remain pending nearly two months after US President Donald Trump approved the sales. A national security review is still underway before licences can be issued to Chinese customers.

Chinese companies have delayed new H200 orders while awaiting clarity on licence approvals and potential conditions, according to people familiar with the discussions. The uncertainty has slowed anticipated demand and affected production planning across Nvidia’s supply chain.

In January, the US Commerce Department eased H200 export restrictions to China but required licence applications to be reviewed by the departments of State, Defence, and Energy.

Commerce has completed its analysis, but inter-agency discussions continue, with the US State Department seeking additional safeguards.

The export framework, which also applies to AMD, introduces conditions related to shipment allocation, testing, and end-use reporting. Until the review process concludes, Nvidia and prospective Chinese buyers remain unable to proceed with confirmed transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Bitcoin drops to 2024 low as AI fears and geopolitics rattle markets

A cautious mood spread across global markets as US stocks declined and Bitcoin slid to its lowest level since late 2024. Technology and software shares led losses, pushing major indices to their weakest performance in two weeks.

Bitcoin fell sharply before stabilising, remaining well below its October peak despite continued pro-crypto messaging from Washington. Gold and silver moved higher during the session, reinforcing their appeal as defensive assets amid rising uncertainty.

Investor sentiment weakened after Anthropic unveiled new legal-focused features for its Claude chatbot, reviving fears of disruption across software and data-driven business models. Analysts at Morgan Stanley pointed to rotation within the technology sector, with investors reducing exposure to software stocks.

Geopolitical tensions intensified after reports of US military action involving Iran, pushing oil prices higher and increasing market volatility. The combination of AI uncertainty, geopolitical risk, and shifting safe-haven flows continues to weigh on equities and digital assets alike.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India pushes Meta to justify WhatsApp’s data-sharing

The Supreme Court of India has delivered a forceful warning to Meta after judges said the company could not ‘play with’ the right to privacy.

The court questioned how WhatsApp monetises personal data in a country where the app has become the de facto communications tool for hundreds of millions of people. Judges added that meaningful consent is difficult when users have little practical choice.

Meta was told not to share any user information while the appeal over WhatsApp’s 2021 privacy policy continues. Judges pressed the company to explain the value of behavioural data instead of relying solely on claims about encrypted messages.

Government lawyers argued that personal data was collected and commercially exploited in ways most users would struggle to understand.

The case stems from a major update to WhatsApp’s data-sharing rules that India’s competition regulator said abused the platform’s dominant position.

A significant penalty was issued before Meta and WhatsApp challenged the ruling at the Supreme Court. The court has now widened the proceedings by adding the IT ministry and has asked Meta to provide detailed answers before the next hearing on 9 February.

WhatsApp is also under heightened scrutiny worldwide as regulators examine how encrypted platforms analyse metadata and other signals.

In India, broader regulatory changes, such as new SIM-binding rules, could restrict how small businesses use the service rather than broadening its commercial reach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator opened a formal inquiry to assess whether X took adequate steps to limit the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement action typically takes several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act guidance delay raises compliance uncertainty

The European Commission has missed a key deadline to issue guidance on how companies should classify high-risk AI systems under the EU AI Act, fuelling uncertainty around the landmark law’s implementation.

Guidance on Article 6, which defines high-risk AI systems and the stricter compliance rules that apply to them, was due by early February. Officials have indicated that feedback is still being integrated, with a revised draft expected later this month and final adoption potentially slipping to spring.

The delay follows warnings that regulators and businesses are unprepared for the act’s most complex rules, due to apply from August. Brussels has suggested delaying high-risk obligations under its Digital Omnibus package, citing unfinished standards and the need for legal clarity.

Industry groups want enforcement delayed until guidance and standards are finalised, while some lawmakers warn repeated slippage could undermine confidence in the AI Act. Critics warn further changes could deepen uncertainty if proposed revisions fail or disrupt existing timelines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU back in October, a figure far above the 45 million-user threshold that triggers the DSA’s more onerous obligations rather than its lighter-touch oversight.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces, and major search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI legal tool rattles European data stocks

European data and legal software stocks fell sharply after US AI startup Anthropic launched a new tool for corporate legal teams. The company said the software can automate contract reviews, compliance workflows, and document triage, while clarifying that it does not offer legal advice.

Investors reacted swiftly, sending shares in Pearson, RELX, Sage, Wolters Kluwer, London Stock Exchange Group, and Experian sharply lower. Thomson Reuters also suffered a steep decline, reflecting concern that AI tools could erode demand for traditional data-driven services.

Market commentators warned that broader adoption of AI in professional services could compress margins or bypass established providers altogether. Morgan Stanley flagged intensifying competition, while AJ Bell pointed to rising investor anxiety across the sector.

The sell-off also revived debate over AI’s impact on employment, particularly in legal and other office-based roles. Recent studies suggest the UK may face greater disruption than other large economies as companies adopt AI tools, even as productivity continues to improve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Innovation and security shape the UAE’s tech strategy

The United Arab Emirates is strengthening its global tech role by treating advanced innovation as a pillar of sovereignty rather than a standalone growth driver. National strategy increasingly links technology with long-term economic resilience, security, and geopolitical relevance.

A key milestone was the launch of the UAE Advanced Technology Centre with the Technology Innovation Institute and the World Economic Forum, announced alongside the Davos gathering.

The initiative highlights the UAE’s transition from technology consumer to active participant in shaping global governance frameworks for emerging technologies.

The centre focuses on policy and governance for areas including artificial intelligence, quantum computing, biotechnology, robotics, and space-based payment systems.

Backed by a flexible regulatory environment, the UAE is promoting regulatory experimentation and translating research into real-world applications through institutions such as the Mohamed bin Zayed University of Artificial Intelligence and innovation hubs like Masdar City.

Alongside innovation, authorities are addressing rising digital risks, particularly deepfake technologies that threaten financial systems, public trust, and national security.

By combining governance, ethical standards, and international cooperation, the UAE is advancing a model of digital sovereignty that prioritises security, shared benefits, and long-term strategic independence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!