TikTok faces high-stakes legal challenge over child safety concerns

British parents suing TikTok over the deaths of their children have called for greater accountability from the platform, as the case begins hearings in the United States. One of the claimants said social media companies must be held accountable for the content shown to young users.

Ellen Roome, whose son died in 2022, said the lawsuit is about understanding what children were exposed to online.

The legal filing claims the deaths were a foreseeable result of TikTok’s design choices, which allegedly prioritised engagement over safety. TikTok has said it prohibits content that encourages dangerous behaviour.

Roome is also campaigning for proposed legislation that would allow parents to access their children’s social media accounts after a death. She said the aim is to gain clarity and prevent similar tragedies.

TikTok said it removes most harmful content before it is reported and expressed sympathy for the families. The company is seeking to dismiss the case, arguing that the US court lacks jurisdiction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Indian companies remain committed to AI spending

Almost all Indian companies plan to sustain AI spending even without near-term financial returns. A BCG survey shows 97 percent will keep investing, higher than the 94 percent global rate.

Corporate AI budgets in India are expected to rise to about 1.7 percent of revenue in 2026. Leaders see AI as a long-term strategic priority rather than a short-term cost.

Around 88 percent of Indian executives express confidence in AI generating positive business outcomes. That is above the global average of 82 percent, reflecting strong optimism among local decision-makers.

Despite enthusiasm, fewer Indian CEOs personally lead AI strategy than their global peers, and workforce AI skills lag international benchmarks. Analysts say talent and leadership alignment remain key as spending grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smarter interconnects become essential for AI processors

AI workloads are placing unprecedented strain on system-on-chip (SoC) interconnects. Designers face complexity that exceeds the limits of traditional manual engineering approaches.

Semiconductor engineers are increasingly turning to automated network-on-chip (NoC) design. Algorithms now generate interconnect topologies optimised for bandwidth, latency, power and area.

Physically aware automation reduces wire lengths, congestion and timing failures. Industry specialists report dramatically shorter design cycles and more predictable performance outcomes.

As AI spreads from data centres to edge devices, interconnect automation is becoming essential. The shift enables smaller teams to deliver powerful, energy efficient processors.
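As a rough illustration of the idea, automated interconnect design can be framed as a search over candidate topologies against a weighted cost combining bandwidth, latency, power and area. The sketch below is a toy model only: the topology names, metric values and weights are hypothetical placeholders, not any vendor's actual tooling or figures.

```python
# Toy sketch of cost-driven topology selection, in the spirit of
# automated NoC design. All numbers here are hypothetical.

# Candidate topologies with toy normalised metrics (lower is better,
# except bandwidth, where higher is better).
CANDIDATES = {
    "ring": {"bandwidth": 0.4, "latency": 0.8, "power": 0.3, "area": 0.3},
    "mesh": {"bandwidth": 0.9, "latency": 0.4, "power": 0.7, "area": 0.8},
    "tree": {"bandwidth": 0.6, "latency": 0.5, "power": 0.5, "area": 0.5},
}

def cost(metrics, weights):
    """Weighted cost: penalise latency, power and area; reward bandwidth."""
    return (weights["latency"] * metrics["latency"]
            + weights["power"] * metrics["power"]
            + weights["area"] * metrics["area"]
            - weights["bandwidth"] * metrics["bandwidth"])

def pick_topology(weights):
    """Return the candidate topology with the lowest weighted cost."""
    return min(CANDIDATES, key=lambda name: cost(CANDIDATES[name], weights))

# An edge-device profile that weights power and area heavily.
edge_weights = {"bandwidth": 0.5, "latency": 1.0, "power": 2.0, "area": 2.0}
print(pick_topology(edge_weights))  # prints "ring" for this toy profile
```

Real tools explore far larger design spaces and fold in physical placement, but the same principle applies: shifting the weights (say, towards bandwidth for a data-centre part) changes which topology wins, which is why automation scales better than manual engineering.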

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Technology is reshaping smoke alarm safety

Smoke alarms remain critical in preventing fatal house fires, according to fire safety officials. Real-life incidents show how early warnings can allow families to escape rapidly spreading blazes.

Modern fire risks are evolving, with lithium-ion batteries and e-bikes creating fast and unpredictable fires. These incidents can release toxic gases and escalate before flames are clearly visible.

Traditional smoke alarm technology continues to perform reliably despite changes in household risks. At the same time, intelligent and AI-based systems are being developed to detect danger sooner.

Reducing false alarms has become a priority, as nuisance alerts often lead people to turn off devices. Fire experts stress that a maintained, certified smoke alarm is far safer than no smoke alarm at all.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI hoax targets Kate Garraway and family

Presenter Kate Garraway has condemned a cruel AI-generated hoax that falsely showed her with a new boyfriend. The images appeared online shortly after the death of her husband, Derek Draper.

Fake images circulated mainly on Facebook through impersonation accounts using her name and likeness. Members of the public and even friends mistakenly believed the relationship was real.

The situation escalated when fabricated news sites began publishing false stories involving her teenage son Billy. Garraway described the experience as deeply hurtful during an already raw period.

Her comments followed renewed scrutiny of AI image tools and platform responsibility. Recent restrictions aim to limit harmful and misleading content generated using artificial intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft obtains UK and US court orders to disable cybercrime infrastructure

Microsoft has obtained court orders in the United Kingdom and the United States to disrupt the cybercrime-as-a-service platform RedVDS, marking the first time its Digital Crimes Unit (DCU) has pursued a major civil action outside the US.

According to Microsoft, the legal action targeted infrastructure supporting RedVDS, a service that provided virtualised computing resources used in fraud and other cyber-enabled criminal activity. The company sought relief in the UK courts because elements of the platform’s infrastructure were hosted by a UK-based provider, and a significant number of affected victims were located in the UK.

It is reported that the action was conducted with support from Europol’s European Cybercrime Centre (EC3), as well as German authorities, including the Central Office for Combating Internet Crime (ZIT) at the Frankfurt am Main Public Prosecutor’s Office and the Criminal Police Office of the state of Brandenburg.

RedVDS operated on a subscription basis, with access reportedly available for approximately $24 per month. The service provided customers with short-lived virtual machines, which could be used to support activities such as phishing campaigns, hosting malicious infrastructure, and facilitating online fraud.

Microsoft states that RedVDS infrastructure has been used in a range of cyber-enabled criminal activities since September 2025, including business email compromise (BEC). In BEC cases, attackers impersonate trusted individuals or organisations to induce victims to transfer funds to accounts under the attackers’ control.

According to Microsoft’s assessment, users of the service targeted organisations across multiple sectors and regions. The real estate sector was among those affected, with estate agents, escrow agents, and title companies reportedly targeted in Australia and Canada. Microsoft estimates that several thousand organisations in that sector experienced some level of impact.

The company also noted that RedVDS users combined the service with other tools, including generative AI technologies, to scale operations, identify potential targets, and generate fraudulent content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea establishes legal framework for tokenised securities

South Korea has approved legislation establishing a legal framework for issuing and trading tokenised securities. Amendments recognise blockchain-based securities as legitimate, with rules taking effect in January 2027.

Eligible issuers can create tokenised debt and equity products using blockchain infrastructure, while brokerages and licensed intermediaries will facilitate trading.

Regulators aim to combine the efficiency of distributed ledgers with investor protections and expand the use of smart contracts, enabling previously restricted investments in real estate, art, or agriculture to reach a broader audience.
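To make the smart-contract idea concrete, the toy sketch below models a tokenised security whose transfers must be routed through a licensed intermediary, mirroring the brokerage-mediated trading the framework envisages. The class, names and rules here are hypothetical illustrations, not the actual provisions of the South Korean legislation.

```python
# Illustrative sketch only: a toy token ledger enforcing the kind of
# transfer rule a tokenised-securities regime might require
# (trades routed through licensed intermediaries).

class TokenisedSecurity:
    def __init__(self, issuer, total_units):
        # The issuer starts holding the full issuance.
        self.balances = {issuer: total_units}
        self.licensed_intermediaries = set()

    def license(self, intermediary):
        """Register an intermediary permitted to facilitate trades."""
        self.licensed_intermediaries.add(intermediary)

    def transfer(self, sender, receiver, units, via):
        """Move units between holders, but only via a licensed intermediary."""
        if via not in self.licensed_intermediaries:
            raise PermissionError("trade must go through a licensed intermediary")
        if self.balances.get(sender, 0) < units:
            raise ValueError("insufficient balance")
        self.balances[sender] -= units
        self.balances[receiver] = self.balances.get(receiver, 0) + units

bond = TokenisedSecurity("issuer", 1_000)
bond.license("brokerA")
bond.transfer("issuer", "investor1", 100, via="brokerA")
print(bond.balances["investor1"])  # 100
```

In a production system this logic would live on the distributed ledger itself, so that investor-protection rules are enforced automatically at settlement rather than by after-the-fact supervision.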

Implementation will be led by the Financial Services Commission, in collaboration with the Financial Supervisory Service, the Korea Securities Depository, and industry participants.

Consultation bodies will develop infrastructure such as ledger-based account management systems, while local firms, including Mirae Asset Securities and Hana Financial Group, are preparing platforms for the new rules.

Analysts project tokenised assets could reach $2 trillion globally by 2028, with South Korea’s market at $249 billion.

The legislation also complements South Korea’s efforts to regulate blockchain and curb cryptocurrency-related financial crime.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on infrastructure designed for business messaging, not for serving as an open distribution platform for AI services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Switzerland can shape AI in 2026

Switzerland is heading into 2026 facing an AI transition marked by uncertainty, and it may not win a raw ‘compute race’ dominated by the biggest hardware buyers. In his blog ‘10 Swiss values and practices for AI & digitalisation in 2026,’ Jovan Kurbalija argues that Switzerland’s best response is to build resilience around an ‘AI Trinity’ of Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, using long-standing Swiss practices as a practical compass rather than a slogan.

A central idea is subsidiarity. When top-down approaches hit limits, Switzerland can push ‘bottom-up AI’ grounded in local knowledge and real community needs. Kurbalija points to practical steps such as turning libraries, post offices, and community centres into AI knowledge hubs, creating apprenticeship-style AI programmes, and small grants that help communities develop local AI tools. He also cites a proposal for a ‘Geneva stack’ of sovereign digital tools adopted across public institutions, alongside the notion of a decentralised ‘cyber militia’ capacity for defence.

The blog also leans heavily on entrepreneurship and innovation, especially Switzerland’s SME culture and Zurich’s tech ecosystem. The message for 2026 is to strengthen partnerships between Swiss startups and major global tech firms present in the region, while also connecting more actively with fast-growing digital economy actors from places like India and Singapore.

Instead of chasing moonshots alone, Kurbalija says Switzerland can double down on ‘precision AI’ in areas such as medtech, fintech, and cleantech, and expand its move toward open-source AI tools across the full lifecycle, from models to localised agents.

Another theme is trust and quality, and the challenge of translating Switzerland’s high-trust reputation into the AI era. Beyond cybersecurity, the question is whether Switzerland can help define ‘trustworthy AI,’ potentially even as an international verifier certifying systems.

At the same time, Kurbalija frames quality as a Swiss competitive edge in a world frustrated with low-grade ‘AI slop,’ arguing that better outcomes often depend less on new algorithms and more on well-curated knowledge and data.

He also flags neutrality and sovereignty as issues that will move from abstract debates to urgent policy questions, such as what neutrality means when cyber weapons and AI systems are involved, and how much control a country can realistically keep over data and infrastructure in an interdependent world. He notes that digital sovereignty is a key priority in Switzerland’s 2026 digital strategy, with a likely focus on mapping where critical digital assets are stored and on protecting sensitive domains, such as health, elections, and security, while running local systems when feasible.

Finally, the blog stresses solidarity and resilience as the social and infrastructural foundations of the transition. As AI-driven centralisation risks widening divides, Kurbalija calls for reskilling, support for regions and industries in transition, and digital tools that strengthen social safety nets rather than weaken them.

His bottom line is that Switzerland can’t, and shouldn’t, try to outspend others on hardware. Still, it can choose whether to ‘import the future as a dependency’ or build it as a durable capability, carefully and inclusively, on unmistakably Swiss strengths.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!