Small businesses battle rising cyber attacks in the US

Many small businesses in the US are facing a sharp rise in cyber attacks, yet large numbers still try to manage the risk on their own.

A recent survey by Guardz found that more than four in ten small and medium-sized businesses (SMBs) have already experienced a cyber incident, while most owners believe the overall threat level is continuing to increase.

Rather than relying on specialist teams, over half of small businesses still leave critical cybersecurity tasks to untrained staff or the owner. Only a minority have a formal incident response plan created with a cybersecurity professional, and more than a quarter do not carry cyber insurance.

Phishing, ransomware and simple employee mistakes remain the most common dangers, with negligence seen as the biggest internal risk.

Recovery times are improving, with most affected firms able to return to normal operations quickly and very few suffering lasting damage.

However, many still fail to conduct routine security assessments, and outdated technology remains a widespread concern. Some SMBs are increasing cybersecurity budgets, yet a significant share still spend very little or do not know how much is being invested.

More small firms are now turning to managed service providers instead of trying to cope alone.

The findings suggest that preparation, professional support and clearly defined response plans can greatly improve resilience, helping organisations reduce disruption and maintain business continuity when an attack occurs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deutsche Bank warns on scale of AI spending

Deutsche Bank has warned that surging AI investment is helping to prop up US economic growth. Analysts say that broader spending would have stalled without the heavy outlays on technology.

The bank estimates hyperscalers could spend $4 trillion on AI data centres by 2030. Analysts cautioned returns remain uncertain despite the scale of investment.

Official data showed US GDP grew at a 4.3% annualised rate in the third quarter. Economists linked much of the momentum to AI-driven capital expenditure.

Market experts remain divided on risks, although many reject fears of a bubble. Corporate cash flows, rather than excessive borrowing, are funding the majority of AI infrastructure.

ByteDance prepares major AI investment for 2026

ByteDance plans a major jump in AI spending next year as global chip access remains uncertain. The firm is preparing heavier investment in processors and infrastructure to support demanding models across its apps and cloud platforms.

The company is budgeting nearly nine billion pounds for AI chips despite strict US export rules. A potential trial purchase of Nvidia H200 hardware could expand its computing capacity if wider access is approved for Chinese firms.

Rivals in the US continue to outspend ByteDance, with large tech groups pouring hundreds of billions into data centres. Chinese platforms face tighter limits and are developing models that run efficiently with fewer resources.

ByteDance’s consumer AI ecosystem keeps accelerating, led by its Doubao chatbot and growing cloud business. Private ownership gives the firm flexibility to invest aggressively while placing AI at the heart of its long-term strategy.

Florida moves ahead with new AI Bill of Rights

Florida lawmakers are preparing a sweeping AI Bill of Rights as political debates intensify. Senator Tom Leek introduced a proposal to provide residents with clearer safeguards while regulating how firms utilise advanced systems across the state.

The plan outlines parental control over minors’ interactions with AI and requires disclosure when people engage with automated systems. It also sets boundaries on political advertising created with AI and restricts state contracts with suppliers linked to countries of concern.

Governor Ron DeSantis maintains Florida can advance its agenda despite federal attempts to curb state-level AI rules. He argues the state has the authority to defend consumers while managing the rising costs of new data centre developments.

Democratic lawmakers have raised concerns about young users forming harmful online bonds with AI companions, prompting calls for stronger protections. The legislation now forms part of a broader clash over online safety, privacy rights and fast-growing AI industries.

Nvidia seeks China market access as US eases AI chip restrictions

The US tech giant Nvidia has largely remained shut out of China’s market for advanced AI chips, as US export controls have restricted sales due to national security concerns.

High-performance processors such as the H100 and H200 were barred, forcing Nvidia to develop downgraded alternatives tailored for Chinese customers instead of flagship products.

A shift in policy emerged after President Donald Trump announced that H200 chip sales to China could proceed following a licensing review and a proposed 25% fee. The decision reopened a limited pathway for exporting advanced US AI hardware, subject to regulatory approval in both Washington and Beijing.

If authorised, the H200 shipments would represent the most powerful US-made AI chips permitted in China since restrictions were introduced. The move could help Nvidia monetise existing H200 inventory while easing pressure on its China business as it transitions towards newer Blackwell chips.

Strategically, the decision may slow China’s push for AI chip self-sufficiency, as domestic alternatives still lag behind Nvidia’s technology.

At the same time, the policy highlights a transactional approach to export controls, raising uncertainty over long-term US efforts to contain China’s technological rise.

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Private surveillance raises concerns in New Orleans

New Orleans has become the first US city to use real-time facial recognition through a privately operated system. The technology flags wanted individuals as they pass cameras, with alerts sent directly to police despite ongoing disputes among city officials.

A local non-profit runs the network independently and sets its own guardrails for police cooperation. Advocates claim the arrangement limits bureaucracy, while critics argue it bypasses vital public oversight and privacy protections.

Debate over facial recognition has intensified nationwide as communities question accuracy, fairness and civil liberties. New Orleans now represents a major test case for how such tools may develop without clear government regulation.

Officials remain divided over the long-term consequences, while campaigners warn of creeping surveillance risks. Residents are likely to face years of uncertainty as policies evolve and private systems grow more influential.

US TikTok investors face uncertainty as sale delayed again

Investors keen to buy TikTok’s US operations say they are left waiting as the sale is delayed again. ByteDance, TikTok’s Chinese owner, was required to sell or be blocked under a 2024 law.

US President Donald Trump seems set to extend the deadline for a fifth time. Billionaires, including Frank McCourt, Alexis Ohanian and Kevin O’Leary, are awaiting approval.

Investor McCourt confirmed his group has raised the necessary capital and is prepared to move forward once the sale is allowed. National security concerns remain the main reason for the ongoing delays.

Project Liberty, led by McCourt, plans to operate TikTok without Chinese technology, including the recommendation algorithm. The group has developed alternative systems to run the platform independently.

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the generative AI era.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

Trump signs order blocking individual US states from enforcing AI rules

US President Donald Trump has signed an executive order aimed at preventing individual US states from enforcing their own AI regulations, arguing that AI oversight should be handled at the federal level. Speaking at the White House, Trump said a single national framework would avoid fragmented rules, while his AI adviser, David Sacks, added that the administration would push back against what it views as overly burdensome state laws, except for measures focused on child safety.

The move is welcomed by major technology companies, which have long warned that a patchwork of state-level regulations could slow innovation and weaken the US position in the global AI race, particularly in comparison to China. Industry groups say a unified national approach would provide clarity for companies investing billions of dollars in AI development and help maintain US leadership in the sector.

However, the executive order has sparked strong backlash from several states, most notably California. Governor Gavin Newsom criticised the decision as an attempt to undermine state protections, pointing to California’s own AI law that requires large developers to address potential risks posed by their models.

Other states, including New York and Colorado, have also enacted AI regulations, arguing that state action is necessary in the absence of comprehensive federal safeguards.

Critics warn that blocking state laws could leave consumers exposed if federal rules are weak or slow to emerge, while some legal experts caution that a national framework will only be effective if it offers meaningful protections. Despite these concerns, tech lobby groups have praised the order and expressed readiness to work with the White House and Congress to establish nationwide AI standards.