Florida moves ahead with new AI Bill of Rights

Florida lawmakers are preparing a sweeping AI Bill of Rights as political debates intensify. Senator Tom Leek introduced a proposal to provide residents with clearer safeguards while regulating how firms utilise advanced systems across the state.

The plan outlines parental control over minors’ interactions with AI and requires disclosure when people engage with automated systems. It also sets boundaries on political advertising created with AI and restricts state contracts with suppliers linked to countries of concern.

Governor Ron DeSantis maintains Florida can advance its agenda despite federal attempts to curb state-level AI rules. He argues the state has the authority to defend consumers while managing the rising costs of new data centre developments.

Democratic lawmakers have raised concerns about young users forming harmful online bonds with AI companions, prompting calls for stronger protections. The legislation now forms part of a broader clash over online safety, privacy rights and fast-growing AI industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia seeks China market access as US eases AI chip restrictions

The US tech giant NVIDIA has largely remained shut out of China’s market for advanced AI chips, as US export controls have restricted sales due to national security concerns.

High-performance processors such as the H100 and H200 were barred, forcing NVIDIA to develop downgraded alternatives tailored for Chinese customers instead of flagship products.

A shift in policy emerged after President Donald Trump announced that H200 chip sales to China could proceed following a licensing review and a proposed 25% fee. The decision reopened a limited pathway for exporting advanced US AI hardware, subject to regulatory approval in both Washington and Beijing.

If authorised, the H200 shipments would represent the most powerful US-made AI chips permitted in China since restrictions were introduced. The move could help NVIDIA monetise existing H200 inventory while easing pressure on its China business as it transitions towards newer Blackwell chips.

Strategically, the decision may slow China’s push for AI chip self-sufficiency, as domestic alternatives still lag behind NVIDIA’s technology.

At the same time, the policy highlights a transactional approach to export controls, raising uncertainty over long-term US efforts to contain China’s technological rise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Private surveillance raises concerns in New Orleans

New Orleans has become the first US city to deploy real-time facial recognition through a privately operated system. The technology flags wanted individuals as they pass cameras, sending alerts directly to police despite ongoing disputes among city officials.

A local non-profit runs the network independently and sets its own guardrails for police cooperation. Advocates claim the arrangement limits bureaucracy, while critics argue it bypasses vital public oversight and privacy protections.

Debate over facial recognition has intensified nationwide as communities question accuracy, fairness and civil liberties. New Orleans now represents a major test case for how such tools may develop without clear government regulation.

Officials remain divided over the long-term consequences, while campaigners warn of creeping surveillance risks. Residents are likely to face years of uncertainty as policies evolve and private systems grow more influential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US TikTok investors face uncertainty as sale delayed again

Investors keen to buy TikTok’s US operations say they are left waiting as the sale is delayed again. ByteDance, TikTok’s Chinese owner, is required under a 2024 law to sell the platform or face a US ban.

US President Donald Trump seems set to extend the deadline for a fifth time. Billionaires, including Frank McCourt, Alexis Ohanian and Kevin O’Leary, are awaiting approval.

Investor McCourt confirmed his group has raised the necessary capital and is prepared to move forward once the sale is allowed. National security concerns remain the main reason for the ongoing delays.

Project Liberty, led by McCourt, plans to operate TikTok without Chinese technology, including the recommendation algorithm. The group has developed alternative systems to run the platform independently.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the generative AI era.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Trump signs order blocking individual US states from enforcing AI rules

US President Donald Trump has signed an executive order aimed at preventing individual US states from enforcing their own AI regulations, arguing that AI oversight should be handled at the federal level. Speaking at the White House, Trump said a single national framework would avoid fragmented rules, while his AI adviser, David Sacks, added that the administration would push back against what it views as overly burdensome state laws, except for measures focused on child safety.

The move is welcomed by major technology companies, which have long warned that a patchwork of state-level regulations could slow innovation and weaken the US position in the global AI race, particularly in comparison to China. Industry groups say a unified national approach would provide clarity for companies investing billions of dollars in AI development and help maintain US leadership in the sector.

However, the executive order has sparked strong backlash from several states, most notably California. Governor Gavin Newsom criticised the decision as an attempt to undermine state protections, pointing to California’s own AI law that requires large developers to address potential risks posed by their models.

Other states, including New York and Colorado, have also enacted AI regulations, arguing that state action is necessary in the absence of comprehensive federal safeguards.

Critics warn that blocking state laws could leave consumers exposed if federal rules are weak or slow to emerge, while some legal experts caution that a national framework will only be effective if it offers meaningful protections. Despite these concerns, tech lobby groups have praised the order and expressed readiness to work with the White House and Congress to establish nationwide AI standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US approaches universal 5G as global adoption surges

New data from Omdia and 5G Americas showed rapid global growth in wireless connectivity during the third quarter of 2025, with nearly three billion 5G connections worldwide.

North America remained the most advanced region in terms of adoption, with 5G connections approaching the size of its population.

The US alone recorded 341 million 5G connections, one of the highest per capita adoption rates in the world and far above the global average.

Analysts noted that strong device availability and sustained investment continue to reinforce the region’s leadership. Enhanced features such as improved uplink performance and integrated sensing are expected to accelerate the shift towards early 5G-Advanced capabilities.

Growth in cellular IoT also remained robust. North America supported more than 270 million connected devices and is forecast to reach nearly half a billion by 2030 as sectors such as manufacturing and utilities expand their use of connected systems.

AI is becoming central to these deployments by managing traffic, automating operations and enabling smarter industrial applications.

Future adoption is set to intensify, with global 5G connections projected to surpass 8.6 billion by 2030.

Rising interest in fixed wireless access is driving multi-device usage, offering households and small firms high-speed connectivity where fibre networks remain patchy.

Globally, fixed wireless access has reached more than 78 million connections, with strong annual growth. Analysts believe that expanding infrastructure will support demand for low-latency connectivity, and the addition of satellite-based systems is expected to extend coverage to remote locations.

By mid-November 2025, operators had launched 379 commercial 5G networks worldwide, including seventeen in North America. A similar number of LTE networks operated across the region.

Industry observers said that expanding terrestrial and non-terrestrial networks will form a layered architecture that strengthens resilience, supports emergency response and improves service continuity across land, sea and air.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Three in ten US teens now use AI chatbots every day, survey finds

According to new data from the Pew Research Center, roughly 64% of US teens (aged 13–17) say they have used an AI chatbot, and about three in ten (≈ 30%) report daily use. Among teens who have used a chatbot, the leading choice is ChatGPT (used by 59%), followed by Gemini (23%) and Meta AI (20%).

The widespread adoption raises growing safety and welfare concerns. As teenagers increasingly rely on AI for information, companionship or emotional support, critics point to potential risks, including exposure to biased content, misinformation, or emotionally manipulative interactions, particularly among vulnerable youth.

Legal action has already followed, with the families of at least two minors suing AI companies over allegedly harmful advice from chatbots.

Demographic patterns reveal that Black and Hispanic teens report higher daily usage rates (around 33–35%) than their White peers (≈ 22%). Daily use is also more common among older teens (15–17) than younger ones.

For policymakers and digital governance stakeholders, the findings add urgency to calls for AI-specific safeguarding frameworks, especially where young people are concerned. As AI tools become embedded in adolescent life, ensuring transparency, responsible design, and robust oversight will be critical to preventing unintended harms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!