US warns of rising senior health fraud as AI lifts scam sophistication

AI-driven fraud schemes are on the rise across the US health system, exposing older adults to increasing financial and personal risks. Officials say tens of billions in losses have already been uncovered this year. High medical use and limited digital literacy leave seniors particularly vulnerable.

Criminals rely on schemes such as phantom billing, upcoding and identity theft using Medicare numbers. Fraud spans home health, hospice care and medical equipment services. Authorities warn that the ageing population will deepen exposure and increase long-term harm.

AI has made scams harder to detect by enabling cloned voices, deepfakes and convincing documents. The tools help impersonate providers and personalise attacks at scale. Even cautious seniors may struggle to recognise false calls or messages.

Investigators are also using AI to counter fraud by spotting abnormal billing, scanning records for inconsistencies and flagging high-risk providers. Cross-checking data across clinics and pharmacies helps identify duplicate claims. Automated prompts can alert users to suspicious contacts.
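The cross-checking described above can be sketched as a simple duplicate-claim scan. This is a hypothetical illustration only, not any agency's actual system; the field names and sample records are invented for the example.

```python
from collections import defaultdict

# Hypothetical claim records; in practice these would come from
# separate clinic and pharmacy billing feeds.
claims = [
    {"patient": "A123", "procedure": "99213", "date": "2025-03-01", "provider": "Clinic X"},
    {"patient": "A123", "procedure": "99213", "date": "2025-03-01", "provider": "Clinic Y"},
    {"patient": "B456", "procedure": "J3301", "date": "2025-03-02", "provider": "Pharmacy Z"},
]

def find_duplicate_claims(claims):
    """Group claims by (patient, procedure, date) and flag any key
    billed by more than one provider as a potential duplicate."""
    groups = defaultdict(list)
    for claim in claims:
        key = (claim["patient"], claim["procedure"], claim["date"])
        groups[key].append(claim["provider"])
    return {key: providers for key, providers in groups.items()
            if len(set(providers)) > 1}

duplicates = find_duplicate_claims(claims)
for (patient, procedure, date), providers in duplicates.items():
    print(f"Patient {patient}: procedure {procedure} on {date} billed by {providers}")
```

Real systems layer statistical anomaly detection and provider risk scoring on top of rule-based checks like this, but the core idea of joining records across sources to surface the same service billed twice is the same.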

Experts urge seniors to monitor statements, ignore unsolicited calls and avoid clicking unfamiliar links. They should verify official numbers, protect Medicare details and use strong login security. Suspicious activity should be reported to Medicare or to local fraud response teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI Academy supports small firms with AI training

OpenAI Academy is running a US nationwide Small Business AI Jam for more than 1,000 owners. Workshops in San Francisco, New York, Detroit, Houston and Miami give practical help using AI to handle everyday tasks.

Participants from restaurants, retailers, professional services and creative firms work alongside mentors to build tailored AI tools. Typical projects include marketing assistants, customer communication helpers and organisers for bookings, stock or paperwork. Everyone leaves with at least one ready-to-use workflow.

A survey for OpenAI found around half of small business leaders want staff comfortable with AI. About sixty percent expect clear efficiency gains when employees have those skills, from faster content writing to smoother operations.

The programme is available only in the US; owners gain access to an online academy hub before and after the in-person events. Follow-up support includes a virtual jam on 4 December, office hours, and links to an AI for Main Street certification track and jobs platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech groups welcome EU reforms as privacy advocates warn of retreat

The EU has unveiled plans to scale back certain aspects of its AI and data privacy rules to revive innovation and alleviate regulatory pressure on businesses. The Digital Omnibus package delays stricter oversight for high-risk AI until 2027 and permits the use of anonymised personal data for model training.

The reforms amend the AI Act and several digital laws, cutting cookie pop-ups and simplifying documentation requirements for smaller firms. EU tech chief Henna Virkkunen says the aim is to boost competitiveness by removing layers of rigid regulation that have hindered start-ups and SMEs.

US tech lobby groups welcomed the overall direction. Still, they criticised the package for not going far enough, particularly on compute thresholds for systemic-risk AI and copyright provisions with cross-border effects. They argue the reforms only partially address industry concerns.

Privacy and digital rights advocates sharply opposed the changes, warning they represent a significant retreat from Europe’s rights-centric regulatory model. Groups including NOYB accused Brussels of undermining hard-won protections in favour of Big Tech interests.

Legal scholars say the proposals could shift Europe closer to a more permissive, industry-driven approach to AI and data use. They warn that the reforms may dilute the EU’s global reputation as a standard-setter for digital rights, just as the world seeks alternatives to US-style regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Foxconn and OpenAI strengthen US AI manufacturing

OpenAI has formed a new partnership with Foxconn to prepare US manufacturing for a fresh generation of AI infrastructure hardware.

The agreement centres on design support and early evaluation instead of immediate purchase commitments, which gives OpenAI a path to influence development while Foxconn builds readiness inside American facilities.

Both companies expect rapid advances in AI capability to demand a new class of physical infrastructure. They plan to co-design several generations of data centre racks that can keep pace with model development instead of relying on slower single-cycle upgrades.

OpenAI will share insight into future hardware needs while Foxconn provides engineering knowledge and large-scale manufacturing capacity across the US.

A key aim is to strengthen domestic supply chains by improving rack architecture, widening access to domestic chip suppliers and expanding local testing and assembly. Foxconn intends to produce essential data centre components in the US, including cabling, networking, cooling and power systems.

The companies present the effort as a way to support faster deployment, create more resilient infrastructure and bring economic benefits to American workers.

OpenAI frames the partnership as part of a broader push to ensure that critical AI infrastructure is built within the US instead of abroad. Company leaders argue that a robust domestic supply chain will support American leadership in AI and keep the benefits widely shared across the economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta wins antitrust case over monopoly claims

Meta has defeated a major antitrust challenge after a US federal judge ruled it does not currently hold monopoly power in social networking. The decision spares the company from being forced to separate Instagram and WhatsApp, which regulators had argued were acquired to suppress competition.

The judge found the Federal Trade Commission failed to prove Meta maintains present-day dominance, noting that the market has been reshaped by rivals such as TikTok. Meta argued it now faces intense competition across mobile platforms as user behaviour shifts rapidly.

FTC lawyers revisited internal emails linked to Meta’s past acquisitions, but the ruling emphasised that the case required proof of ongoing violations.

Analysts say the outcome contrasts sharply with recent decisions against Google in search and advertising, signalling mixed fortunes for large tech firms.

Industry observers note that Meta still faces substantial regulatory pressure, including upcoming US trials regarding children’s mental health and questions about its heavy investment in AI.

The company welcomed the ruling and stated that it intends to continue developing products within a competitive market framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fight over state AI authority heats up in US Congress

US House Republicans are mounting a new effort to block individual states from regulating AI, reviving a proposal that the Senate overwhelmingly rejected just four months ago. Their push aligns with President Donald Trump’s call for a single federal AI standard, which he argues is necessary to avoid a ‘patchwork’ of state-level rules that he claims hinder economic growth and fuel what he described as ‘woke AI.’

House Majority Leader Steve Scalise is now attempting to insert the measure into the National Defence Authorisation Act, a must-pass annual defence spending bill expected to be finalised in the coming weeks. If successful, the move would place a moratorium on state-level AI regulation, effectively ending the states’ current role as the primary rule-setters on issues ranging from child safety and algorithmic fairness to workforce impacts.

The proposal faces significant resistance, including from within the Republican Party. Lawmakers who blocked the earlier attempt in July warned that stripping states of their authority could weaken protections in areas such as copyright, child safety, and political speech.

Critics, such as Senator Marsha Blackburn and Florida Governor Ron DeSantis, argue that the measure would amount to a handout to Big Tech and leave states unable to guard against the use of predatory or intrusive AI.

Congressional leaders hope to reach a deal before the Thanksgiving recess, but the ultimate fate of the measure remains uncertain. Any version of the moratorium would still need bipartisan support in the Senate, where most legislation requires 60 votes to advance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand strains electrical grids

Microsoft CEO Satya Nadella recently said that the biggest hurdle to deploying new AI solutions is now electrical power, not chip supply. The massive energy requirements of running large language models (LLMs) have created a critical bottleneck for major cloud providers.

Nadella specified that Microsoft currently has a ‘bunch of chips sitting in inventory’ that cannot be plugged in and utilised. The problem is a lack of ‘warm shells’, meaning data centre buildings that are fully equipped with the necessary power and cooling capacity.

The escalating power requirements of AI infrastructure are placing extreme pressure on utility grids and capacity. Projections from the Lawrence Berkeley National Laboratory indicate that US data centres could consume up to 12 percent of the nation’s total electricity by 2028.

The disclosure should serve as a warning to investors to weigh infrastructure challenges alongside AI’s technological promise. This energy limitation could create a temporary drag on the sector, potentially slowing the massive projected returns on the $5 trillion investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Electricity bills surge as data centres drive up costs across the US

Massive new data centres, built to power the AI industry, are being blamed for a dramatic rise in electricity costs across the US. Residential utility bills in states with high concentrations of these facilities, such as Virginia and Illinois, are surging far beyond the national average.

The escalating energy demand has caused a major capacity crisis on large grids like the PJM Interconnection, with data centre load identified as the primary reason for a multi-billion dollar spike in future power costs. These extraordinary increases are being passed directly to consumers, making affordability a central issue for politicians ahead of upcoming elections.

Lawmakers are now targeting tech companies and AI labs, promising to challenge what they describe as ‘sweetheart deals’ and to make the firms contribute more to the infrastructure they rely upon.

Although rising costs are also attributed to an ageing grid and inflation, experts warn that utility bills are unlikely to decrease this decade due to the unprecedented demand from rapid data centre expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Old laws now target modern tracking technology

Class-action privacy litigation continues to grow in frequency, repurposing older laws to address modern data tracking technologies. Recent high-profile lawsuits have applied the California Invasion of Privacy Act and the Video Privacy Protection Act.

A unanimous jury recently found that Meta Platforms violated CIPA Section 632 by eavesdropping on users’ confidential communications without consent; the verdict is now under appeal. The court ruled that Meta intentionally used its SDK within a sexual health app, Flo, to intercept sensitive real-time user inputs.

That judgement suggests an electronic device under the statute need not be separate physical hardware, with a user’s own phone qualifying as the requisite device. The legal success in these cases highlights a significant, rising risk for all companies that use tracking pixels and software development kits (SDKs).

Separately, the VPPA has found new power against tracking pixels in the case of Jancik v. WebMD concerning video-viewing data. The court held that a consumer need not pay for a video service but can subscribe by simply exchanging their email address for a newsletter.

Companies must ensure their privacy policies clearly disclose all such tracking conduct to obtain explicit, valid consent. The courts are taking real-time data interception seriously, noting intentionality may be implied when a firm fails to stem the flow of sensitive personally identifiable information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US states weigh VPN restrictions to protect minors online

US legislators in Wisconsin and Michigan are weighing proposals that would restrict the use of VPNs to access sites deemed harmful to minors. The bills build on age-verification rules for websites hosting sexual content, which lawmakers say are too easy to bypass when users connect via VPNs.

In Wisconsin, a bill that has already passed the State Assembly would require adult sites to both verify age and block visitors using VPNs, potentially making the state the first in the US to outlaw VPN use for accessing such content if the Senate approves it.

In Michigan, similar legislation would go further by obliging internet providers to monitor and block VPN connections, though that proposal has yet to advance.

Digital rights groups, including the Electronic Frontier Foundation, argue that the approach would erode privacy for everyone, not just minors.

They warn that blanket restrictions would affect businesses, students, journalists and abuse survivors who rely on VPNs for security, calling the measures ‘surveillance dressed up as safety’ and urging lawmakers instead to improve education, parental tools and support for safer online environments.

The debate comes as several European countries, including France, Italy and the UK, have introduced age-verification rules for pornography sites, but none have proposed banning VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!