Virginia’s data centre boom divides residents and industry

Loudoun County in Virginia, known as Data Center Alley, now hosts nearly 200 data centres powering much of the world’s internet and AI infrastructure. Their growth has brought vast economic benefits but stirred concerns about noise, pollution, and rising energy bills for nearby residents.

The facilities occupy about 3% of the county’s land yet generate 40% of its tax revenue. Locals say the constant humming and industrial sprawl have driven away wildlife and inflated electricity costs, which have surged by over 250% in five years.

Despite opposition, new data centre projects in the US and worldwide continue to receive state support. The industry contributes $5.5 billion annually to Virginia’s economy and sustains around 74,000 jobs. President Trump’s administration has also pledged to accelerate permitting.

Residents like Emily Kasabian argue the expansion is eroding community life, replacing trees with concrete and machinery to fuel AI. Activists are now lobbying for construction pauses, warning that unchecked development threatens to transform affluent suburbs beyond recognition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity estates push back on Sora as app surges to No. 1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spreads confusion, leaving viewers with a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned of a growing fog of doubt as realistic fakes multiply: ordinary people are placed in deceptive scenarios daily, while bad actors exploit plausible deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple may have to pay $1.9B in damages to UK consumers over unfair App Store fees

Apple could face damages of up to £1.5 billion ($1.9 billion) after a British court ruled it overcharged consumers by imposing unfair commission fees on app developers.

The Competition Appeal Tribunal found that Apple abused its dominant position between 2015 and 2020 by charging excessive commissions of up to 30% on App Store purchases and in-app payments. Judges ruled that the company’s fees should not have exceeded 17.5% for app sales and 10% for in-app transactions, and concluded that half of the inflated costs were passed on to consumers.

The total damages, to be set next month, would compensate users who paid higher prices for apps, subscriptions and digital purchases. Apple said it will appeal, arguing that the App Store ‘helps developers succeed and provides consumers with a safe and trusted place to discover apps and make payments’.

The ruling adds to Apple’s growing list of competition battles across Europe, where the company continues to resist tighter antitrust regulation. Courts in the Netherlands and Belgium have accused the company of blocking alternative payment methods and charging excessive commissions, while similar lawsuits are ongoing in the United States.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 60 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

Supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

The EU, Canada, and Russia signed in Hanoi, underscoring geopolitical buy-in. Vietnam, as host, drew scrutiny over censorship and arrests; officials there cast the treaty as a step toward resilience and stature.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU MiCA greenlight turns Blockchain.com’s Malta base into hub

Blockchain.com received a MiCA licence from Malta’s Financial Services Authority, enabling passported crypto services across all 30 EEA countries under one EU framework. Leaders called it a step toward safer, consistent access.

Malta becomes the hub for scaling operations, with the company citing regulatory clarity and cross-border support. Under the authorisation, teams will expand secure custody and wallets, enterprise treasury tools, and localised products for EU consumers.

A unified licence streamlines go-to-market and accelerates launches in priority jurisdictions. Institutions gain clearer expectations on safeguarding, disclosures, and governance, while retail users benefit from standardised protections and stronger redress.

Fiorentina D’Amore, who brings deep fintech experience, will lead the EU strategy. Plans include phased rollouts, supervisor engagement, and controls aligned with MiCA’s conduct and prudential requirements across key markets.

Since 2011, Blockchain.com says it has processed over one trillion dollars and serves more than 90 million wallets. Expansion under MiCA adds scalable infrastructure, robust custody, and clearer disclosures for users and institutions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data, preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA proceedings to date; none has yet concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft faces legal action for alleged Copilot subscription deception

The Australian Competition and Consumer Commission (ACCC) has launched Federal Court proceedings against Microsoft Australia and its parent company. The regulator alleges Microsoft misled 2.7 million Australians over Microsoft 365 subscription changes after adding its AI assistant, Copilot.

The ACCC says Microsoft told subscribers to accept higher-priced Copilot plans or cancel, without mentioning the cheaper Classic plan that kept original features. Customers could only discover this option by starting the cancellation process.

ACCC Chair Gina Cass-Gottlieb said Microsoft deliberately concealed the Classic plan to push users onto more expensive subscriptions. She noted that Microsoft 365 is essential for many and that customers deserve transparent information to make informed choices.

The regulator believes many users would have stayed with their original plans if they had known all the options.

The ACCC is seeking penalties, injunctions, and redress, claiming millions faced financial harm from higher renewal charges. The case underscores the regulator’s focus on protecting consumers in the digital economy and ensuring fair practices by major technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot Mode turns Edge into an active assistant

Microsoft says Edge should work with you, not just wait for clicks. Copilot Mode adds chat-first tabs, multi-tab reasoning, and a dynamic pane for in-context help, letting users plan trips, compare options, and generate schedules without tab chaos.

Microsoft Copilot now resumes past sessions, so projects pick up exactly where you stopped. It can execute multi-step actions, like building walking tours, end-to-end. Optional history signals improve suggestions and speed up research-heavy tasks.

Voice controls handle quick actions and deeper chores with conversational prompts. Ask Copilot to open pages, summarise threads, or unsubscribe you from promo emails. Reservations and other multi-step chores are rolling out next.

Journeys groups past browsing into topic timelines for fast re-entry, with explicit opt-in. Privacy controls are prominent: clear cues when Copilot listens, acts, or views. You can toggle Copilot Mode off anytime.

Security features round things out: local AI blocks scareware overlays by default, and built-in password tools create, store, and monitor credentials. Copilot Mode is available in all Copilot markets on Edge desktop, with mobile support coming soon.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft revives friendly AI helper with Mico

Microsoft has unveiled a new AI companion called Mico, designed to replace the infamous Clippy as the friendly face of its Copilot assistant. The animated avatar, shaped like a glowing flame or blob, reacts emotionally and visually during conversations with users.

Executives said Mico aims to balance warmth and utility, offering human-like cues without becoming intrusive. Unlike Clippy, the character can easily be switched off and is intended to feel supportive rather than persistent or overly personal.

Mico’s launch reflects growing debate about personality in AI assistants as tech firms navigate ethical concerns. Microsoft stressed that its focus remains on productivity and safety, distancing itself from flirtatious or emotionally manipulative AI designs seen elsewhere.

The character will first appear in US versions of Copilot on laptops and mobile apps. Microsoft also revealed an AI tutoring mode for students, reinforcing its efforts to create more educational and responsibly designed AI experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Teen protections already include PG-13-guided responses and restrictions on sensitive discussions, such as self-harm or eating disorders. The company said it also uses AI detection systems to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!