Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media outlets for AI access to their content. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ensuring content is ‘respected and paid for’.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. The company frames Microsoft’s marketplace as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is pondering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

National internet shutdown grips Tanzania during contested vote

Tanzania is facing a nationwide internet shutdown that began as citizens headed to the polls in a tense general election. Connectivity across the country has been severely disrupted, with platforms like X (formerly Twitter), WhatsApp, and Instagram rendered inaccessible.

The blackout, confirmed by monitoring group NetBlocks, has left journalists, election observers, and citizens in Tanzania struggling to share updates as reports of protests and unrest spread. The government has reportedly deployed the army, deepening concerns over efforts to control information during this volatile period.

The move mirrors a growing global pattern where authorities restrict internet access during elections and political crises to curb dissent and manage narratives. Amnesty International has condemned the shutdown, warning that it risks escalating tensions and violating citizens’ right to information.

‘Authorities must ensure full internet access and allow free reporting before, during, and after the elections,’ said Tigere Chagutah, Amnesty’s Regional Director for East and Southern Africa.

Tanzania’s blackout follows similar crackdowns elsewhere, such as Afghanistan’s total internet shutdown, which left citizens completely cut off from the world.

These incidents underscore the fragility of digital freedoms in times of political turmoil. When governments ‘pull the plug,’ societies lose not only communication but also trust, transparency, and the fundamental ability to hold power to account.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN report shows human cost of Afghan telecommunications shutdowns

A new UN briefing highlights the severe human rights effects of recent telecommunications shutdowns in Afghanistan. The 48-hour nationwide disruption hindered access to healthcare, emergency services, banking, education, and daily communications, worsening the hardships already faced by the population.

Women and girls were disproportionately affected, with restricted contact with guardians preventing travel for essential activities and limiting access to online education. Health workers reported preventable deaths due to the inability to call for emergency assistance, while humanitarian aid was delayed in regions still recovering from natural disasters and involuntary returns from neighbouring countries.

The UN stresses that such shutdowns violate rights to freedom of expression and access to information, and urges authorities to ensure any communication restrictions comply with international human rights standards. Rapid restoration of services and legally justified measures are essential to protect the Afghan population.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech ramps up Brussels lobbying as EU considers easing digital rules

Tech firms now spend a record €151 million a year on lobbying at EU institutions, up from €113 million in 2023, according to transparency-register analysis by Corporate Europe Observatory and LobbyControl.

Spending is concentrated among US giants. The ten biggest tech companies, including Meta, Microsoft, Apple, Amazon, Qualcomm and Google, together outspend the top ten in pharma, finance and automotive. Meta leads with a budget above €10 million.

An estimated 890 full-time lobbyists now work to influence tech policy in Brussels, up from 699 in 2023, with 437 holding European Parliament access badges. In the first half of 2025, companies declared 146 meetings with the Commission and 232 with MEPs, with artificial intelligence regulation and the industry code of practice frequently on the agenda.

As industry pushes back on the Digital Markets Act and Digital Services Act and the Commission explores the ‘simplification’ of EU rulebooks, lobbying transparency campaigners fear a rollback of the progress made in regulating the digital sector. Companies, by contrast, argue that lobbying helps lawmakers grasp complex markets and assess impacts on innovation and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia rules out AI copyright exemption

The Albanese Government has confirmed that it will not introduce a Text and Data Mining Exception in Australia’s copyright law, reinforcing its commitment to protecting local creators.

The decision follows calls from the technology sector for an exemption allowing AI developers to use copyrighted material without permission or payment.

Attorney-General Michelle Rowland said the Government aims to support innovation and creativity but will not weaken existing copyright protections. The Government plans to explore fair licensing options to support AI innovation while ensuring creators are paid fairly.

The Copyright and AI Reference Group will focus on fair AI use, more explicit copyright rules for AI works, and simpler enforcement through a possible small claims forum.

The Government said Australia must prepare for AI-related copyright challenges while keeping strong protections for creators. Collaboration between the technology and creative sectors will be essential to ensure that AI development benefits everyone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk exploring how human-centred and humanistic language in AI policy is widespread, but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or the protection of civil liberties, but often do so under deregulatory frameworks or with only voluntary oversight.

For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are formally placed at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube settles Donald Trump lawsuit over account suspension for $24.5 million

YouTube has agreed to a $24.5 million settlement to resolve a lawsuit filed by President Donald Trump, stemming from the platform’s decision to suspend his account after the 6 January 2021 Capitol riot.

The lawsuit was part of a broader legal push by Trump against major tech companies over what he calls politically motivated censorship.

As part of the deal, YouTube will donate $22 million to the Trust for the National Mall on Trump’s behalf, funding a new $200 million White House ballroom project. Another $2.5 million will go to co-plaintiffs, including the American Conservative Union and author Naomi Wolf.

The settlement includes no admission of wrongdoing by YouTube and was intended to avoid further legal costs. The move follows similar multimillion-dollar settlements by Meta and X, which also suspended Trump’s accounts post-January 6.

Critics argue the settlement signals a retreat from consistent content moderation. Media scholar Timothy Koskie warned it sets a troubling precedent for global digital governance and selective enforcement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Business Insider says journalists may use AI to draft stories

Business Insider has issued a memo saying journalists may use AI to help draft stories, while making it clear that authors remain fully responsible for what is published under their names.

The guidelines define what kinds of AI use are permitted, such as assisting with research or generating draft text, but stress that final edits, fact-checking, and the author’s voice must be preserved.

Some staff welcomed the clarity after months of uncertainty, saying the new policy could help speed up routine work. Others raised concerns about preserving editorial quality and resisting over-reliance on AI for creativity or original insight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!