Google lays off over 200 AI contractors amid union tensions

The US tech giant Google has dismissed over 200 contractors working on its Gemini chatbot and AI Overviews tool. The move has sparked criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many of the affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems before being abruptly let go.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PDGrapher AI tool aims to speed up precision medicine development

Harvard Medical School researchers have developed an AI tool that could transform drug discovery by identifying multiple drivers of disease and suggesting treatments to restore cells to a healthy state.

The model, called PDGrapher, utilises graph neural networks to map the relationships between genes, proteins, and cellular pathways, thereby predicting the most effective targets for reversing disease. Unlike traditional approaches that focus on a single protein, it considers multiple factors at once.
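To illustrate the general idea behind graph-based target ranking, here is a minimal toy sketch of one message-passing step over a small gene-interaction graph. The graph, gene names, features, and scoring rule are all invented for illustration and do not reflect PDGrapher’s actual architecture, data, or results:

```python
# Illustrative sketch only: a toy message-passing step over a small
# gene-interaction graph, ranking nodes as candidate intervention targets.
import numpy as np

# Hypothetical undirected gene-interaction graph
edges = [("GENE_A", "GENE_B"), ("GENE_B", "GENE_C"), ("GENE_C", "GENE_D")]
genes = sorted({g for e in edges for g in e})
idx = {g: i for i, g in enumerate(genes)}

# Adjacency matrix with self-loops, row-normalised (a common GNN choice)
n = len(genes)
A = np.eye(n)
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
A /= A.sum(axis=1, keepdims=True)

# Node features: e.g. expression change between diseased and treated cells
x = np.array([[0.9], [0.1], [0.7], [0.2]])

# One message-passing layer: each gene aggregates its neighbours' signal
h = np.tanh(A @ x)

# Rank genes by aggregated signal as candidate targets
ranking = sorted(genes, key=lambda g: -h[idx[g], 0])
print(ranking)
```

A real model like PDGrapher would stack many such layers, learn the weights from paired diseased/treated data, and operate on graphs with thousands of genes and proteins; this sketch only shows how neighbourhood structure can influence a node’s score.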

Trained on datasets of diseased cells before and after treatment, PDGrapher correctly predicted known drug targets and identified new candidates supported by emerging research. The model ranked potential targets up to 35% higher and worked 25 times faster than comparable tools.

Researchers are now applying PDGrapher to complex diseases such as Parkinson’s, Alzheimer’s, and various cancers, where single-target therapies often fail. By identifying combinations of targets, the tool can help overcome drug resistance and expedite treatment design.

Senior author Marinka Zitnik said the ultimate goal is to create a cellular ‘roadmap’ to guide therapy development and enable personalised treatments for patients. After further validation, PDGrapher could become a cornerstone in precision medicine.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNDP publishes digital participation guide to empower civic action

A newly published guide by People Powered and UNDP aims to help communities engage through inclusive, locally relevant digital participation platforms. Designed with local governments, civic groups, and organisations in mind, it highlights platforms that enable action-oriented civic engagement.

According to the UNDP, ‘the guide covers the latest trends, including the integration of AI features, and addresses challenges such as digital inclusion, data privacy, accessibility, and sustainability.’

The guide focuses on actively maintained, publicly available platforms, typically offered through cloud-based software (SaaS) models, and prioritises flexible, multi-purpose tools over single-use options. While recognising the dominance of platforms from wealthier countries, it makes a deliberate effort to feature case studies and tools from the Global Majority.

Political advocacy platforms, internal government tools, and issue-reporting apps are excluded to keep the focus on technologies that drive meaningful public participation. Lastly, the guide emphasises the importance of local context and community empowerment, encouraging a shift from passive input to meaningful public influence in governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of the IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rising data centre demand pushes utilities to invest

US electricity prices are rising as the energy demands of data centres surge, driven by the rapid growth of AI technologies. The average retail price per kilowatt-hour increased by 6.5% between May 2024 and May 2025, with some states experiencing significantly sharper increases.

Maine saw the sharpest rise in electricity prices at 36.3%, with Connecticut and Utah following closely behind. Utilities are passing on infrastructure costs, including new transmission lines, to consumers. In Northern Virginia, residents could face monthly bill increases of up to $37 by 2040.

Analysts warn that the shift to generative AI will lead to a 160% surge in energy use at data centres by 2030. Water use is also rising sharply, as Google reported its facilities consumed around 6 billion gallons in 2024 alone, amid intensifying global AI competition.

Tech giants are turning to alternative energy to keep pace. Google has announced plans to power data centres with small nuclear reactors through a partnership with Kairos Power, while Microsoft and Amazon are ramping up nuclear investments to secure long-term supply.

President Donald Trump has pledged more than $92 billion in AI and energy infrastructure investments, underlining Washington’s push to ensure the US remains competitive in the AI race despite mounting strain on the grid and water resources.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Educators question boundaries of plagiarism in AI era

As AI tools such as ChatGPT become more common among students, schools and colleges report that many educators now assume assignments completed at home involve AI. Educators say take-home writing tasks and traditional homework risk being devalued.

Teachers and students are confused over what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.

Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.

Faculty at institutions such as the University of California, Berkeley, and Carnegie Mellon have started to issue updated syllabus templates that spell out AI expectations, including bans, approvals, or partial allowances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans AI systems to monitor offenders and prevent crimes before they occur

Under its AI Action Plan, the UK government is expanding the use of AI across prisons, probation and courts to monitor offenders, assess risk and prevent crime before it occurs.

One key measure involves an AI violence prediction tool that uses factors such as an offender’s age, past violent incidents and institutional behaviour to identify those most likely to pose a risk.

These predictions will inform decisions to increase supervision or relocate prisoners within custody wings ahead of potential violence.

Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.

Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.

UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple notifies French users after commercial spyware threats surge

France’s national cybersecurity agency, CERT-FR, has confirmed that Apple issued another set of threat notifications on 3 September 2025. The alerts inform certain users that devices linked to their iCloud accounts may have been targeted by spyware.

These latest alerts mark this year’s fourth campaign, following earlier waves in March, April and June. Targeted individuals include journalists, activists, politicians, lawyers and senior officials.

CERT-FR says the attacks are highly sophisticated and involve mercenary spyware tools. Many intrusions appear to exploit zero-day or zero-click vulnerabilities, meaning devices can be compromised without any interaction from the victim.

Apple advises victims to preserve threat notifications, avoid altering device settings that could obscure forensic evidence, and contact authorities and cybersecurity specialists. Users are encouraged to enable features like Lockdown Mode and update devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB adopts guidelines on the interplay between DSA and GDPR

The European Data Protection Board (EDPB) has adopted its first guidelines on how the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR) work together. The aim is to clarify how the GDPR should be applied in contexts covered by the DSA.

Presented during the EDPB’s September plenary, the guidelines ensure consistent interpretation where the DSA involves personal data processing by online intermediaries like search engines and platforms. While enforcement of the DSA falls under authorities’ discretion, the EDPB’s input supports harmonised application across the EU’s evolving digital regulatory framework, including:

  • Notice-and-action mechanisms that allow individuals or entities to report illegal content,
  • Recommender systems used by online platforms to present content to users in a particular order or with particular prominence,
  • Provisions to protect minors’ privacy, safety, and security, including the prohibition of profiling-based advertising presented to them,
  • Transparency of advertising by online platforms, and
  • The prohibition of profiling-based advertising using special categories of data.

Following initial guidelines on the GDPR and DSA, the EDPB is now working with the European Commission on joint guidelines covering the interplay between the Digital Markets Act (DMA) and GDPR, as well as between the upcoming AI Act and the EU data protection laws. The aim is to ensure consistency and coherent safeguards across the evolving regulatory landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France pushes for nighttime social media curfews for teens

French lawmakers are calling for stricter regulations on teen social media use, including mandatory nighttime curfews, following a parliamentary report examining TikTok’s psychological impact on minors.

The 324-page report, published Thursday by a National Assembly Inquiry Commission, proposes that social media accounts for 15- to 18-year-olds be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues.

The report contains 43 recommendations, including greater funding for youth mental health services, awareness campaigns in schools, and a national ban on social media access for those under 15. Platforms with algorithmic recommendation systems, like TikTok, are specifically targeted.

Arthur Delaporte, the lead rapporteur and a socialist MP, also announced plans to refer TikTok to the Paris Public Prosecutor, accusing the platform of knowingly exposing minors to harmful content.

The report follows a December 2024 lawsuit filed by seven families who claim TikTok’s content contributed to their children’s suicides.

TikTok rejected the accusations, calling the report “misleading” and highlighting its safety features for minors.

The report urges France not to wait for EU-level legislation and instead to lead on national regulation. President Emmanuel Macron previously demanded an EU-wide ban on social media for under-15s.

However, the European Commission has said cultural differences make such a bloc-wide approach unfeasible.

Looking ahead, the report supports stronger obligations in the upcoming Digital Fairness Act, such as giving users greater control over content feeds and limiting algorithmic manipulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!