Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rising data centre demand pushes utilities to invest

US electricity prices are rising as the energy demands of data centres surge, driven by the rapid growth of AI technologies. The average retail price per kilowatt-hour increased by 6.5% between May 2024 and May 2025, with some states experiencing significantly sharper increases.

Maine saw the sharpest rise in electricity prices at 36.3%, with Connecticut and Utah following closely behind. Utilities are passing on infrastructure costs, including new transmission lines, to consumers. In Northern Virginia, residents could face monthly bill increases of up to $37 by 2040.

Analysts warn that the shift to generative AI will lead to a 160% surge in energy use at data centres by 2030. Water use is also rising sharply, as Google reported its facilities consumed around 6 billion gallons in 2024 alone, amid intensifying global AI competition.
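To make the 160% figure concrete, here is a minimal arithmetic sketch in Python; the 2024 baseline value is a hypothetical placeholder chosen for illustration, not a number from the article.

```python
# Rough illustration of what a 160% surge in data-centre energy use means.
# The baseline figure below is a hypothetical assumption, not taken from the article.
baseline_twh_2024 = 400              # assumed data-centre consumption in TWh per year
surge = 1.60                         # a 160% surge adds 160% on top of the baseline

projected_twh_2030 = baseline_twh_2024 * (1 + surge)
print(f"Projected 2030 demand: {projected_twh_2030:.0f} TWh/year")  # 1040 TWh/year
```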

Tech giants are turning to alternative energy to keep pace. Google has announced plans to power data centres with small nuclear reactors through a partnership with Kairos Power, while Microsoft and Amazon are ramping up nuclear investments to secure long-term supply.

President Donald Trump has pledged more than $92 billion in AI and energy infrastructure investments, underlining Washington’s push to ensure the US remains competitive in the AI race despite mounting strain on the grid and water resources.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Educators question boundaries of plagiarism in AI era

As AI tools such as ChatGPT become more common among students, schools and colleges report that many educators now assume assignments completed at home involve AI. They warn that take-home writing tasks and traditional homework risk being devalued.

Teachers and students are confused over what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.

Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.

Faculty at institutions such as the University of California, Berkeley, and Carnegie Mellon have started issuing updated syllabus templates that spell out AI expectations, including bans, approvals or partial allowances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans AI systems to monitor offenders and prevent crimes before they occur

Under its AI Action Plan, the UK government is expanding the use of AI across prisons, probation and courts to monitor offenders, assess risk and prevent crime before it occurs.

One key measure involves an AI violence prediction tool that uses factors like an offender’s age, past violent incidents and institutional behaviour to identify those most likely to pose a risk.

These predictions will inform decisions to increase supervision or relocate prisoners between custody wings ahead of potential violence.
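The article does not describe the tool’s internals. As a minimal sketch of how a risk score could be derived from the factors mentioned (age, past violent incidents, institutional behaviour), the Python example below fits a logistic regression on entirely hypothetical data; the feature names, values and model choice are assumptions, not details of the government system.

```python
# Minimal sketch of a violence-risk score from offender features.
# Features and training data are hypothetical; the article does not describe
# the actual model used by the UK government.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Columns: age, prior violent incidents, institutional behaviour score (0 = good, 10 = poor)
X_train = np.array([
    [22, 3, 8],
    [45, 0, 2],
    [31, 1, 5],
    [19, 4, 9],
    [56, 0, 1],
    [28, 2, 7],
])
y_train = np.array([1, 0, 0, 1, 0, 1])  # 1 = violent incident followed, 0 = none

model = LogisticRegression().fit(X_train, y_train)

new_offender = np.array([[24, 2, 6]])
risk = model.predict_proba(new_offender)[0, 1]
print(f"Estimated violence risk: {risk:.2f}")  # would feed supervision or relocation decisions
```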

Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.
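How the scanning works is not specified. A minimal sketch, assuming a simple keyword-and-pattern pass over exported message text that flags items for human review, might look like the following; the patterns and sample messages are purely illustrative.

```python
# Minimal sketch: flag messages matching simple patterns associated with coded or
# contraband-related language. Patterns are illustrative assumptions; a production
# system would rely on far richer models and vetted intelligence terms.
import re

SUSPICIOUS_PATTERNS = [
    r"\bdrop[- ]?off\b",
    r"\bpackage\b",
    r"\bburner\b",
    r"\b\d{10,}\b",          # long digit strings, e.g. unregistered phone numbers
]

def flag_messages(messages):
    """Return (index, message, matched_patterns) for messages worth human review."""
    flagged = []
    for i, text in enumerate(messages):
        hits = [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.IGNORECASE)]
        if hits:
            flagged.append((i, text, hits))
    return flagged

sample = ["Meet at the usual spot", "Burner arrives tmrw, package at drop off"]
for idx, msg, hits in flag_messages(sample):
    print(idx, hits, "->", msg)
```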

Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.
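As a rough illustration of the record-linkage step this implies, the sketch below groups records from different systems under a shared identity key built from a normalised name and date of birth; the field names and matching rule are assumptions for illustration only.

```python
# Minimal record-linkage sketch: group court, prison and probation records that
# share a normalised name and date of birth under one identity key.
# Field names and the matching rule are illustrative assumptions.
from collections import defaultdict

def identity_key(record):
    name = " ".join(record["name"].lower().split())
    return (name, record["date_of_birth"])

def merge_records(records):
    merged = defaultdict(list)
    for rec in records:
        merged[identity_key(rec)].append(rec)
    return merged

records = [
    {"source": "courts",    "name": "Jane  Doe", "date_of_birth": "1990-04-12", "case": "C-101"},
    {"source": "prisons",   "name": "jane doe",  "date_of_birth": "1990-04-12", "cell": "B-17"},
    {"source": "probation", "name": "Jane Doe",  "date_of_birth": "1990-04-12", "officer": "K. Smith"},
]

for key, recs in merge_records(records).items():
    print(key, "->", [r["source"] for r in recs])
```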

UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple notifies French users after commercial spyware threats surge

France’s national cybersecurity agency, CERT-FR, has confirmed that Apple issued another set of threat notifications on 3 September 2025. The alerts inform certain users that devices linked to their iCloud accounts may have been targeted by spyware.

These latest alerts mark this year’s fourth campaign, following earlier waves in March, April and June. Targeted individuals include journalists, activists, politicians, lawyers and senior officials.

CERT-FR says the attacks are highly sophisticated and involve mercenary spyware tools. Many intrusions appear to exploit zero-day or zero-click vulnerabilities, meaning devices can be compromised without any interaction from the victim.

Apple advises victims to preserve threat notifications, avoid altering device settings that could obscure forensic evidence, and contact authorities and cybersecurity specialists. Users are encouraged to enable features like Lockdown Mode and update devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB adopts guidelines on the interplay between DSA and GDPR

The European Data Protection Board (EDPB) has adopted its first guidelines on how the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR) work together. The aim is to clarify how the GDPR should be applied in the context of the DSA.

Presented during the EDPB’s September plenary, the guidelines ensure consistent interpretation where the DSA involves personal data processing by online intermediaries such as search engines and platforms. While DSA enforcement rests with the competent authorities, the EDPB’s input supports harmonised application across the EU’s evolving digital regulatory framework, including:

  • Notice-and-action systems that help individuals or entities report illegal content,
  • Recommender systems used by online platforms to automatically present content to users in a particular relative order or prominence,
  • Provisions to ensure minors’ privacy, safety and security, including the prohibition of profiling-based advertising presented to minors based on their data,
  • Transparency of advertising by online platforms, and
  • Prohibition of profiling-based advertising using special categories of data.

Following these initial guidelines on the interplay between the DSA and the GDPR, the EDPB is now working with the European Commission on joint guidelines covering the interplay between the Digital Markets Act (DMA) and the GDPR, as well as between the upcoming AI Act and EU data protection law. The aim is to ensure consistency and coherent safeguards across the evolving regulatory landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France pushes for nighttime social media curfews for teens

French lawmakers are calling for stricter regulations on teen social media use, including mandatory nighttime curfews, following a parliamentary report examining TikTok’s psychological impact on minors.

The 324-page report, published Thursday by a National Assembly Inquiry Commission, proposes that social media accounts for 15- to 18-year-olds be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues.
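As a simple illustration of the proposed curfew window, the sketch below checks whether a given moment falls between 10 p.m. and 8 a.m., handling the wrap past midnight; the function and its parameters are hypothetical, not any platform’s actual mechanism.

```python
# Minimal sketch of the proposed curfew check: accounts of 15- to 18-year-olds
# would be unavailable between 22:00 and 08:00 local time.
from datetime import datetime, time

CURFEW_START = time(22, 0)   # 10 p.m.
CURFEW_END = time(8, 0)      # 8 a.m.

def is_curfew(now: datetime, age: int) -> bool:
    """True if a 15- to 18-year-old's account should be disabled at this moment."""
    if not 15 <= age <= 18:
        return False
    t = now.time()
    # The window wraps past midnight, so it is 'after start OR before end'.
    return t >= CURFEW_START or t < CURFEW_END

print(is_curfew(datetime(2025, 9, 12, 23, 30), age=16))  # True
print(is_curfew(datetime(2025, 9, 12, 12, 0), age=16))   # False
```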

The report contains 43 recommendations, including greater funding for youth mental health services, awareness campaigns in schools, and a national ban on social media access for those under 15. Platforms with algorithmic recommendation systems, like TikTok, are specifically targeted.

Arthur Delaporte, the lead rapporteur and a socialist MP, also announced plans to refer TikTok to the Paris Public Prosecutor, accusing the platform of knowingly exposing minors to harmful content.

The report follows a December 2024 lawsuit filed by seven families who claim TikTok’s content contributed to their children’s suicides.

TikTok rejected the accusations, calling the report “misleading” and highlighting its safety features for minors.

The report urges France not to wait for EU-level legislation and instead to lead on national regulation. President Emmanuel Macron previously demanded an EU-wide ban on social media for under-15s.

However, the European Commission has said cultural differences make such a bloc-wide approach unfeasible.

Looking ahead, the report supports stronger obligations in the upcoming Digital Fairness Act, such as giving users greater control over content feeds and limiting algorithmic manipulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU enforces tougher cybersecurity rules under NIS2

The European Union’s NIS2 directive has officially come into force, imposing stricter cybersecurity duties on thousands of organisations.

Adopted in 2022 and transposed into national law by late 2024, the rules extend beyond critical infrastructure to cover more industries. Energy, healthcare, transport, ICT, and even waste management firms now face mandatory compliance.

Measures include multifactor authentication, encryption, backup systems, and stronger supply chain security. Senior executives are held directly responsible for failures, with penalties ranging from heavy fines to operational restrictions.

Companies must also report major incidents promptly to national authorities. Unlike ISO certifications, NIS2 requires organisations to prove compliance through internal processes or independent audits, depending on national enforcement.

Analysts warn that firms still reliant on legacy systems face a difficult transition. Yet experts agree the directive signals a decisive shift: cybersecurity is now a legal duty, not simply best practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine urges ethical use of AI in education

AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.

Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.

He warned, however, that AI should not replace the educational process and that safeguards must be found to prevent misuse.

Kudriavets also said students in Ukraine should understand the reasons for using AI, adding that it should be used to gain knowledge rather than simply to obtain grades.

The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI dubbing to millions of creators

Real-time translation is becoming a standard feature across consumer tech, with Samsung, Google, and Apple all introducing new tools. Apple’s recently announced Live Translation on AirPods demonstrates the utility of such features, particularly for travellers.

YouTube has joined the trend, expanding its multi-language audio feature to millions of creators worldwide. The tool enables creators to add dubbed audio tracks in multiple languages, powered by Google’s Gemini AI, replicating tone and emotion.

The feature was first tested with creators like MrBeast, Mark Rober, and Jamie Oliver. YouTube reports that Jamie Oliver’s channel saw its views triple, while over 25% of the watch time came from non-primary languages.

Mark Rober’s channel now supports more than 30 languages per video, helping creators reach audiences far beyond their native markets. YouTube states that this expansion should make content more accessible to global viewers and increase overall engagement.

Subtitles will still be vital for people with hearing difficulties, but AI-powered dubbing could reduce reliance on them for language translation. For creators, it marks a significant step towards making content truly global.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!