Campaigning in the age of generative AI

Generative AI is rapidly altering the political campaign landscape, argues a new ORF article that outlines how election teams worldwide are adopting AI tools for persuasion, outreach and content creation.

Campaigns can now generate customised messages for different voter groups, produce multilingual content at scale, and automate much of the traditional grunt work of campaigning.

On the one hand, proponents say the technology makes campaigning more efficient and accessible, particularly in multilingual or resource-constrained settings. On the other, the ease and speed with which content can be generated lower the barrier for misuse: AI-driven deepfakes, synthetic voices and disinformation campaigns can be deployed to mislead voters or distort public discourse.

Recent research supports these worries. For example, a large-scale peer-reviewed study demonstrated that AI chatbots can influence voter opinions, swaying a non-trivial share of undecided voters toward a target candidate simply by presenting persuasive content.

Meanwhile, independent analyses show that during the 2024 US election campaign, a noticeable fraction of content on social media was AI-generated, sometimes used to spread misleading narratives or exaggerate support for certain candidates.

For democracy and governance, the shift poses thorny challenges. AI-driven campaigns risk eroding public trust, exacerbating polarisation and undermining electoral legitimacy. Regulators and policymakers now face pressure to devise new safeguards, such as transparency requirements around AI usage in political advertising, stronger fact-checking, and clearer accountability for misuse.

The ORF article argues these debates should start now, before AI becomes so entrenched that rollback is impossible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI stroke-imaging tool halves time to treatment

A new AI-powered tool rolled out across England is helping clinicians diagnose strokes much sooner, significantly speeding up treatment decisions and improving patient outcomes. According to a study published in The Lancet Digital Health, roughly 15,000 patients benefited directly from AI-assisted scan reviews.

The tool, deployed at over 70 hospitals, analyses brain scans in minutes to rapidly identify clots, supporting doctors in deciding whether a patient needs urgent procedures such as a thrombectomy. Sites using the AI saw thrombectomy rates double (from 2.3% to 4.6%), compared with more modest increases at hospitals not using the technology.

Time is critical in stroke treatment: each 20-minute delay in thrombectomy reduces a patient’s chance of full recovery by around 1 per cent. The AI-driven system also helped cut the average ‘door-in to door-out’ time at primary stroke centres by 64 minutes, making it far more likely that patients reach a specialist centre in time for treatment.

Health-service leaders say the findings provide real-world evidence that AI imaging can save lives and reduce disability after stroke. As a result, the technology is now part of a wider national rollout across every regularly admitting stroke service in England.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to users under 16, while allowing those users to download their data. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13- to 15-year-olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Resilience Act signals a major shift in EU product security

EU regulators are preparing to enforce the Cyber Resilience Act, setting core security requirements for digital products in the European market. The law spans software, hardware, and firmware, establishing shared expectations for secure development and maintenance.

The scope covers apps, embedded systems, and cloud-linked features. Risk classes run from default to critical, determining whether firms may self-assess or must undergo third-party checks. Any product placed on the EU market after December 2027 must comply with the regulation.

Obligations apply to manufacturers, importers, distributors, and developers. Duties include secure-by-design practices, documented risk analysis, disclosure procedures, and long-term support. Firms must notify ENISA within 24 hours of becoming aware of an actively exploited vulnerability and provide follow-up reports on a strict timeline.

Compliance requires technical files covering threat assessments, update plans, and software bills of materials. High-risk categories demand third-party evaluation, while lower-risk segments may rely on internal checks. Existing certifications help, but cannot replace CRA-specific conformity work.
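
The software bill of materials is the most concrete of these artefacts. As a rough illustration, not drawn from the CRA text itself, the short Python sketch below assembles a minimal CycloneDX-style SBOM for a hypothetical firmware product and writes it to JSON; the field names follow the public CycloneDX schema, while the product and component entries are invented for the example.

import json

# Minimal CycloneDX-style SBOM sketch; the product and components are hypothetical.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        "component": {
            "type": "application",
            "name": "example-device-firmware",  # hypothetical product
            "version": "2.4.1",
        }
    },
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13",
         "purl": "pkg:generic/openssl@3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1",
         "purl": "pkg:generic/zlib@1.3.1"},
    ],
}

# Write the inventory to disk so it can be included in the technical documentation.
with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)

In practice, SBOMs are usually generated automatically by build tooling rather than written by hand, but the structure above gives a sense of the component inventory the CRA's technical files are meant to contain.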

Non-compliance risks fines, market restrictions, and reputational damage. Organisations are urged to prepare early by classifying products, running gap assessments, building structured roadmaps, and aligning development cycles with CRA guidance. EU authorities plan to provide templates and support as firms transition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework aims to ensure AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and gives organisations a practical tool for implementing inclusive AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CJEU tightens duties for online marketplaces

EU judges have ruled that online marketplaces must verify advertisers’ identities before publishing personal data. The judgment arose from a Romanian case involving an abusive anonymous advertisement containing sensitive information.

The Court found that marketplace operators influence the purposes and means of processing and therefore act as joint controllers. They must identify sensitive data before publication and ensure consent or another lawful basis exists.

Judges also held that anonymous users cannot lawfully publish sensitive personal data without proving the data subject’s explicit agreement. Platforms must refuse publication when identity checks fail or when no valid GDPR ground applies.

Operators must introduce safeguards to prevent unlawful copying of sensitive content across other websites. The Court confirmed that exemptions under E-commerce rules cannot override GDPR accountability duties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central in decision-making as AI and automation expand across society. Collaborative intelligence, combining AI expertise, domain knowledge and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Legal sector urged to plan for cultural change around AI

A digital agency has released new guidance to help legal firms prepare for wider AI adoption. The report urges practitioners to assess cultural readiness before committing to major technology investment.

Sherwen Studios collected views from lawyers who raised ethical and practical concerns. Their experiences shaped recommendations intended to ensure AI serves real operational needs across the sector.

The agency argues that firms must invest in oversight, governance and staff capability. Leaders are encouraged to anticipate regulatory change and build multidisciplinary teams that blend legal and technical expertise.

Industry analysts expect AI to reshape client care and compliance frameworks over the coming years. Firms prepared for structural shifts are likely to benefit most from long-term transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AstraZeneca backs Pangaea’s AI platform to scale precision healthcare

Pangaea Data, a health-tech firm specialising in patient-intelligence platforms, announced a strategic, multi-year partnership with AstraZeneca to deploy multimodal artificial intelligence in clinical settings. The goal is to bring AI-driven, data-rich clinical decision-making to scale, improving how patients are identified, diagnosed, treated and connected to therapies or clinical trials.

The collaboration will see AstraZeneca sponsoring the configuration, validation and deployment of Pangaea’s enterprise-grade platform, which merges large-scale clinical, imaging, genomic, pathology and real-world data. It will also leverage generative and predictive AI capabilities from Microsoft and NVIDIA for model training and deployment.

Among the planned applications are supporting point-of-care treatment decisions and identifying patients who are undiagnosed, undertreated or misdiagnosed, across diseases ranging from chronic conditions to cancer.

Pangaea’s CEO said the partnership aims to efficiently connect patients to life-changing therapies and trials in a compliant, financially sustainable way. For AstraZeneca, the effort reflects a broader push to integrate AI-driven precision medicine across its R&D and healthcare delivery pipeline.

From a policy and health-governance standpoint, this alliance is important. It demonstrates how multimodal AI, combining different data types beyond standard medical records, is being viewed not just as a research tool, but as a potentially transformative element of clinical care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU opens antitrust probe into Meta’s WhatsApp AI rollout

Brussels has opened an antitrust inquiry into Meta over how AI features were added to WhatsApp, focusing on whether the updated access policies hinder market competition. Regulators say scrutiny is needed as integrated assistants become central to messaging platforms.

Meta AI has been built into WhatsApp across Europe since early 2025, prompting questions about whether external AI providers face unfair barriers. Meta rejects the accusations and argues that users can reach rival tools through other digital channels.

Italy launched a related proceeding in July and expanded it in November, examining claims that Meta curtailed access for competing chatbots. Authorities worry that dominance in messaging could influence the wider AI services market.

EU officials confirmed the case will proceed under standard antitrust rules rather than the Digital Markets Act. Investigators aim to understand how embedded assistants reshape competitive dynamics in services used by millions.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about concentrated power in fast-evolving AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!