Campaigning in the age of generative AI

Generative AI is rapidly altering the political campaign landscape, argues the ORF article, which outlines how election teams worldwide are adopting AI tools for persuasion, outreach and content creation.

Campaigns can now generate customised messages for different voter groups, produce multilingual content at scale, and automate much of the traditional grunt work of campaigning.

On the one hand, proponents say the technology makes campaigning more efficient and accessible, particularly in multilingual or resource-constrained settings. On the other, the ease and speed with which content can be generated lower the barrier for misuse: AI-driven deepfakes, synthetic voices and disinformation campaigns can be deployed to mislead voters or distort public discourse.

Recent research supports these worries. For example, large-scale studies published in journals such as Science and Nature have demonstrated that AI chatbots can influence voter opinions, swaying a non-trivial share of undecided voters toward a target candidate simply by presenting persuasive content.

Meanwhile, independent analyses show that during the 2024 US election campaign, a noticeable fraction of content on social media was AI-generated, sometimes used to spread misleading narratives or exaggerate support for certain candidates.

For democracy and governance, the shift poses thorny challenges. AI-driven campaigns risk eroding public trust, exacerbating polarisation and undermining electoral legitimacy. Regulators and policymakers now face pressure to devise new safeguards, such as transparency requirements around AI usage in political advertising, stronger fact-checking, and clearer accountability for misuse.

The ORF article argues these debates should start now, before AI becomes so entrenched that rollback is impossible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI stroke-imaging tool halves time to treatment

A new AI-powered tool rolled out across England is helping clinicians diagnose strokes much sooner, significantly speeding up treatment decisions and improving patient outcomes. According to a study published in The Lancet Digital Health, roughly 15,000 patients benefited directly from AI-assisted scan reviews.

The tool, deployed at over 70 hospitals, analyses brain scans in minutes to rapidly identify clots, supporting doctors in deciding whether a patient needs urgent procedures such as a thrombectomy. Sites using the AI saw thrombectomy rates double (from 2.3% to 4.6%), compared with more modest increases at hospitals not using the technology.

Time is critical in stroke treatment: each 20-minute delay in thrombectomy reduces a patient’s chance of full recovery by around 1 per cent. The AI-driven system also helped cut the average ‘door-in to door-out’ time at primary stroke centres by 64 minutes, making it far more likely that patients reach a specialist centre in time for treatment.
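As a rough illustration (a back-of-envelope calculation of ours, not a figure reported in the study, and assuming the 64-minute transfer saving translates directly into earlier thrombectomy and that the benefit scales linearly):

# Assumed linear extrapolation from the two figures quoted above.
minutes_saved = 64          # reduction in average 'door-in to door-out' time
loss_per_20_min = 1.0       # percentage points of full-recovery chance lost per 20-minute delay

recovery_gain = (minutes_saved / 20) * loss_per_20_min
print(f"Implied gain in chance of full recovery: ~{recovery_gain:.1f} percentage points")
# prints: Implied gain in chance of full recovery: ~3.2 percentage points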

Health-service leaders say the findings provide real-world evidence that AI imaging can save lives and reduce disability after stroke. As a result, the technology is now part of a wider national rollout across every regularly admitting stroke service in England.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japanese high-schooler suspected of hacking net-cafe chain using AI

Authorities in Tokyo have issued an arrest warrant for a 17-year-old boy from Osaka on suspicion of orchestrating a large-scale cyberattack using artificial intelligence. The alleged target was the operator of the Kaikatsu Club internet-café chain and a related fitness-gym business; the attack may have exposed the personal data of about 7.3 million customers.

According to investigators, the suspect used a computer programme, reportedly built with help from an AI chatbot, to send unauthorised commands around 7.24 million times to the company’s servers in order to extract membership information. The teenager was previously arrested in November in connection with a separate fraud case involving credit-card misuse.

Police have charged him under Japan's law on unauthorised computer access and with obstructing business, though so far no evidence has emerged of misuse (for example, resale or public leaks) of the stolen data.

In his statement to investigators, the suspect reportedly said he carried out the hack simply because he found it fun to probe system vulnerabilities.

This case is the latest in a growing pattern of so-called AI-enabled cyber crimes in Japan, from fraudulent subscription schemes to ransomware generation. Experts warn that generative AI is lowering the barrier to entry for complex attacks, enabling individuals with limited technical training to carry out large-scale hacking or fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google boosts Nigeria’s AI development

The US tech giant Google has announced a $2.1 million Google.org commitment to support Nigeria’s AI-powered future, aiming to strengthen local talent and improve digital safety nationwide.

The initiative supports Nigeria’s National AI Strategy and its ambition to create one million digital jobs, recognising that AI could add $15 billion to the country’s economy by 2030.

The investment focuses on developing advanced AI skills among students and developers rather than limiting support to short-term training schemes.

Google will fund programmes led by expert partners such as FATE Foundation, the African Institute for Mathematical Sciences, and the African Technology Forum.

Their work will introduce advanced AI curricula into universities and provide developers with structured, practical routes from training to building real-world products.

The commitment also expands digital safety initiatives so communities can participate securely in the digital economy.

Junior Achievement Africa will scale Google’s ‘Be Internet Awesome’ curriculum to help families understand safe online behaviour, while the CyberSafe Foundation will deliver cybersecurity training and technical assistance to public institutions, strengthening national digital resilience.

Google aims to open more pathways like those taken by Nigerian learners who have used digital skills to secure full-time careers rather than remaining excluded from the digital economy.

By combining advanced AI training with improved digital safety, the company intends to support inclusive growth and build long-term capacity across Nigeria.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SAP elevates customer support with proactive AI systems

AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that predicts issues, prevents failures and keeps critical systems running smoothly instead of relying on queues and manual intervention.

Major sales events, such as Cyber Week and Singles Day, demonstrated the impact of this shift, with uninterrupted service and significant growth in transaction volumes and order numbers.

Self-service now resolves most issues before they reach an engineer, as structured knowledge supports AI agents that respond instantly with a confidence level that matches human performance.

Tools such as the Auto Response Agent and Incident Solution Matching enable customers to retrieve solutions without having to search through lengthy documentation.

SAP also supports organisations scaling AI by offering support systems tailored to early deployment.

Engineers have benefited from AI as much as customers. Routine tasks are handled automatically, allowing experts to focus on problems that demand insight instead of administration.

Language optimisation, routing suggestions, and automatic error categorisation support faster and more accurate resolutions. SAP validates every AI tool internally before release, which it views as a safeguard for responsible adoption.

The company maintains that AI will augment staff rather than replace them. Creative and analytical work becomes increasingly important as automation handles repetitive tasks, and new roles emerge in areas such as AI training and data stewardship.

SAP argues that progress relies on a balanced relationship between human judgement and machine intelligence, strengthened by partnerships that turn enterprise data into measurable outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework is designed to ensure AI systems are inclusive, fair and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and gives organisations a practical tool for implementing inclusive AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central to decision-making as AI and automation expand across society. Collaborative intelligence, which combines AI expertise, domain specialists and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Legal sector urged to plan for cultural change around AI

A digital agency has released new guidance to help legal firms prepare for wider AI adoption. The report urges practitioners to assess cultural readiness before committing to major technology investment.

Sherwen Studios collected views from lawyers who raised ethical worries and practical concerns. Their experiences shaped recommendations intended to ensure AI serves real operational needs across the sector.

The agency argues that firms must invest in oversight, governance and staff capability. Leaders are encouraged to anticipate regulatory change and build multidisciplinary teams that blend legal and technical expertise.

Industry analysts expect AI to reshape client care and compliance frameworks over the coming years. Firms prepared for structural shifts are likely to benefit most from long-term transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FCA begins live AI testing with UK financial firms

The UK’s Financial Conduct Authority has started a live testing programme for AI with major financial firms. The initiative aims to explore AI’s benefits and risks in retail financial services while ensuring safe and responsible deployment.

Participating firms, including NatWest, Monzo, Santander and Scottish Widows, receive guidance from FCA regulators and technical partner Advai. Use cases being trialled range from debt resolution and financial advice to customer engagement and smarter spending tools.

Insights from the testing will help the FCA shape future regulations and governance frameworks for AI in financial markets. The programme complements the regulator’s Supercharged Sandbox, with a second cohort of firms due to begin testing in April 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sega cautiously adopts AI in game development

Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese studio aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!