The WTO launched the 2025 World Trade Report, titled ‘Making trade and AI work together to benefit all’. The report argues that AI could boost global trade by up to 37% and GDP by 12–13% by 2040, particularly through digitally deliverable services.
It notes that AI can lower trade costs, improve supply-chain efficiency, and create opportunities for small firms and developing countries. Still, it warns that without deliberate action, AI could deepen global inequalities and widen the gap between advanced and developing economies.
The report underscores the need for investment in digital infrastructure, energy, skills, and enabling policies, highlighting the importance of IP protection, competition frameworks, and government support.
A newly developed indicator, the WTO AI Trade Policy Openness Index (AI-TPOI), reveals significant variation in AI-related trade policies across and within income groups.
It assesses three policy areas relevant to AI diffusion: barriers to services trade, restrictions on trade in AI-enabling goods, and limitations on cross-border data flows.
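The report does not spell out the index’s formula here; as a minimal, purely illustrative sketch (all field names, weights, and scores below are hypothetical), a composite openness score for an economy could combine normalised sub-scores for the three policy areas:

```python
from dataclasses import dataclass


@dataclass
class EconomyPolicyScores:
    """Hypothetical per-economy sub-scores, each normalised to 0-1 (1 = fully open)."""
    services_trade: float      # openness of trade in services
    ai_enabling_goods: float   # openness of trade in AI-enabling goods
    data_flows: float          # openness of cross-border data flows


def openness_index(scores: EconomyPolicyScores,
                   weights: tuple[float, float, float] = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Illustrative composite: weighted average of the three sub-scores."""
    parts = (scores.services_trade, scores.ai_enabling_goods, scores.data_flows)
    return sum(w * s for w, s in zip(weights, parts))


# Example: an economy that is relatively open on goods but restrictive on data flows.
print(round(openness_index(EconomyPolicyScores(0.7, 0.9, 0.3)), 2))
```

Equal weights are used here only for illustration; the actual AI-TPOI methodology may weight and normalise the policy areas differently.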
Stronger multilateral cooperation and targeted capacity-building were presented as essential to ensure AI-enabled trade supports inclusive, sustainable prosperity rather than reinforcing existing divides.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Broadcasters and advertisers seek clarity before the EU’s political advertising rules become fully applicable on 10 October. The European Commission has promised further guidance, but details on what qualifies as political advertising remain vague.
Meta and Google will stop serving political, election, and social issue ads in the EU when the rules take effect, citing operational challenges and legal uncertainty. The regulation, aimed at curbing disinformation and foreign interference, requires ads to carry labels identifying their sponsors, the payments behind them, and the targeting methods used.
Publishers fear they lack the technical means to comply or block non-compliant programmatic ads, risking legal exposure. They call for clear sponsor identification procedures, standardised declaration formats, and robust verification processes to ensure authenticity.
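As a purely illustrative sketch of the kind of standardised declaration publishers are calling for (this is not a format defined by the regulation, and every field name below is hypothetical), a sponsor declaration attached to each ad could carry the disclosures the rules require:

```python
from dataclasses import dataclass, field


@dataclass
class PoliticalAdDeclaration:
    """Hypothetical record of the disclosures a political ad would need to carry."""
    sponsor_name: str                 # who paid for or commissioned the ad
    sponsor_country: str
    amount_paid_eur: float            # payment behind the ad
    targeting_criteria: list[str] = field(default_factory=list)  # how audiences were selected
    is_political: bool = True         # flag so ad servers can label or block the creative


ad = PoliticalAdDeclaration(
    sponsor_name="Example Campaign Group",
    sponsor_country="NL",
    amount_paid_eur=12_500.0,
    targeting_criteria=["age 25-45", "region: Utrecht"],
)
print(ad)
```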
Advertisers warn that the rules’ broad definition of political actors may be hard to implement. At the same time, broadcasters fear issue-based campaigns – such as environmental awareness drives – could unintentionally fall under the scope of political advertising.
The Dutch parliamentary election on 29 October will be the first to take place under the fully applicable rules, making clarity from Brussels urgent for media and advertisers across the bloc.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Scientists from Australian universities and The George Institute for Global Health have developed an AI tool that analyses mammograms and a woman’s age to predict her risk of heart-related hospitalisation or death within 10 years.
Published in Heart on 17 September, the study highlights the lack of routine heart disease screening for women, despite cardiovascular conditions causing 35% of female deaths. The tool delivers a two-in-one health check by integrating heart risk prediction into breast cancer screening.
The model was trained on data from over 49,000 women and performs as accurately as traditional models that require blood pressure and cholesterol data. Researchers emphasise its low-resource nature, making it viable for broad deployment in rural or underserved areas.
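The article does not describe the model’s internals; as a minimal sketch of the general approach (predicting 10-year risk from age plus mammogram-derived features), a simple classifier baseline might look like the following, with all features and data synthetic and purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: age plus two hypothetical mammogram-derived measures;
# the label marks a heart-related hospitalisation or death within 10 years.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(40, 75, 500),   # age in years
    rng.uniform(0, 10, 500),    # illustrative image-derived score
    rng.uniform(0, 1, 500),     # illustrative density-like measure
])
y = (0.05 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 1, 500) > 5.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted 10-year risk for a hypothetical 62-year-old
print(model.predict_proba([[62, 4.0, 0.5]])[0, 1])
```

The published tool may use a very different architecture and feature set; the sketch only shows why a model needing just age and routine mammogram images is considered low-resource compared with models requiring blood pressure and cholesterol measurements.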
Study co-author Dr Jennifer Barraclough said mobile mammography services could adopt the tool to deliver breast cancer and heart health screenings in one visit. Such integration could help overcome healthcare access barriers in remote regions.
Before a broader rollout, the researchers plan to validate the tool in more diverse populations and study practical challenges, such as technical requirements and regulatory approvals.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Advertising is heading for a split future. By 2030, brands will run hyper-personalised AI campaigns or embrace raw human storytelling. Everything in between will vanish.
AI-driven advertising will go far beyond text-to-image gimmicks: adaptive systems will combine social trends, search habits, and first-party data to create millions of real-time ad variations.
The opposite approach will lean into imperfection: unpolished TikToks, founder-shot iPhone videos, and content that feels authentic and alive. Audiences reward authenticity over carefully scripted, generic campaigns.
Mid-tier creative work, polished but forgettable, will be the first to fade away. AI can replicate it instantly, and audiences will scroll past it without noticing.
Marketers must now pick a side: feed AI with data and scale personalisation, or double down on community-driven, imperfect storytelling. The middle won’t survive.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Ghana has launched the National Privacy Awareness Campaign, a year-long initiative to strengthen citizens’ privacy rights and build public trust in the country’s expanding digital ecosystem.
Unveiled by Deputy Minister Mohammed Adams Sukparu, the campaign emphasises that data protection is not just a legal requirement but essential to innovation, digital participation, and Ghana’s goal of becoming Africa’s AI hub.
The campaign will run from September 2025 to September 2026 across all 16 regions, using English and key local languages to promote widespread awareness.
The initiative includes the inauguration of the Ghana Association of Privacy Professionals (GAPP) and recognition of new Certified Data Protection Officers, many trained through the One Million Coders Programme.
Officials stressed that effective data governance requires collaboration among government, the private sector, civil society, and the media. The Data Protection Commission reaffirmed its role in protecting privacy while noting ongoing challenges such as limited awareness and skills gaps.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A newly published guide by People Powered and UNDP aims to connect people in their communities through inclusive, locally relevant digital participation platforms. Designed with local governments, civic groups, and organisations in mind, it highlights digital platforms that enable inclusive, action-oriented civic engagement.
According to the UNDP, ‘the guide covers the latest trends, including the integration of AI features, and addresses challenges such as digital inclusion, data privacy, accessibility, and sustainability.’
The guide focuses on actively maintained, publicly available platforms, typically offered through cloud-based software (SaaS) models, and prioritises flexible, multi-purpose tools over single-use options. While recognising the dominance of platforms from wealthier countries, it makes a deliberate effort to feature case studies and tools from the Global Majority.
Political advocacy platforms, internal government tools, and issue-reporting apps are excluded to keep the focus on technologies that drive meaningful public participation. Lastly, the guide emphasises the importance of local context and community empowerment, encouraging a shift from passive input to meaningful public influence in governance.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Abu Dhabi hosted a Weather Summit that explored how AI could transform forecasting and support operations, such as cloud seeding. Experts emphasised that AI enhances analysis but must complement, rather than replace, human judgement.
Discussions focused on Earth-system forecasting using satellite datasets, IoT devices, and geospatial systems. Quality, interoperability, and equitable access to weather services were highlighted as pressing priorities.
Speakers raised questions about reliability, transparency, and how public and private sector incentives shape AI. Collaboration across sectors was seen as crucial to strengthening trust and global cooperation in meteorology.
WMO President Dr Abdulla Al Mandous said forecasting has evolved from traditional observation to supercomputing and AI. He argued that integrating models with AI could deliver more precise local forecasts for agriculture, aviation, and disaster management.
The summit brought together leaders from UN bodies, research institutions, and tech firms, including Google, Microsoft, and NVIDIA. Attendees highlighted the need to bridge data gaps, particularly in developing regions, to confront rising climate challenges.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.
Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.
The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.
Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.
Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore different uses of AI, but that did little to ease unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech conglomerates such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly $7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but recovering from a breach also takes the better part of a year.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately and without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going as far as dismissing their human partners, professing their love to the chatbot, and even proposing to it. The bond between man and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between $100 and $250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) healthcare will look like. As things stand, in such a delicate field, AI lacks the key component that makes a therapist effective in their job: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.
xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.
Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.
Critics say that system prompts, such as short instructions like ‘be politically incorrect’, make it easy to adjust outputs, but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI reversed the change.
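To illustrate the mechanism rather than xAI’s actual configuration, here is how a short system prompt typically steers a chat-style model through an OpenAI-compatible API; the API key, model name, and instruction text are placeholders, not Grok’s real settings:

```python
from openai import OpenAI

# Hypothetical client; the key and model name are placeholders for demonstration only.
client = OpenAI(api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # A one-line system instruction like this is enough to shift tone and framing,
        # which is why such prompts are easy to tweak but also easy to get wrong.
        {"role": "system", "content": "Answer concisely and cite sources where possible."},
        {"role": "user", "content": "Summarise the arguments for and against carbon taxes."},
    ],
)
print(response.choices[0].message.content)
```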
The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!