Tourism boards across Europe embrace AI but face gaps in strategy and skills

A new study by the European Travel Commission shows that national tourism organisations (NTOs) are experimenting with AI but are facing gaps in strategy and skills.

Marketing teams are leading the way, applying AI in content generation and workflow streamlining, whereas research departments primarily view the tools as exploratory. Despite uneven readiness, most staff show enthusiasm, with little resistance reported.

The survey highlights challenges, including limited budgets, sparse training, and the absence of a clear roadmap. Early adopters report tangible productivity gains, but most NTOs are still running small pilots rather than embedding AI across operations.

Recommendations include ring-fencing time for structured experiments, offering role-specific upskilling, and scaling budgets in line with results. The report also urges the creation of shared learning spaces and the provision of practical support to help organisations move from testing to sustained adoption.

ETC President Miguel Sanz said AI offers clear opportunities for tourism boards, but uneven capacity means shared tools and targeted investment will be essential to ensure innovation benefits all members.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and AR reshape Starbucks’ back-of-house systems

Starbucks will deploy an AI-powered inventory system across all North American stores. Built with NomadGo, it automatically scans shelves using AR and computer vision to flag low stock.

Counts that once took an hour now take about 15 minutes, enabling up to eight counts weekly. The system frees staff to focus on service while providing real-time data for more intelligent supply chain decisions.
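Neither Starbucks nor NomadGo has published implementation details, but the decision rule behind a vision-based stock alert is simple to sketch. The following is a minimal illustration, with invented item names, par levels, and threshold; it assumes only that each scan yields a per-item count.

```python
# Hypothetical sketch of a low-stock flag from a computer-vision shelf count.
# Item names, par levels, and the 25% threshold are all invented for illustration.

PAR_LEVELS = {"oat_milk_cartons": 12, "espresso_beans_kg": 8, "cold_cups_16oz": 200}

def flag_low_stock(scan_counts: dict[str, int], threshold: float = 0.25) -> list[str]:
    """Return items whose scanned count has fallen below `threshold` of par."""
    return [
        item
        for item, par in PAR_LEVELS.items()
        if scan_counts.get(item, 0) < par * threshold
    ]

# Counts as they might arrive from an AR/vision scan of the shelves.
print(flag_low_stock({"oat_milk_cartons": 2, "espresso_beans_kg": 7, "cold_cups_16oz": 30}))
# -> ['oat_milk_cartons', 'cold_cups_16oz']
```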

The rollout follows other digital upgrades, including a Shift Marketplace for scheduling, Green Dot Assist for AI support, and a new point-of-sale system. Together, these tools show Starbucks’ growing reliance on AI.

Competitors like McDonald’s and Chick-fil-A are also turning to AI for back-of-house operations. From order-accuracy scales to computer-vision food checks, fast-food chains are betting heavily on automation to boost efficiency.

For Starbucks, success will be judged by fewer shortages, consistent customer experiences, and staff reinvested in service. AI-driven accuracy could become a defining advantage in an industry built on trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered home cinema and smart appliances unveiled by Hisense at IFA 2025

Hisense will debut AI-powered innovations at IFA 2025 under the theme ‘AI Your Life,’ showcasing entertainment, smart homes, and climate-friendly technologies. The company aims to make AI seamless and personal.

Entertainment highlights include the 116-inch RGB-MiniLED UX TV with 8,000 nits brightness, plus new laser projectors offering IMAX-level clarity and portability for home cinema and gaming.

Appliances get smarter with the PureFlat refrigerator, featuring a 21-inch screen for cooking, streaming, and AI art. ConnectLife agents will optimise chores and energy use in daily routines.

The U8 S Pro Air Conditioner brings presence detection, AI voice controls, and air purification, while Hisense expands into smart buildings, energy systems, and automotive climate solutions.

Combining advanced display technologies with next-gen appliances, Hisense says its innovations will empower people to live more freely and confidently across global markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Latvia launches open AI framework for Europe

Language technology company Tilde has released an open AI framework designed for all European languages.

The model, named ‘TildeOpen’, was developed with the support of the European Commission and trained on the LUMI supercomputer in Finland.

According to Tilde’s head Artūrs Vasiļevskis, the project addresses a key gap in US-based AI systems, which often underperform for smaller European languages such as Latvian. By focusing on European linguistic diversity, the framework aims to provide better accessibility across the continent.

Vasiļevskis also suggested that Latvia has the potential to become an exporter of AI solutions. However, he acknowledged that development is at an early stage and that current applications remain relatively simple. The framework and user guidelines are freely accessible online.
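Since the framework is freely accessible, trying it should look like loading any other open model. The sketch below assumes TildeOpen is distributed in the standard Hugging Face format; the repository identifier is a guess, not something the announcement confirms, so check Tilde’s published guidelines for the real name.

```python
# Sketch: loading an openly released model with Hugging Face transformers.
# "TildeAI/TildeOpen-30b" is an assumed repository id, used for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TildeAI/TildeOpen-30b"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# A Latvian prompt, since smaller European languages are the model's focus.
inputs = tokenizer("Rīga ir Latvijas galvaspilsēta.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```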

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China and India adopt contrasting approaches to AI governance

As AI becomes central to business strategy, questions of corporate governance and regulation are gaining prominence. A study by Akshaya Kamalnath and Lin Lin examines how China and India are addressing these issues through law, policy, and corporate practice.

The paper focuses on three questions: how regulations are shaping AI and data protection in corporate governance, how companies are embedding technological expertise into governance structures, and how institutional differences influence each country’s response.

Findings suggest a degree of convergence in governance practices. Both countries have seen companies create chief technology officer roles, establish committees to manage technological risks, and disclose information about their use of AI.

In China, these measures are largely guided by central and provincial authorities, while in India, they reflect market-driven demand.

China’s approach is characterised by a state-led model that combines laws, regulations, and soft-law tools such as guidelines and strategic plans. The system is designed to encourage innovation while addressing risks in an adaptive manner.

India, by contrast, has fewer binding regulations and relies on a more flexible, principles-based model shaped by judicial interpretation and self-regulation.

Broader themes also emerge. In China, state-owned enterprises are using AI to support environmental, social, and governance (ESG) goals, while India has framed its AI strategy under the principle of ‘AI for All’ with a focus on the role of public sector organisations.

Together, these approaches underline how national traditions and developmental priorities are shaping AI governance in two of the world’s largest economies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

RBA develops internal AI chatbot

The Reserve Bank of Australia has developed and is testing an in-house, AI-powered chatbot to assist its staff with research and analysis.

Named RBAPubChat, the tool is trained on the central bank’s knowledge base of nearly 20,000 internal and external analytical documents spanning four decades. It aims to help employees ask policy-relevant questions and get useful summaries of existing information.
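The RBA has not published RBAPubChat’s architecture, but the standard pattern for question-answering over a document collection is retrieval followed by summarisation: rank the documents most relevant to the question, then hand the top passages to a language model. A toy version of the retrieval step, with TF-IDF standing in for whatever the bank actually uses:

```python
# Toy sketch of the retrieval step in a document Q&A tool (not the RBA's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A three-document stand-in for a knowledge base of ~20,000 analytical notes.
docs = [
    "Analysis of wage growth and labour market tightness.",
    "Notes on household savings behaviour during the pandemic.",
    "Review of inflation expectations survey methodology.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

def top_documents(question: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

# The retrieved passages would then be summarised by a language model.
print(top_documents("What do we know about wage growth?"))
```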

Speaking at the Shann Memorial Lecture in Perth, Governor Michele Bullock said that the AI is not being used to formulate or set monetary policy. Instead, it is intended to improve efficiency and amplify the impact of staff efforts.

A separate tool using natural language processing has also been developed to analyse over 22,000 conversations from the bank’s business liaison programme. The Reserve Bank of Australia has noted that this tool has already shown promise, helping to forecast wage growth more accurately than traditional models.
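The bank has not described how the liaison tool works internally, but the underlying idea of nowcasting an economic series from text-derived signals can be illustrated in a few lines. Every number below is invented; the point is only the shape of the approach.

```python
# Hypothetical sketch: regressing wage growth on features extracted from
# liaison conversations. All scores and figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per quarter: [share of liaison notes mentioning pay rises,
# average tone of wage-related sentences] -- both hypothetical features.
text_features = np.array([
    [0.20, -0.1],
    [0.35,  0.2],
    [0.50,  0.4],
    [0.60,  0.5],
])
wage_growth = np.array([2.1, 2.6, 3.2, 3.6])  # % year-on-year, invented

model = LinearRegression().fit(text_features, wage_growth)
print(model.predict(np.array([[0.55, 0.3]])))  # nowcast for the latest quarter
```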

The RBA has also acquired its first enterprise-grade graphics processing unit to support developing and running advanced AI-driven tools.

The bank’s internal coding community is now a well-established part of its operations, with one in four employees using coding as a core part of their daily work. Governor Bullock stressed that the bank’s approach to technology is one of “deliberate, well-managed evolution” rather than disruption.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a way to explore different uses of AI, but the explanation did little to calm unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The lesson is clear: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, comparatively little has been done to protect sensitive data, and AI systems remain prone to breaches, particularly in the healthcare sector.

According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to identify and contain a breach and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but the damage also takes the better part of a year to contain.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. Round-the-clock availability, combined with carefully tuned language, can foster a bond that the system can neither comprehend nor sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing to it. This bond between human and machine places users on a dangerous seesaw, teetering between harmless curiosity and borderline delusion.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.

Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should take its advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to mental health advice, these qualities create a dangerously deceptive mirage: a makeshift therapist who will comply with the user’s every need, cater to their biases, and shape their worldview from the ground up – whatever it takes to keep them engaged and typing away.

While AI has proven useful in fields such as marketing and IT, psychotherapy remains an insurmountable hurdle for even today’s most advanced LLMs. It is difficult to predict what the future of AI in mental health care will look like, but as things stand, in such a delicate field, AI lacks the key component that makes a therapist effective: empathy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US government and New Mexico team up on quantum computing

A new partnership between the federal government, the state of New Mexico, and local businesses aims to establish the state as a leader in quantum computing.

The initiative will see the Defense Advanced Research Projects Agency (DARPA) working alongside local researchers and companies to develop and commercialise next-generation technology. Up to $120 million could be invested in the project over four years.

New Mexico’s selection for the project is due to its long history of innovation, its two national defence labs, and a high concentration of leading scientists in the field.

The goal is to harness the ‘brainpower’ of the state to build computers that can solve currently impossible problems, such as developing materials that resist corrosion or finding cures for diseases. One of the project’s aims is to test the technology and differentiate between genuine breakthroughs and mere hype.

Roadrunner Venture Studios will be assisting in developing new quantum computing businesses within the state. A successful venture would bring economic gains and jobs and position New Mexico to lead the nation in solving some of its most pressing challenges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Wearable brain-computer interface pairs EEG with AI for robotic control

UCLA engineers have developed a wearable brain-computer interface that utilises AI to interpret intent, allowing for the control of robotic arms and computer cursors.

The non-invasive system uses electroencephalography (EEG) to decode brain signals and combines them with an AI camera platform for real-time assistance. The results, published in ‘Nature Machine Intelligence’, demonstrate significant performance improvements over traditional BCIs.
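The paper’s actual decoding pipeline is more sophisticated, but non-invasive BCIs of this kind commonly reduce the EEG to band-power features and feed them to a classifier. A toy version on synthetic signals, purely to show the shape of the pipeline:

```python
# Toy EEG decoding sketch: band-power features plus a linear classifier.
# Synthetic signals stand in for real EEG; this is not the UCLA pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs = 250  # sampling rate, Hz

def make_trial(intent: bool) -> np.ndarray:
    """Synthetic 1-second trial: one intent carries extra 10 Hz (mu-band) power."""
    t = np.arange(fs) / fs
    return rng.normal(size=fs) + (np.sin(2 * np.pi * 10 * t) if intent else 0.0)

def mu_band_power(signal: np.ndarray) -> float:
    """Average spectral power in the 8-12 Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    return psd[(freqs >= 8) & (freqs <= 12)].mean()

labels = np.array([i % 2 == 0 for i in range(100)])
features = np.array([[mu_band_power(make_trial(y))] for y in labels])

clf = LogisticRegression().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```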

Participants tested the device on two tasks: moving a cursor across a computer screen and directing a robotic arm to reposition blocks. All completed the tasks faster with AI assistance, and a paralysed participant who was unable to finish without support succeeded in under seven minutes.

Researchers emphasise the importance of safety and accessibility. Unlike surgically implanted BCIs, which remain confined to limited clinical trials, the wearable device avoids neurosurgical risks while offering new independence for people with paralysis or ALS.

Future development will focus on making AI ‘co-pilots’ more adaptive, allowing robotic arms to move with greater precision, dexterity, and task awareness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quantum and supercomputing converge in IBM-AMD initiative

IBM has announced plans to develop next-generation computing architectures by integrating quantum computers with high-performance computing, a concept it calls quantum-centric supercomputing.

The company is working with AMD to build scalable, open-source platforms that combine IBM’s quantum expertise with AMD’s strength in HPC and AI accelerators. The aim is to move beyond the limits of traditional computing and explore solutions to problems that classical systems cannot address alone.

Quantum computing uses qubits governed by quantum mechanics, offering a far richer computational space than binary bits. In a hybrid model, quantum machines could simulate atoms and molecules, while supercomputers powered by CPUs, GPUs, and AI manage large-scale data analysis.

Arvind Krishna, IBM’s CEO, said the approach represents a new way of simulating the natural world. AMD’s Lisa Su described high-performance computing as foundational to tackling global challenges, noting the partnership could accelerate discovery and innovation.

An initial demonstration is planned for later this year, showing IBM quantum computers working with AMD technologies. Both companies say open-source ecosystems like Qiskit will be crucial to building new algorithms and advancing fault-tolerant quantum systems.
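Qiskit itself is already open source, so the qubit model the article refers to is easy to illustrate. A minimal two-qubit entangling circuit, the basic building block such hybrid workflows are composed from:

```python
# Minimal Qiskit example: prepare and inspect a two-qubit Bell state.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into an equal superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # ~{'00': 0.5, '11': 0.5}
```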

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!