3D-printed ion traps could accelerate quantum computer scaling

Quantum computers may soon grow more powerful through 3D printing, with researchers building miniaturised ion traps to improve scalability and performance.

Ion traps, which confine ions and control their quantum states, play a central role in ion-based qubits. Researchers at UC Berkeley created 3D-printed traps just a few hundred microns wide, which captured ions up to ten times more efficiently than conventional versions.

The new traps also reduced waiting times, allowing ions to be usable more quickly once the system is activated. Hartmut Häffner, who led the study, said the approach could enable scaling to far larger numbers of qubits while boosting speed.

3D printing offers flexibility not possible with chip-style manufacturing, allowing for more complex shapes and designs. Team members say they are already working on new iterations, with future versions expected to integrate optical components such as miniaturised lasers.

Experts argue that this method could address the challenges of low yield, high costs, and poor reproducibility in current ion-trap manufacturing, paving the way for scalable quantum computing and applications in other fields, including mass spectrometry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ITU warns global Internet access by 2030 could cost nearly USD 2.8 trillion

Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to a new blueprint from the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space, and Technology (CST) Commission. The report urges global cooperation to connect the one-third of humanity still offline.

The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion is needed for affordability measures, alongside $152 billion for digital skills programmes.
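
A quick back-of-the-envelope check shows how these components relate to the headline figure. This is a sketch only, treating the report’s rounded ‘up to’ and ‘nearly’ estimates as point values:

```python
# Rough sanity check of the ITU/CST cost breakdown (figures in USD trillions).
# These are the article's rounded, upper-bound estimates, not exact line items.
broadband = 1.7       # fibre, wireless, and satellite networks ("up to")
affordability = 1.0   # affordability measures ("nearly")
skills = 0.152        # digital skills programmes ($152 billion)

total = broadband + affordability + skills
print(f"Itemised components: ~USD {total:.2f} trillion")
# ~2.85, broadly consistent with the headline "up to USD 2.8 trillion" once
# the rounding in the "up to"/"nearly" estimates is taken into account.
```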

ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.

The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.

The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Advanced Pilot Assistance System enters year-long trial on CB Pacific

Mythos AI has installed its Advanced Pilot Assistance System (APAS) on the CB Pacific, a chemical tanker operated by CB Tankers under the Lomar group. The deployment marks the beginning of a year-long trial to introduce advanced bridge intelligence to the commercial shipping industry.

APAS uses a radar-first perception system that integrates with existing ship radars, processing multiple data streams to deliver prioritised alerts. By reducing its reliance on machine vision, the system aims to eliminate distractions, enhance decision-making, and improve navigation safety.
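
Mythos AI has not published APAS internals, so the following is a purely hypothetical sketch of what prioritising radar contacts into a short list of alerts might look like. The class, fields, and thresholds are all invented for illustration, using the standard maritime notions of closest point of approach (CPA) and time to CPA (TCPA):

```python
from dataclasses import dataclass

# Hypothetical illustration of a radar-first alert prioritiser. Mythos AI has
# not published APAS internals; names, fields, and thresholds are invented.
@dataclass
class RadarContact:
    target_id: str
    range_nm: float        # distance to target, nautical miles
    cpa_nm: float          # closest point of approach, nautical miles
    tcpa_min: float        # time to CPA, minutes

def prioritise(contacts: list[RadarContact]) -> list[tuple[str, RadarContact]]:
    """Rank contacts so the bridge team sees the most urgent first."""
    alerts = []
    for c in contacts:
        if c.cpa_nm < 0.5 and c.tcpa_min < 10:
            alerts.append(("DANGER", c))
        elif c.cpa_nm < 1.0 and c.tcpa_min < 20:
            alerts.append(("WARNING", c))
    # Most imminent encounters first, rather than a flood of raw targets.
    return sorted(alerts, key=lambda a: a[1].tcpa_min)

contacts = [
    RadarContact("T1", range_nm=4.2, cpa_nm=0.3, tcpa_min=8.0),
    RadarContact("T2", range_nm=9.0, cpa_nm=2.5, tcpa_min=30.0),  # filtered out
    RadarContact("T3", range_nm=6.1, cpa_nm=0.8, tcpa_min=15.0),
]
for level, c in prioritise(contacts):
    print(level, c.target_id, f"CPA {c.cpa_nm} nm in {c.tcpa_min} min")
```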

The CB Pacific, which is equipped with Furuno radar and sails consistent routes, will serve as a testbed to evaluate APAS performance in live conditions. Trials will assess collision prediction, safe navigation, signal processing, and compliance with maritime rules.

Mythos AI emphasises that APAS is designed to support crews, not replace them. CEO Geoff Douglass said the installation marks the company’s first operational use of the system on a tanker and a milestone in its wider commercial roadmap.

For LomarLabs, the pilot showcases its hands-on innovation model, offering vessel access and oversight to facilitate collaboration with startups. Managing Director Stylianos Papageorgiou said the radar-first architecture shows how modular autonomy can be advanced through trust, time, and fleet partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Record funding and new assurance measures mark fresh UK AI push

Private backing for UK AI companies has reached £2.9 billion, with average deals of £5.9 million, driving record growth across the sector. Ministers say investment is spreading regionally, with the number of firms in the Midlands, Yorkshire, Wales, and the North West doubling in just three years.

At Mansion House, Technology Secretary Peter Kyle urged industry to cut red tape, expand data centres, and attract global talent. He emphasised that public trust, supported by AI assurance measures, is crucial for growth.

The assurance roadmap aims to add billions to the economy by creating a dedicated profession to review AI systems for safety, ethics, and accountability. Independent experts will be tasked with certifying systems, while a consortium of professional bodies develops a code of ethics to guide standards.

Further initiatives include £2.7 million to boost regulator capacity and fund AI projects at Ofgem, the Civil Aviation Authority, and the Office for Nuclear Regulation, covering energy, aviation, and nuclear waste.

Officials say these measures will help position the UK as a world leader in AI innovation, while ensuring growth is matched with robust oversight and public confidence in the technology.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tourism boards across Europe embrace AI but face gaps in strategy and skills

A new study by the European Travel Commission (ETC) shows that national tourism organisations (NTOs) are experimenting with AI but face gaps in strategy and skills.

Marketing teams are leading the way, applying AI in content generation and workflow streamlining, whereas research departments primarily view the tools as exploratory. Despite uneven readiness, most staff show enthusiasm, with little resistance reported.

The survey highlights challenges, including limited budgets, sparse training, and the absence of a clear roadmap. Early adopters report tangible productivity gains, but most NTOs are still running small pilots rather than embedding AI across operations.

Recommendations include ring-fencing time for structured experiments, offering role-specific upskilling, and scaling budgets aligned with results. The report also urges the creation of shared learning spaces and providing practical support to help organisations transition from testing to sustained adoption.

ETC President Miguel Sanz said AI offers clear opportunities for tourism boards, but uneven capacity means shared tools and targeted investment will be essential to ensure innovation benefits all members.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek prepares new AI agent model to rival US competitors

According to people familiar with the plans, Chinese startup DeepSeek is developing an AI model with enhanced agent features to compete with US firms such as OpenAI.

The Hangzhou-based company intends for the system to perform multi-step tasks with limited input and adapt from its previous actions.
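
DeepSeek has not disclosed the model’s design. As a generic illustration of the agent pattern described here (plan a step, act, observe, and adapt from previous actions), a minimal sketch with the model and tools stubbed out might look like this:

```python
# Generic sketch of the agent loop the article describes: plan a step, act,
# observe, and feed the outcome back in. The model call is a stub; nothing
# here reflects DeepSeek's actual implementation.
def call_model(history: list[str]) -> str:
    """Stand-in for an LLM that proposes the next action given past steps."""
    return "FINISH" if len(history) >= 3 else f"step {len(history) + 1}"

def run_tool(action: str) -> str:
    """Stand-in for executing an action (search, code, API call, ...)."""
    return f"result of {action}"

def run_agent(task: str, max_steps: int = 10) -> list[str]:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        action = call_model(history)        # decide the next step
        if action == "FINISH":
            break
        observation = run_tool(action)      # act with limited user input
        history.append(observation)         # adapt from previous actions
    return history

print(run_agent("summarise three sources"))
```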

Founder Liang Wenfeng has urged his team to prepare the release before the end of 2025. The project follows DeepSeek’s earlier success with R1, a reasoning-focused model launched in January that attracted attention for its low development costs.

Since then, DeepSeek has delivered only incremental updates while rivals in China and the US have accelerated new product launches.

The shift towards AI agents reflects a broader industry move to develop tools capable of managing complex real-world tasks, from research to coding, with less reliance on users. OpenAI, Anthropic, Microsoft, and Manus AI have already introduced similar projects.

Most systems still require significant oversight, highlighting the challenges of building fully autonomous agents.

DeepSeek declined to comment on the development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Key AI researchers depart Apple for rivals Meta and OpenAI

Apple is confronting a significant exodus of AI talent, with key researchers departing for rival firms instead of advancing projects in-house.

The company lost its lead robotics researcher, Jian Zhang, to Meta’s Robotics Studio, alongside several core Foundation Models team members responsible for the Apple Intelligence platform. The brain drain has triggered internal concerns about Apple’s strategic direction and declining staff morale.

Instead of relying entirely on its own systems, Apple is reportedly considering a shift towards using external AI models. The departures include experts like Ruoming Pang, who accepted a multi-year package from Meta reportedly worth $200 million.

Other AI researchers are set to join leading firms like OpenAI and Anthropic, highlighting a fierce industry-wide battle for specialised expertise.

At the centre of the talent war is Meta CEO Mark Zuckerberg, who is offering lucrative packages worth up to $100 million to secure leading researchers for Meta’s ambitious AI and robotics initiatives.

The aggressive recruitment strategy is strengthening Meta’s capabilities while simultaneously weakening the internal development efforts of competitors like Apple.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CJEU dismisses bid to annul EU-US data privacy framework

The General Court of the Court of Justice of the European Union (CJEU) has dismissed an action seeking the annulment of the EU–US Data Privacy Framework (DPF). Essentially, the DPF is an agreement between the EU and the USA allowing personal data to be transferred from the EU to US companies without additional data protection safeguards.

Following the agreement, the European Commission conducted further investigations to assess whether it offered adequate safeguards. On 10 July 2023, the Commission adopted an adequacy decision concluding that the USA ensures a sufficient level of protection comparable to that of the EU when transferring data from the EU to the USA, and that there is no need for supplementary data protection measures.

However, on 6 September 2023, Philippe Latombe, a member of the French Parliament, brought an action seeking annulment of the EU–US DPF.

He argued that the framework fails to ensure adequate protection of personal data transferred from the EU to the USA. Latombe also claimed that the Data Protection Review Court (DPRC), which is responsible for reviewing safeguards during such data transfers, lacks impartiality and independence and depends on the executive branch.

Finally, Latombe asserted that ‘the practice of the intelligence agencies of that country of collecting bulk personal data in transit from the European Union, without the prior authorisation of a court or an independent administrative authority, is not circumscribed in a sufficiently clear and precise manner and is, therefore, illegal.’ The General Court nevertheless dismissed the action for annulment, finding that:

  • the DPRC has sufficient safeguards to ensure judicial independence,
  • US intelligence agencies’ bulk data collection practices are compatible with EU fundamental rights, and
  • the European Commission retains the ability to suspend or amend the framework if US legal safeguards change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RBA develops internal AI chatbot

The Reserve Bank of Australia has developed and is testing an in-house, AI-powered chatbot to assist its staff with research and analysis.

Named RBAPubChat, the tool is trained on the central bank’s knowledge base of nearly 20,000 internal and external analytical documents spanning four decades. It aims to help employees ask policy-relevant questions and get useful summaries of existing information.
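
The RBA has not described RBAPubChat’s architecture, but tools of this kind are commonly built with retrieval-augmented generation: fetch the most relevant documents, then have a model summarise them. A minimal sketch, with a toy keyword retriever and a stubbed summariser (the corpus entries and names are invented):

```python
# Hypothetical sketch of the retrieval-augmented pattern commonly used for
# tools like RBAPubChat: retrieve relevant documents, then summarise them.
# The RBA has not published its design; the corpus and summariser are stubs.
corpus = {
    "smp-2024-wages": "Wage growth moderated over 2024 as labour market ...",
    "liaison-note-17": "Firms in the liaison programme reported hiring ...",
    "rdp-housing":     "Housing credit conditions tightened following ...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def answer(question: str) -> str:
    context = "\n".join(corpus[d] for d in retrieve(question))
    # In a real system an LLM would summarise `context`; stubbed here.
    return f"Summary based on: {context[:60]}..."

print(answer("How has wage growth evolved in the labour market?"))
```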

Speaking at the Shann Memorial Lecture in Perth, Governor Michele Bullock said that the tool is not being used to formulate or set monetary policy. Instead, it is intended to improve efficiency and amplify the impact of staff efforts.

A separate tool using natural language processing has also been developed to analyse over 22,000 conversations from the bank’s business liaison programme. The Reserve Bank of Australia has noted that this tool has already shown promise, helping to forecast wage growth more accurately than traditional models.

The RBA has also acquired its first enterprise-grade graphics processing unit to support the development and running of advanced AI-driven tools.

The bank’s internal coding community is now a well-established part of its operations, with one in four employees using coding as a core part of their daily work. Governor Bullock stressed that the bank’s approach to technology is one of “deliberate, well-managed evolution” rather than disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a way to explore different uses of AI, but the explanation did little to calm unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.

According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Patients’ private information not only ends up in the wrong hands; the breach also takes the better part of a year to contain.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not just a common trope in films and TV shows; it is a regular occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusion.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
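
Purely as an illustration of what such ‘scripted safeguards’ and signposting can look like in code, here is a toy sketch; real systems rely on trained risk classifiers and clinically reviewed wording rather than a bare keyword list:

```python
# Illustrative only: a toy version of the "scripted safeguards" mentioned
# above. Production systems use trained risk classifiers and clinically
# reviewed wording, not a bare keyword list.
CRISIS_TERMS = {"suicide", "self-harm", "kill myself", "end my life"}

SIGNPOST = (
    "I'm not able to provide the support you need right now. "
    "Please consider reaching out to a crisis line or a mental "
    "health professional in your area."
)

def guard(user_message: str) -> str | None:
    """Return a signposting message if the input suggests acute risk."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return SIGNPOST
    return None  # otherwise let the normal model response proceed

print(guard("Lately I've been thinking about self-harm."))
```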

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to further elaborate on his emotions. Instead of challenging them, the AI model kept encouraging and validating his beliefs to keep Adam engaged and build rapport.

Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.

While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs. It is difficult to predict what the future of AI in mental health care will look like. As things stand, in such a delicate field of healthcare, AI lacks a key component that makes a therapist effective in their job: empathy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!