AWS launches frontier agents to boost software development

AWS has launched frontier agents, autonomous AI tools that extend software development teams. The first three – Kiro, AWS Security Agent, and AWS DevOps Agent – enhance development, security, and operations while working independently for extended periods.

Kiro functions as a virtual developer, maintaining context, learning from feedback, and managing tasks across multiple repositories. AWS Security Agent automates code reviews and penetration testing, and enforces organisational security standards.

AWS DevOps Agent identifies root causes of incidents, reduces alerts, and provides proactive recommendations to improve system reliability.

These agents operate autonomously, scale across multiple tasks, and free teams from repetitive work, allowing focus on high-priority projects. Early users, including SmugMug and Commonwealth Bank of Australia, report quicker development, stronger security, and more efficient operations.

By integrating frontier agents into the software development lifecycle, AWS is shifting AI from task assistance to completing complex projects independently, marking a significant step forward in what AI can achieve for development teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals with clear outputs – such as datasets, evaluation methods or practical insights – that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety highlights risks in connected vehicle technology

Australia’s eSafety regulator is drawing attention to concerns about how connected car features can be misused within domestic and family violence situations.

Reports from frontline workers indicate that remote access tools, trip records and location tracking can be exploited instead of serving their intended purpose as safety and convenience features.

The Australian regulator stresses that increased connectivity across vehicles and devices is creating new challenges for those supporting victim-survivors.

Smart cars often store detailed travel information and allow remote commands through apps and online accounts. These functions can be accessed by someone with shared credentials or linked accounts, which can expose sensitive information.

eSafety notes that misuse of connected vehicles forms part of a broader pattern of technology-facilitated coercive control, where multiple smart devices such as watches, tablets, cameras and televisions can play a role.

The regulator has produced updated guidance to help people understand potential risks and take practical steps with the support of specialist services.

Officials highlight the importance of stronger safeguards from industry, including simpler methods for revoking access, clearer account transfer processes during separation and more transparent logs showing when remote commands are used.

Retailers and dealerships are encouraged to ensure devices and accounts are reset when ownership changes. eSafety argues that design improvements introduced early can reduce the likelihood of harm, rather than requiring complex responses later.

Agencies and community services continue to assist those affected by domestic and family violence, offering advice on account security, safe device use and available support services.

The guidance aims to help people take protective measures in a controlled and safe way, while emphasising the importance of accessing professional assistance.

eSafety encourages ongoing cooperation between industry, government and frontline workers to manage risks linked to emerging automotive and digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum money meets Bitcoin: Building unforgeable digital currency

Quantum money might sound like science fiction, yet it is rapidly emerging as one of the most compelling frontiers in modern digital finance. Initially a theoretical concept, it was far ahead of the technology of its time, making practical implementation impossible. Today, thanks to breakthroughs in quantum computing and quantum communication, scientists are reviving the idea, investigating how the principles of quantum physics could finally enable unforgeable quantum digital money. 

Comparisons between blockchain and quantum money are frequent and, on the surface, appear logical, yet can these two visions of new-generation cash genuinely be measured by the same yardstick? 

Origins of quantum money 

Quantum money was first proposed by physicist Stephen Wiesner in the late 1960s. Wiesner envisioned a system in which each banknote would carry quantum particles encoded in specific states, known only to the issuing bank, making the notes inherently secure. 

Due to the peculiarities of quantum mechanics, these quantum states could not be copied, offering a level of security fundamentally impossible with classical systems. At the time, however, quantum technologies were purely theoretical, and devices capable of creating, storing, and accurately measuring delicate quantum states simply did not exist. 

For decades, Wiesner’s idea remained a fascinating thought experiment. Today, the rise of functional quantum computers, advanced photonic systems, and reliable quantum communication networks is breathing new life into the concept, allowing researchers to explore practical applications of quantum money in ways that were once unimaginable.

A new battle for the digital throne is emerging as quantum money shifts from theory to possibility, challenging whether Bitcoin’s decentralised strength can hold its ground in a future shaped by quantum technology.

The no-cloning theorem: The physics that makes quantum money impossible to forge

At the heart of quantum money lies the no-cloning theorem, a cornerstone of quantum mechanics. The principle establishes that it is physically impossible to create an exact copy of an unknown quantum state. Any attempt to measure a quantum state inevitably alters it, meaning that copying or scanning a quantum banknote destroys the very information that ensures its authenticity. 
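For readers who want the underlying argument, here is a minimal sketch of the textbook linearity proof of the no-cloning theorem (general quantum mechanics, not tied to any particular quantum money scheme):

```latex
% Standard linearity argument for the no-cloning theorem.
% Suppose a unitary U could copy an arbitrary unknown state:
\[
  U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\psi\rangle \otimes \lvert\psi\rangle,
  \qquad
  U\bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\phi\rangle \otimes \lvert\phi\rangle .
\]
% Unitaries preserve inner products, so comparing the two equations gives
\[
  \langle\psi\vert\phi\rangle
    = \bigl(\langle\psi\vert\phi\rangle\bigr)^{2}
  \quad\Longrightarrow\quad
  \langle\psi\vert\phi\rangle \in \{0,\,1\}.
\]
% Only identical or orthogonal states could be cloned, so no device can
% copy an arbitrary unknown quantum state.
```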

This unique property makes quantum money exceptionally secure: unlike blockchain, which relies on cryptographic algorithms and distributed consensus, quantum money derives its protection directly from the laws of physics. In theory, a quantum banknote cannot be counterfeited, even by an attacker with unlimited computing resources, which is why quantum money is considered one of the most promising approaches to unforgeable digital currency.

How quantum money works in theory

Quantum money schemes are typically divided into two main types: private and public. 

In private quantum money systems, a central authority – such as a bank – creates quantum banknotes and remains the only entity capable of verifying them. Each note carries a classical serial number alongside a set of quantum states known solely to the issuer. The primary advantage of this approach is its absolute immunity to counterfeiting, as no one outside the issuing institution can replicate the banknote. However, such systems are fully centralised and rely entirely on the security and infrastructure of the issuing bank, which inherently limits scalability and accessibility.
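To make the private scheme concrete, here is a rough, purely classical Python simulation of Wiesner-style issuance and verification; the note length, serial number and helper names are illustrative assumptions rather than part of any real protocol:

```python
# Rough, purely classical simulation of a Wiesner-style private quantum
# money check. Qubit behaviour is idealised: measuring in the preparation
# basis returns the encoded bit, measuring in the conjugate basis returns
# a uniformly random bit.
import random

N_QUBITS = 32  # qubits per banknote (illustrative choice)

def issue_note(serial):
    """Bank issues a note: a secret basis ('+' or 'x') and bit per qubit."""
    bases = [random.choice("+x") for _ in range(N_QUBITS)]
    bits = [random.randint(0, 1) for _ in range(N_QUBITS)]
    return {"serial": serial, "bases": bases, "bits": bits}

def measure(bit, prepared_basis, measured_basis):
    """Same basis -> deterministic outcome; conjugate basis -> random."""
    return bit if prepared_basis == measured_basis else random.randint(0, 1)

def counterfeit(note):
    """A forger guesses a basis per qubit, measures, and re-prepares a copy."""
    guesses = [random.choice("+x") for _ in range(N_QUBITS)]
    observed = [measure(b, pb, g)
                for b, pb, g in zip(note["bits"], note["bases"], guesses)]
    return {"serial": note["serial"], "bases": guesses, "bits": observed}

def bank_verifies(record, presented):
    """The bank measures each presented qubit in its secret basis and
    accepts only if every outcome matches the originally issued bit."""
    outcomes = [measure(b, pb, sb)
                for b, pb, sb in zip(presented["bits"], presented["bases"],
                                     record["bases"])]
    return outcomes == record["bits"]

record = issue_note(serial=42)
forgery = counterfeit(record)
print("genuine note accepted:", bank_verifies(record, record))    # True
print("forged note accepted: ", bank_verifies(record, forgery))   # ~(3/4)^32 chance
```

Because a counterfeiter must guess each encoding basis, every additional qubit multiplies the chance of a forgery passing verification by roughly three quarters, so even modest note lengths make successful forgery vanishingly unlikely.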

Public quantum money, by contrast, pursues a more ambitious goal: allowing anyone to verify a quantum banknote without consulting a central authority. Developing this level of decentralisation has proven exceptionally difficult. Numerous proposed schemes have been broken by researchers who have managed to extract information without destroying the quantum states. Despite these challenges, public quantum money remains a major focus of quantum cryptography research, with scientists actively pursuing secure and scalable methods for open verification. 

Beyond theoretical appeal, quantum money faces substantial practical hurdles. Quantum states are inherently fragile and susceptible to decoherence, meaning they can lose their information when interacting with the surrounding environment. 

Maintaining stable quantum states demands highly specialised and costly equipment, including photonic processors, quantum memory modules, and sophisticated quantum error-correction systems. Any error or loss could render a quantum banknote completely worthless, and no reliable method currently exists to store these states over long periods. In essence, the concept of quantum money is groundbreaking, yet real-world implementation requires technological advances that are not yet mature enough for mass adoption. 

Bitcoin solves the duplication problem differently

While quantum money relies on the laws of physics to prevent counterfeiting, Bitcoin tackles the duplication problem through cryptography and distributed consensus. Each transaction is verified across thousands of nodes, and SHA-256-based proof-of-work, combined with network-wide consensus, secures the blockchain against double spending without the need for a central authority.
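As a rough illustration of the hashing side of that mechanism, the sketch below mines a toy block by searching for a nonce whose SHA-256 digest falls below a target; the header string and difficulty are invented for demonstration and do not reflect Bitcoin's actual block format:

```python
# Toy proof-of-work: brute-force a nonce until the SHA-256 digest of the
# block data falls below a difficulty target.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Return the first nonce whose hash meets the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"prev_hash|merkle_root|timestamp")
print("nonce found after brute-force search:", nonce)
```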

Unlike elliptic curve cryptography, which could eventually be vulnerable to large-scale quantum attacks, SHA-256 has proven remarkably resilient; even quantum algorithms such as Grover’s offer only a marginal advantage, reducing the search space from 2^256 to 2^128 – still far beyond any realistic brute-force attempt.
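The arithmetic behind that comparison is easy to spell out; the snippet below simply restates the quadratic speed-up in numbers and makes no claim about real-world attack feasibility:

```python
# The quadratic speed-up in numbers: Grover reduces an exhaustive search
# over 2**256 possibilities to roughly sqrt(2**256) = 2**128 queries.
classical_queries = 2 ** 256
grover_queries = 2 ** 128  # = sqrt(2 ** 256)

print(f"classical search space: 2^256 ≈ {classical_queries:.2e}")
print(f"with Grover's algorithm: 2^128 ≈ {grover_queries:.2e}")
```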

Bitcoin’s security does not hinge on unbreakable mathematics alone but on a combination of decentralisation, network verification, and robust cryptographic design. Many experts therefore consider Bitcoin effectively quantum-proof, with most of the dramatic threats predicted from quantum computers unlikely to materialise in practice.

Software-based and globally accessible, Bitcoin operates independently of specialised hardware, allowing users to send, receive, and verify value anywhere in the world without the fragility and complexity inherent in quantum systems. Furthermore, the network can evolve to adopt post-quantum cryptographic algorithms, ensuring long-term resilience, making Bitcoin arguably the most battle-hardened digital financial instrument in existence. 

Could quantum money be a threat to Bitcoin?

In reality, quantum money and Bitcoin address entirely different challenges, meaning the former is unlikely to replace the latter. Bitcoin operates as a global, decentralised monetary network with established economic rules and governance, while quantum money represents a technological approach to issuing physically unforgeable tokens. Bitcoin is not designed to be physically unclonable; its strength lies in verifiability, decentralisation, and network-wide trust.

However, SHA-256 – the hashing algorithm that underpins Bitcoin mining and block creation – remains highly resistant to quantum threats. Quantum computers achieve only a quadratic speed-up through Grover’s algorithm, which is insufficient to break SHA-256 in practical terms. Bitcoin also retains the ability to adopt post-quantum cryptographic standards as they mature, whereas quantum money is limited by rigid physical constraints that are far harder to update.

Quantum money also remains too fragile, complex, and costly for widespread use. Its realistic applications are limited to state institutions, military networks, or highly secure financial environments rather than everyday payments. Bitcoin, by contrast, already benefits from extensive global infrastructure, strong market adoption, and deep liquidity, making it far more practical for daily transactions and long-term digital value transfer. 

Where quantum money and blockchain could coexist

Although fundamentally different, quantum money and blockchain technologies have the potential to complement one another in meaningful ways. Quantum key distribution could strengthen the security of blockchain networks by protecting communication channels from advanced attacks, while quantum-generated randomness may enhance cryptographic protocols used in decentralised systems. 

Researchers have also explored the idea of using ‘quantum tokens’ to provide an additional privacy layer within specialised blockchain applications. Both technologies ultimately aim to deliver secure and verifiable forms of digital value. Their coexistence may offer the most resilient future framework for digital finance, combining the physics-based protection of quantum money with the decentralisation, transparency, and global reach of blockchain technology. 

Quantum physics meets blockchain for the future of secure currency

Quantum money remains a remarkable concept, originally decades ahead of its time, and now revived by advances in quantum computing and quantum communication. Although it promises theoretically unforgeable digital currency, its fragility, technical complexity, and demanding infrastructure make it impractical for large-scale use. 

Bitcoin, by contrast, stands as the most resilient and widely adopted model of decentralised digital money, supported by a mature global network and robust cryptographic foundations. 

Quantum money and Bitcoin stand as twin engines of a new digital finance era, where quantum physics is reshaping value creation, powering blockchain innovation, and driving next-generation fintech solutions for secure and resilient digital currency. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves forward with Bulgaria payment review

The European Commission has given partial approval to Bulgaria’s request for €1.6 billion under the Recovery and Resilience Facility. The assessment followed the country’s submission in early October and confirmed that most reforms and investments linked to the payment were completed.

Progress spanned the green and digital transition, research, innovation, healthcare, social protection, sustainable transport and business modernisation.

Officials confirmed that 48 of 50 milestones were met, supporting Bulgaria’s efforts to strengthen economic growth and improve long-term competitiveness, rather than delaying structural change.

Measures covered a prohibition on new coal or lignite power installations, limits on emissions from existing plants, investment in renewable energy and steps to make healthcare careers more appealing.

The Commission noted that these areas formed core elements of Bulgaria’s recovery plan.

Two milestones were considered incomplete. The first relates to the establishment of an operational anti-corruption body; the second concerns aspects of legal acts linked to criminal proceedings and the accountability of the Prosecutor General.

Additionally, the Commission proposed a temporary deferral for the portion of funding connected to those elements, allowing Bulgaria to receive money for milestones already achieved instead of holding back the entire request.

The next stage involves a review by the Economic and Financial Committee within four weeks. Bulgaria will also have one month to respond to the Commission’s concerns. If issues remain unresolved, part of the payment will be withheld until the outstanding milestones are met.

Once corrective actions are completed, the remaining funds will be released in line with the standard procedure for the Recovery and Resilience Facility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by phone calls from callers posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore and the EU advance their digital partnership

The European Union met Singapore in Brussels for the second Digital Partnership Council, reinforcing a joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of administrative arrangements in AI safety. They explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s Sea-Lion initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which placed particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The work, carried out by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

Researchers also argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta criticised over AI-generated advert scams

Meta has faced criticism after numerous consumers reported being misled by companies using AI-generated adverts on Facebook and Instagram. The firms posed as UK businesses while shipping cheap goods from Asia, prompting claims that scams were ‘running rampant’ on the platforms.

Victims were persuaded by realistic adverts and AI-generated images but received poorly made clothing and jewellery. Several companies, including C’est La Vie, Mabel & Daisy, Harrison & Hayes, and Chester & Clare, were removed after investigations revealed fabricated backstories and fake shopfronts.

Consumer guides recommend vigilance, advising shoppers to check company websites and reviews, and to use Trustpilot to verify legitimacy. Experts warn that overly perfect images, including AI-generated shopfronts or models, may signal fraudulent adverts.

Platforms such as Facebook and Instagram are urged to enforce stricter measures to prevent scams.

Meta stated it works with Stop Scams UK and encourages users to report suspicious adverts, while the Advertising Standards Authority continues to crack down on misleading online promotions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!