Quantum era promises new breakthroughs in security and sensing

Quantum technology has moved from academic circles into public debate, with applications already shaping industries and daily life.

For decades, quantum mechanics has powered tools like semiconductors, GPS and fibre optics, a foundation often described as Quantum 1.0. The UN has declared 2025 the International Year of Quantum Science and Technology to mark its impact.

Researchers are now advancing Quantum 2.0, which manipulates atoms, ions and photons to exploit superposition and entanglement. Emerging tools include quantum encryption systems, distributed atomic clocks to secure networks against GPS failures, and sensing devices with unprecedented precision.

Experts warn that disruptions to satellite navigation could cost billions, but quantum clocks may keep economies and critical infrastructure synchronised. With quantum computing and AI developing in parallel, future breakthroughs could transform medicine, energy, and security.

Achieving this vision will require global collaboration across governments, academia and industry to scale up technologies, ensure supply chain resilience and secure international standards.

Australia moves to block AI nudify apps

Australia has announced plans to curb AI tools that generate nude images and enable online stalking. The government said it would introduce new legislation requiring tech companies to block apps designed to abuse and humiliate people.

Communications Minister Anika Wells said such AI tools are fuelling sextortion scams and putting children at risk. So-called ‘nudify’ apps, which digitally strip clothing from images, have spread quickly online.

A Save the Children survey found one in five young people in Spain had been targeted by deepfake nudes, showing how widespread the abuse has become.

Canberra pledged to use every available measure to restrict access, while ensuring that legitimate AI services are not harmed. Australia has already passed strict laws banning under-16s from social media, with the new measures set to build on its reputation as a leader in online safety.

Nigeria sets sights on top 50 AI-ready nations

Nigeria has pledged to become one of the top 50 AI-ready nations, according to presidential adviser Hadiza Usman. Speaking in Abuja at a colloquium on AI policy, she said the country needs strong leadership, investment, and partnerships to meet its goals.

She stressed that policies must address Nigeria’s unique challenges and not simply replicate foreign models. The government will offer collaboration opportunities with local institutions and international partners.

The Nigeria Deposit Insurance Corporation reinforced its support, noting that technology should protect depositors without restricting innovators.

Private sector voices said AI could transform healthcare, agriculture, and public services if policies are designed with inclusion and trust in mind.

OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project launched in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.

Google Cloud study shows AI agents driving global business growth

A new Google Cloud study indicates that more than half of global enterprises are already using AI agents, with many reporting consistent revenue growth and faster return on investment.

The research, based on a survey of 3,466 executives across 24 countries, suggests agentic AI is moving from trial projects to large-scale deployment.

The findings reveal that 52% of executives said their organisations actively use AI agents, while 39% reported deploying more than ten. A group of early adopters, representing 13% of respondents, has gone further by dedicating at least half of their future AI budgets to agentic AI.

These companies are embedding agents across operations and are more likely to report returns in customer service, marketing, cybersecurity and software development.

The report also highlights how industries are tailoring adoption. Financial services focus on fraud detection, retail uses agents for quality control, and telecom operators apply them for network automation.

Regional variations are notable: European companies prioritise tech support, Latin American firms lean on marketing, while Asia-Pacific enterprises emphasise customer service.

Although enthusiasm is strong, challenges remain. Executives cited data privacy, security and integration with existing systems as key concerns.

Google Cloud executives said that early adopters are not only automating tasks but also reshaping business processes, with 2025 expected to mark a shift towards embedding AI intelligence directly into operations.

Perplexity AI teams up with PayPal for fintech expansion

PayPal has partnered with Perplexity AI to provide PayPal and Venmo users in the US and select international markets with a free 12-month Perplexity Pro subscription and early access to the AI-powered Comet browser.

The subscription, valued at $200, includes unlimited queries, file uploads and advanced search features, while Comet offers natural-language browsing to simplify complex tasks.

Industry analysts see the initiative as a way for PayPal to strengthen its position in fintech by integrating AI into everyday digital payments.

By linking their accounts, users gain access to AI tools, cashback incentives and subscription management features, signalling a push toward what some describe as agentic commerce, where AI assistants guide financial and shopping decisions.

The deal also benefits Perplexity AI, a rising challenger in the search and browser markets. Exposure to millions of PayPal customers could accelerate the adoption of its technology and provide valuable data for refining models.

Analysts suggest the partnership reflects a broader trend of payment platforms evolving into service hubs that combine transactions with AI-driven experiences.

While enthusiasm is high among early users, concerns remain about data privacy and regulatory scrutiny over AI integration in finance.

Market reaction has been positive, with PayPal shares edging upward following the announcement. Observers believe such alliances will shape the next phase of digital commerce, where payments, browsing, and AI capabilities converge.

Fintech CISO says AI is reshaping cybersecurity skills

Financial services firms are adapting rapidly to the rise of AI in cybersecurity, according to David Ramirez, CISO at Broadridge. He said AI is changing the balance between attackers and defenders while also reshaping the skills security teams require.

On the defensive side, AI is already streamlining governance, risk management and compliance tasks, while also speeding up incident detection and training. He highlighted its growing role in areas like access management and data loss prevention.

He also stressed the importance of aligning cyber strategy with business goals and improving board-level visibility. While AI tools are advancing quickly, he urged CISOs not to lose sight of risk assessments and fundamentals in building resilient systems.

Hollywood’s Warner Bros. Discovery challenges an AI firm over copyright claims

Warner Bros. Discovery has filed a lawsuit against AI company Midjourney, accusing it of large-scale infringement of its intellectual property. The move follows similar actions by Disney and Universal, signalling growing pressure from major studios on AI image and video generators.

The filing includes examples of Midjourney-produced images featuring DC Comics, Looney Tunes and Rick and Morty characters. Warner Bros. Discovery argues that such output undermines its business model, which relies heavily on licensed images and merchandise.

The studio also claims Midjourney profits from copyright-protected works through its subscription services and the ‘Midjourney TV’ platform.

A central question in the case is whether AI-generated material reproducing copyrighted characters constitutes infringement under US law. The courts have not decided on this issue, making the outcome uncertain.

Warner Bros. Discovery is also challenging how Midjourney trains its models, pointing to past statements from company executives suggesting vast quantities of material were indiscriminately collected to build its systems.

With three major Hollywood studios now pursuing lawsuits, the outcome of these cases could establish a precedent for how courts treat AI-generated content.

Warner Bros. Discovery seeks damages that could reach $150,000 per infringed work, or Midjourney’s profits linked to the alleged violations.

GPT-5 flunks kindergarten test despite PhD-level promise

Critics quickly derided OpenAI’s newly released GPT-5 for failing tasks that a five-year-old could ace, raising questions about the disparity between hype and performance.

Despite being promoted as ‘PhD-level’, the model produced a distorted, blob-like map of North America and invented mismatched portraits of US presidents with fictional names.

AI researcher Gary Marcus lowered the bar further, giving GPT-5 a kindergarten-level challenge that it also failed. He posted: ‘GPT-5 failed a kindergarten-level task. Speechless.’ He criticised the rushed rollout and the hype that may have obscured the model’s visual reasoning weaknesses.

Further tests exposed inconsistencies: when asked to map France and label its 12 most populous cities, GPT-5 returned inaccurate or incomplete results, omitting Paris entirely and including Orléans, which is not among the country’s 12 largest cities.

Oddly, when the same queries were posed in text-only form, the model performed better, highlighting the weakness in its image generation and visual logic.

EASA survey reveals cautious optimism over aviation AI ethics

The European Union Aviation Safety Agency (EASA) has published the results of a survey probing aviation professionals’ ethical outlook on AI deployment, released during its AI Days event in Cologne.

The AI Days conference gathered nearly 200 on-site attendees from across the globe, with even more participating online.

The survey measured acceptance, trust and comfort across eight hypothetical AI use cases, yielding an average acceptance score of 4.4 out of 7. Despite growing interest, two-thirds of respondents rejected at least one of the scenarios.

Their key concerns included limitations of AI performance, privacy and data protection, accountability, safety risks and the potential for workforce de-skilling. A clear majority called for stronger regulation and oversight by EASA and national authorities.

In a keynote address, Christine Berg from the European Commission highlighted that AI in aviation is already delivering practical benefits, such as optimising air traffic flow and predictive maintenance, while emphasising the need for explainable, reliable and certifiable systems under the EU AI Act.

Survey findings will feed into EASA’s AI Roadmap and prompt public consultations as the agency advances policy and regulatory frameworks.