Beware of fake deals as Prime Day approaches

A surge in online scams is expected ahead of Amazon’s Prime Day, which runs from 8 to 11 July, as fraudsters use increasingly sophisticated tactics. Advice Direct Scotland is issuing a warning to shoppers across Scotland: AI-enhanced phishing emails, bogus renewal notices, and fake refund offers are on the rise.

In one common ruse, scammers impersonate Amazon in messages stating your Prime membership has expired or that your account needs urgent verification. Others go further, claiming your Amazon account has been hacked and demanding remote access to your device, something the real company never does. Victims in Scotland reportedly lost around £860,000 to similar crimes last year, as scam technology becomes more convincing.

Advice Direct Scotland reminds shoppers not to rush and to trust their instincts. Genuine Amazon communications will never ask for remote access, passwords, or financial information over email or phone. If in doubt, hang up and check your account via official channels, or reach out to the charity’s ScamWatch hotline.

Those seeking guidance can contact Advice Direct Scotland via phone or online chat, or report suspected scams using the free ScamWatch tool. With Prime Day bargains tempting many, staying vigilant could mean avoiding a costly mistake.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365 and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.


AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
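For context, an AUC score can be read as a ranking probability: an AUC of 0.89 means that, given a randomly chosen pair of one high-risk and one low-risk patient, the model assigns the higher risk score to the high-risk patient 89% of the time (0.5 is chance, 1.0 is perfect). A minimal sketch of the calculation, using illustrative data rather than anything from the study:

```python
def auc(scores, labels):
    """Pairwise AUC: the fraction of (positive, negative) pairs
    where the positive case receives the higher risk score;
    ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: scores are hypothetical model outputs,
# label 1 marks the patients who actually experienced an event.
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
print(auc(scores, labels))  # 5 of 6 pos-neg pairs ranked correctly, ~0.83
```

A perfect model would score 1.0; random guessing converges on 0.5, which is why 0.89 versus the 0.7 range typical of guideline calculators is a meaningful gap.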

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.


AI brings Babylon’s lost hymn back to life

A hymn to the ancient city of Babylon has been reconstructed after 2,100 years using AI to piece together 30 clay tablet fragments. Once lost after Alexander the Great’s conquest, the song praises the city’s grandeur, morals and daily life in exceptional poetic detail.

The hymn, sung to the god Marduk, depicts Babylon as a flourishing paradise filled with jewelled gates, verdant pastures and flowing rivers. AI tools helped researchers quickly assemble and translate the fragments, revealing a third of the original 250-line text.

The poem sheds rare light on Babylonian values, highlighting kindness to foreigners, the release of prisoners and the sanctity of orphans. It also gives a surprising glimpse into the role of women, including cloistered priestesses who acted as midwives.

Parts of the hymn were copied out by schoolchildren up to 1,400 years after it was composed, showing its cultural importance. Scholars now place it alongside the Epic of Gilgamesh as one of the most treasured literary works from ancient Mesopotamia.


X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

These notes, a user-driven fact-checking system expanded under Elon Musk, are meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems like Grok or third-party large language models to submit notes via an API. Each AI-generated note will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.


The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students with essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.

Participants who became accustomed to using AI found it more challenging to write without it later on. This reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties defining human intelligence—creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.


AI governance through the lens of magical realism

AI today straddles the line between the extraordinary and the mundane, a duality that evokes the spirit of magical realism—a literary genre where the impossible blends seamlessly with the real. Speaking at the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, Jovan Kurbalija proposed that we might better understand the complexities of AI governance by viewing it through this narrative lens.

Like Gabriel García Márquez’s floating characters or Salman Rushdie’s prophetic protagonists, AI’s remarkable feats—writing novels, generating art, mimicking human conversation—are increasingly accepted without question, despite their inherent strangeness.

Kurbalija argues that AI, much like the supernatural in literature, doesn’t merely entertain; it reveals and shapes profound societal realities. Algorithms quietly influence politics, reshape economies, and even redefine relationships.

Just as magical realism uses the extraordinary to comment on power, identity, and truth, AI forces us to confront new ethical dilemmas: Who owns AI-created content? Can consent be meaningfully given to machines? And does predictive technology amplify societal biases?

The risks of AI—job displacement, misinformation, surveillance—are akin to the symbolic storms of magical realism: always present, always shaping the backdrop. Governance, then, must walk a fine line between stifling innovation and allowing unchecked technological enchantment.

Kurbalija warns against ‘black magic’ policy manipulation cloaked in humanitarian language and urges regulators to focus on real-world impacts while resisting the temptation of speculative fears. Ultimately, AI isn’t science fiction—it’s magical realism in motion.

As we build policies and frameworks to govern it, we must ensure this magic serves humanity, rather than distort our sense of what is real, ethical, and just. In this unfolding story, the challenge is not only technological, but deeply human.


Path forward for global digital cooperation debated at IGF 2025

At the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, policymakers, civil society, and digital stakeholders gathered to chart the future of global internet governance through the WSIS+20 review. With a high-level UN General Assembly meeting scheduled for December, co-facilitators from Kenya and Albania emphasised the need to update the World Summit on the Information Society (WSIS) framework while preserving its original, people-centred vision.

They underscored the importance of inclusive consultations, highlighting a new multistakeholder sounding board and upcoming joint sessions to enhance dialogue between governments and broader communities. The conversation revolved around the evolving digital landscape and how WSIS can adapt to emerging technologies like AI, data governance, and digital public infrastructure.

While some participants favoured WSIS as the primary global framework, others advocated for closer synergy with the Global Digital Compact (GDC), stressing the importance of coordination to avoid institutional duplication. Despite varied views, there was widespread consensus that the existing WSIS action lines, being technology-neutral, can remain relevant by accommodating new innovations.

Speakers from the government, private sector, and civil society reiterated the call to permanently secure the IGF’s mandate, praising its unique ability to foster open, inclusive dialogue without the pressure of binding negotiations. They pointed to IGF’s historical success in boosting internet connectivity and called for more tangible outputs to influence policymaking.

National-level participation, especially from developing countries, women, youth, and marginalised communities, was identified as crucial for meaningful engagement.

The session ended on a hopeful note, with participants expressing a shared commitment to a more inclusive and equitable digital future. As the December deadline looms, the global community faces the task of turning shared principles into concrete action, ensuring digital governance mechanisms remain cooperative, adaptable, and genuinely representative of all voices.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

IGF leadership panel explores future of digital governance

As the Internet Governance Forum (IGF) prepares to mark its 20th anniversary, members of the IGF Leadership Panel gathered in Norway to present a strategic vision for strengthening the forum’s institutional role and ensuring greater policy impact.

The session explored proposals to make the IGF a permanent UN institution, improve its output relevance for policymakers, and enhance its role in implementing outcomes from WSIS+20 and the Global Digital Compact.

While the tone remained largely optimistic, Nobel Peace Prize laureate Maria Ressa voiced a more urgent appeal, calling for concrete action in a rapidly deteriorating information ecosystem.

Speakers emphasized the need for a permanent and better-resourced IGF. Vint Cerf, Chair of the Leadership Panel, reflected on the evolution of internet governance, arguing that ‘we must maintain enthusiasm for computing’s positive potential whilst addressing problems’.

He acknowledged growing threats like AI-driven disruption and information pollution, which risk undermining democratic governance and economic fairness online. Maria Fernanda Garza and Lise Fuhr echoed the call, urging for the IGF to be integrated into the UN structure with sustainable funding and measurable performance metrics. Fuhr commended Norway’s effort to bring 16 ministers from the Global South to the meeting, framing it as a model for future inclusive engagement.

A significant focus was placed on integrating IGF outcomes with the WSIS+20 and Global Digital Compact processes. Amandeep Singh Gill noted that these two tracks are ‘complementary’ and that existing WSIS architecture should be leveraged to avoid duplication. He emphasized that budget constraints limit the creation of new bodies, making it imperative for the IGF to serve as the core platform for implementation and monitoring.

Garza compared the IGF’s role to a ‘canary in the coal mine’ for digital policy, urging better coordination with National and Regional Initiatives (NRIs) to translate global goals into local impact.

Participants discussed the persistent challenge of translating IGF discussions into actionable outputs. Carol Roach emphasized the need to identify target audiences and tailor outputs using formats such as executive briefs, toolkits, and videos. Lan Xue added, 'to be policy-relevant, the IGF must evolve from a space of dialogue to a platform of strategic translation'.

He proposed launching policy trackers, aligning outputs with global policy calendars, and appointing liaison officers to bridge the gap between IGF and forums such as the G20, UNGA, and ITU.

Inclusivity emerged as another critical theme. Panellists underscored the importance of engaging underrepresented regions through financial support, capacity-building, and education. Fuhr highlighted the value of internet summer schools and grassroots NRIs, while Gill stressed that digital sovereignty is now a key concern in the Global South. ‘The demand has shifted’, he said, ‘from content consumption to content creation’.

Maria Ressa closed the session with an impassioned call for immediate action. She warned that the current information environment contributes to global conflict and democratic erosion, stating that ‘without facts, no truth, no trust. Without trust, you cannot govern’. Citing recent wars and digital manipulation, she urged the IGF community to move from reflection to implementation. ‘Online violence is real-world violence’, she said. ‘We’ve talked enough. Now is the time to act.’

Despite some differences in vision, the session revealed a strong consensus on key issues: the need for institutional evolution, enhanced funding, better policy translation, and broader inclusion. Bertrand de la Chapelle, however, cautioned against making the IGF a conventional UN body, instead proposing a ‘constitutional moment’ in 2026 to consider more flexible institutional reforms.

The discussion demonstrated that while the IGF remains a trusted forum for inclusive dialogue, its long-term relevance depends on its ability to produce concrete outcomes and adapt to a volatile digital environment. As Vint Cerf reminded participants in closing, ‘this is an opportunity to make this a better environment than it already is and to contribute more to our global digital society’.

Tower of Babel reimagined: IGF 2025 experiment highlights language barriers in internet governance

At the 2025 Internet Governance Forum in Lillestrøm, Norway, an unconventional session titled ‘Tower of Babel Chaos’ challenged the norm of using English as the default language in global digital policy discussions. Moderator Virginia Paque, Senior Policy Editor of Diplo and the only native English speaker among the participants, suspended English as the session’s required language and encouraged attendees to define internet governance and interact in their own native tongues.

That move sparked both confusion and revelation as participants experienced firsthand the communicative fragmentation caused by linguistic diversity. The experiment led to the spontaneous clustering of speakers into language groups and highlighted the isolation of individuals whose languages—such as Maltese, Samoan, Cape Verdean Creole, and Chichewa—had no other representation.

Participants reported feelings ranging from curiosity to frustration, underlining the practical importance of shared language in international settings. Yet, some also discovered unexpected bridges through linguistic overlap or body language, hinting at the potential for cross-cultural communication even in chaotic conditions.

AI emerged as a potential remedy. Ken Huang from Lingo AI noted that while AI can process thousands of languages, its effectiveness is currently limited by a lack of diverse datasets, making it default to English and other dominant tongues. Others emphasised that while technology offers hope—like real-time translation tools—it cannot guarantee equitable inclusion for all linguistic groups, particularly under-resourced languages.

The session ultimately balanced idealism with pragmatism. While many acknowledged the convenience of English as a global lingua franca, others argued for providing multiple language options with simultaneous interpretation, as practised by institutions like the UN.

The discussion underscored the political, cultural, and technological complexities of multilingualism in internet governance, and concluded with a shared recognition: fostering a more inclusive digital dialogue means embracing both innovation and linguistic diversity.
