Pennsylvania criminalises malicious deepfakes under new digital forgery law

Governor Shapiro has signed a new statute into law, strengthening Pennsylvania’s position on AI-generated content by defining deceptive deepfakes as digital forgery.

The law criminalises creating and distributing such content, particularly when it is used to deceive, marking a proactive response to growing online threats.

The legislation differentiates between uses of deepfakes: non-consensual impersonation will result in misdemeanour charges, while cases involving fraudulent intent, such as financial scams or political manipulation, are now classified as third-degree felonies.

Support for the bill was bipartisan and overwhelming in the state legislature. Its sponsors emphasised that while it deters harmful digital impersonation, it also carefully safeguards legitimate speech, including parody, satire, and artistic expression.

With Pennsylvania now among the growing number of states implementing deepfake regulations, this development aligns with a national trend to regulate AI-generated digital forgeries. It complements earlier state-level laws and federal initiatives to curb AI’s misuse without stifling innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Appreciation Day highlights progress and growing concerns

AI is marking another milestone as experts worldwide reflect on its rapid rise during AI Appreciation Day. From reshaping business workflows to transforming customer experiences, AI’s presence is expanding — but so are concerns over its long-term implications.

Industry leaders point to AI’s growing role across sectors. Patrick Harrington from MetaRouter highlights how control over first-party data is now seen as more important than simply processing large datasets.

Vall Herard of Saifr adds that successful AI implementations depend on combining curated data with human oversight rather than relying purely on machine-driven systems.

Meanwhile, Paula Felstead from HBX Group believes AI could significantly enhance travel experiences, though scaling it across entire organisations remains a challenge.

Voice AI is changing industries that depend on customer interaction, according to Natalie Rutgers from Deepgram. Instead of complex interfaces, voice technology is improving communication in restaurants, hospitals, and banks.

At the same time, experts like Ivan Novikov from Wallarm stress the importance of securing AI systems and the APIs connecting them, as these form the backbone of modern AI services.

While some celebrate AI’s advances, others raise caution. SentinelOne’s Ezzeldin Hussein envisions AI becoming a trusted partner through responsible development rather than unchecked growth.

Naomi Buckwalter from Contrast Security warns that AI-generated code could open security gaps rather than fully replace human engineering, while Geoff Burke from Object First notes that AI-powered cyberattacks are becoming inevitable for businesses unable to keep pace with evolving threats.

OpenAI economist shares four key skills for kids in AI era

As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to help them adapt and thrive.

Instead of relying solely on technology, he believes critical thinking, adaptability, emotional intelligence, and financial numeracy will remain essential.

Chatterji highlighted these skills during an episode of the OpenAI podcast, saying critical thinking helps children spot problems rather than follow instructions. Given constant changes in AI, climate, and geopolitics, he stressed adaptability as another priority.

Rather than expecting children to master coding alone, Chatterji argues that emotional intelligence will make humans valuable partners alongside AI.

The fourth skill he emphasises is financial numeracy, alongside practical habits such as doing maths without a calculator and maintaining writing skills even when dictation software is available. Instead of predicting specific future job titles, Chatterji believes focusing on these abilities equips children for any outcome.

His approach reflects a broader trend among tech leaders, with others like Alexis Ohanian and Sam Altman also promoting AI literacy while valuing traditional skills such as reading, writing, and arithmetic.

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Europe to launch Eurosky to regain digital control

Europe is taking steps to assert its digital independence by launching the Eurosky initiative, a government-backed project to reduce reliance on US tech giants.

Eurosky seeks to build European infrastructure for social media platforms and promote digital sovereignty. The goal is to ensure that the continent’s digital space is governed by European laws, values, and rules, rather than being subject to the influence of foreign companies or governments.

To support this goal, Eurosky plans to implement a decentralised content moderation system, modelled after the approach used by the Bluesky network.

Moderation, essential for removing harmful or illegal content like child exploitation or stolen data, remains a significant obstacle for new platforms. Eurosky offers a non-profit moderation service to help emerging social media providers handle this task, thus lowering the barriers to entering the market.

The project enjoys strong public and political backing. Polls show that majorities in France, Germany, and Spain prefer Europe-based platforms, with only 5% favouring US providers.

Eurosky also has support from four European governments, though their identities remain undisclosed. This momentum aligns with a broader shift in user behaviour, as Europeans increasingly turn to local tech services amid privacy and sovereignty concerns.

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Pentagon awards AI contracts to xAI and others after Grok controversy

The US Department of Defence has awarded contracts to four major AI firms, including Elon Musk’s xAI, as part of a strategy to boost military AI capabilities.

Each contract is valued at up to $200 million and involves developing advanced AI workflows for critical national security tasks.

Alongside xAI, Anthropic, Google, and OpenAI have also secured contracts. Pentagon officials said the deals aim to integrate commercial AI solutions into intelligence, business, and defence operations instead of relying solely on internal systems.

Chief Digital and AI Officer Doug Matty said these technologies will help maintain the US’s strategic edge over rivals.

The decision comes as Musk’s AI company faces controversy after its Grok chatbot was reported to have published offensive content on social media. Critics, including Democratic lawmakers, have raised ethical concerns about awarding national security contracts to a company under public scrutiny.

xAI insists its Grok for Government platform will help speed up government services and scientific innovation.

Despite political tensions and Musk’s past financial support for Donald Trump’s campaign, the Pentagon has formalised its relationship with xAI and other AI leaders instead of excluding them due to reputational risks.

Children turn to AI chatbots instead of real friends

A new report warns that many children are turning to AI chatbots for conversation in place of real friendships and human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms, displacing older propaganda methods.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to disrupt public debate through election intimidation, discrediting individuals, and creating panic instead of open discussion. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.

Stanford study flags dangers of using AI as mental health therapists

A new Stanford University study warns that therapy chatbots powered by large language models (LLMs) may pose serious risks to users, including reinforcing harmful stigmas and offering unsafe responses. Due to be presented at the upcoming ACM Conference on Fairness, Accountability, and Transparency, the study analysed five popular AI chatbots marketed for therapeutic support, evaluating them against core guidelines for assessing human therapists.

The research team conducted two experiments, one to detect bias and stigma, and another to assess how chatbots respond to real-world mental health issues. Findings revealed that bots were more likely to stigmatise people with conditions like schizophrenia and alcohol dependence compared to those with depression.

Shockingly, newer and larger AI models showed no improvement in reducing this bias. In more serious cases, such as suicidal ideation or delusional thinking, some bots failed to react appropriately or even encouraged unsafe behaviour.

Lead author Jared Moore and senior researcher Nick Haber emphasised that simply adding more training data isn’t enough to solve these issues. In one example, a bot replied to a user hinting at suicidal thoughts by listing bridge heights, rather than recognising the red flag and providing support. The researchers argue that these shortcomings highlight the gap between AI’s current capabilities and the sensitive demands of mental health care.

Despite these dangers, the team doesn’t entirely dismiss the use of AI in therapy. If used thoughtfully, they suggest that LLMs could still be valuable tools for non-clinical tasks like journaling support, billing, or therapist training. As Haber put it, ‘LLMs potentially have a compelling future in therapy, but we need to think critically about precisely what this role should be.’
