ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems obtained access to the formatting of official documents, amid accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.

For more information on these topics, visit diplomacy.edu.

Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after his AI career scare, screenwriter Ed Bennett-Coles and songwriter Jamie Hartman have developed ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI content to online meat delivery: efficient but soulless. Human artistry, by contrast, resembles a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.


FBI and INTERPOL investigate Oracle Health data breach

Oracle Health has reportedly suffered a data breach that compromised sensitive patient information stored by American hospitals.

The cyberattack, discovered in February 2025, involved threat actors using stolen customer credentials to access an old Cerner server that had not yet migrated to the Oracle Cloud. Oracle acquired healthcare tech company Cerner in 2022 for $28.3 billion.

In notifications sent to affected customers, Oracle acknowledged that data had been downloaded by unauthorised users. The FBI is said to be investigating the incident and exploring whether ransom demands are involved. Oracle has yet to publicly comment on the breach.

The news comes amid growing cybersecurity concerns. A recent report from Horizon3.ai revealed that over half of IT professionals delay critical software patches, leaving organisations vulnerable. Meanwhile, OpenAI has boosted its bug bounty rewards to encourage more proactive security research.

In a broader crackdown on cybercrime, INTERPOL recently arrested over 300 suspects in seven African countries for online scams, seizing devices, properties, and other assets linked to more than 5,000 victims.


OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.


Google blends AI mode with Lens

Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.

Instead of typing queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.

The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, ‘What is this, and how do I use it?’, receiving not only a helpful explanation but links to buy it and even video demonstrations.

Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.

Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help identify a misplaced phone charger, or recommend the order to read your books.

Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.

Although other platforms like ChatGPT also support image recognition, Google’s strength lies in its decades of search data and visual indexing. Currently, the feature is accessible to Google One AI Premium subscribers or those enrolled in Search Labs via the Google mobile app.


Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach them via Messenger or see their stories, and tagging and mentions are also limited.

These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also adding reminders to limit screen time, prompting teens to log off after one hour, and an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have been migrated to Teen Accounts.

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety continues to rise as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.


New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.


Trump administration pushes for pro-AI shift in US federal agencies

The White House announced on Monday a shift in how US federal agencies will approach AI, prioritising innovation over the stricter regulatory framework previously established under President Biden. 

A new memorandum from the Office of Management and Budget instructs agencies to appoint chief AI officers and craft policies to expand the use of AI technologies across government operations.

This pivot includes repealing two Biden-era directives emphasising transparency and safeguards against AI misuse. 

The earlier rules required federal agencies to implement protective measures for civil rights and limit unchecked acquisition of AI tools. 

These protections have now been replaced with a call for a more ‘forward-leaning and pro-innovation’ stance, removing what the current administration views as excessive bureaucratic constraints.

Federal agencies are now expected to develop AI strategies within six months. These plans must identify barriers to responsible AI implementation and improve how the technology is used enterprise-wide. 

The administration also encouraged the development of specific policies for generative AI, emphasising maximising the use of American-made solutions and enhancing interoperability between systems.

The policy change is part of President Trump’s broader rollback of previous AI governance, including his earlier revocation of a 2023 executive order signed by Biden that required developers to disclose sensitive training data. 

The new framework aims to streamline AI procurement processes and eliminate what the administration labels unnecessary reporting burdens while still maintaining basic privacy protections.

Federal agencies have already begun integrating AI into their operations. The Federal Aviation Administration, for example, has applied machine learning to analyse safety reports and identify emerging aviation risks. 

Under the new guidelines, such initiatives are expected to accelerate, signalling a broader federal embrace of AI across sectors.


Russia fines Telegram over extremist content

A Moscow court has fined the messaging platform Telegram 7 million roubles (approximately $80,000) for failing to remove content allegedly promoting terrorist acts and inciting anti-government protests, according to the Russian state news agency TASS.

The court ruled that Telegram did not comply with legal obligations to take down materials deemed extremist, including calls to sabotage railway systems in support of Ukrainian forces and to overthrow the Russian government.

The judgement cited specific Telegram channels accused of distributing such content. Authorities argue that these channels played a role in encouraging public unrest and potentially supporting hostile actions against the Russian state.

The decision adds to the long-standing tension between Russia’s media watchdogs and Telegram, which remains one of the most widely used messaging platforms across Russia and neighbouring countries.

Telegram has not issued a statement in response to the fine, and it is unclear whether the company plans to challenge the court’s ruling.

The platform was founded by Russian-born entrepreneur Pavel Durov and is currently headquartered in Dubai, boasting close to a billion users globally. 

Telegram’s decentralised nature and encrypted messaging features have made it popular among users seeking privacy, but it has also drawn criticism from governments citing national security concerns.

Durov himself returned to Dubai in March after months in France following his 2024 arrest over accusations that Telegram was used for fraud, money laundering, and the circulation of illegal content.

Although he has denied any wrongdoing, the incident has further strained the company’s relationship with authorities in Russia.

This latest legal action reflects Russia’s ongoing crackdown on digital platforms accused of facilitating dissent or undermining state control.

With geopolitical tensions still high, especially surrounding the conflict in Ukraine, platforms like Telegram face increasing scrutiny and legal pressure in multiple jurisdictions.

Senator Warner warns TikTok deal deadline extension breaks the law

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, has criticised President Donald Trump’s recent move to extend the deadline for ByteDance to divest TikTok’s US operations. 

Warner argued that the 75-day extension violates the law passed in 2024, which mandates a complete separation between TikTok’s American entity and its Chinese parent company due to national security concerns.

The deal currently under consideration would allow ByteDance to retain a significant equity stake and maintain an operational role in the new US-based company. 

According to Warner, this arrangement fails to satisfy the legal requirement of eliminating Chinese influence over TikTok’s US operations. 

He emphasised that any legitimate divestiture must include a complete technological and organisational break, preventing ByteDance from accessing user data or source code.

The White House and TikTok have not issued statements in response to Warner’s criticism. The Trump administration, now in its second term, has said it is in contact with four groups regarding a potential TikTok acquisition.

However, no agreement has been finalised, and China has yet to publicly support a sale of TikTok’s US assets, one of the primary obstacles to completing the deal.

Under the 2024 law, ByteDance was required to divest TikTok’s US business by 19 January or face a ban.

Trump, who retook office on 20 January, chose not to enforce the ban immediately and instead signed an executive order extending the deadline. 

The Justice Department further complicated the issue when it told Apple and Google that the law would not be enforced, allowing the app to remain available for download.

As the deadline extension continues to stir controversy, lawmakers like Warner insist that national security and legislative integrity are at stake.
