Former OpenAI staff challenge company’s shift to for-profit model

A group of former OpenAI employees, supported by Nobel laureates and AI experts, has urged the attorneys general of California and Delaware to block the company’s proposed transition from a nonprofit to a for-profit structure.

They argue that such a shift could compromise OpenAI’s founding mission to develop artificial general intelligence (AGI) that benefits all of humanity, prioritising profit over public safety and accountability not just in the US but globally.

The coalition, including notable figures like economists Oliver Hart and Joseph Stiglitz, and AI pioneers Geoffrey Hinton and Stuart Russell, expressed concerns that the restructuring would reduce nonprofit oversight and increase investor influence.

They fear this change could lead to diminished ethical safeguards, especially as OpenAI advances toward creating AGI. OpenAI responded by stating that any structural changes would aim to ensure broader public benefit from AI advancements.

The company plans to adopt a public benefit corporation model while maintaining a nonprofit arm to uphold its mission. The final decision rests with the state authorities, who are reviewing the proposed restructuring.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

Under its latest deal, with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with more than 20 news publishers and 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

The partnerships also help OpenAI counter criticism over copyright infringement. The company has faced legal challenges, most notably from The New York Times, over claims that its chatbots were trained on millions of articles without permission.

OpenAI sought to have those claims dismissed, but a US district court allowed the case to proceed, intensifying scrutiny of AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Films made with AI are now eligible for Oscars

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced with AI to refine his Hungarian accent, while Emilia Pérez, an award-winning musical, used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to purely traditional methods for filmmakers, but their rise has raised industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russian hackers target NGOs with fake video calls

Hackers linked to Russia are refining their techniques to infiltrate Microsoft 365 accounts, according to cybersecurity firm Volexity.

Their latest strategy targets non-governmental organisations (NGOs) associated with Ukraine by exploiting OAuth, a protocol that lets users grant applications access to their accounts without sharing passwords.

Victims are lured into fake video calls through apps like Signal or WhatsApp and tricked into handing over OAuth codes, which attackers then use to access Microsoft 365 environments.

The campaign, first detected in March, involved messages claiming to come from European security officials proposing meetings with political representatives. Instead of legitimate video links, these messages directed recipients to OAuth code generators.

Once a code was shared, attackers could gain entry into accounts containing sensitive data. Staff at human rights organisations were especially targeted due to their work on Ukraine-related issues.
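
To see why a shared code is so damaging, consider the standard OAuth 2.0 authorisation-code exchange, sketched below in Python (using the third-party requests library). The endpoint and parameters follow Microsoft’s documented identity platform, but the client ID, redirect URI, and code are placeholders; this is an illustration of the protocol step, not a reconstruction of the specific campaign.

```python
# Illustrative only: the standard OAuth 2.0 authorization-code exchange
# against Microsoft's v2.0 token endpoint. All values are placeholders.
import requests

TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def redeem_code(code: str, client_id: str, redirect_uri: str) -> dict:
    """Exchange an authorisation code for tokens.

    Whoever holds a valid, unexpired code can perform this step --
    which is why a code pasted into a chat is as good as a password.
    """
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,                # the value victims were tricked into sharing
        "client_id": client_id,      # ID of a legitimate application
        "redirect_uri": redirect_uri,
        "scope": "https://graph.microsoft.com/.default",
    })
    resp.raise_for_status()
    return resp.json()  # includes an access_token for Microsoft 365 resources
```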

Volexity attributed the scheme to two threat actors, UTA0352 and UTA0355, though it did not directly connect them to any known Russian advanced persistent threat groups.

A previous attack from the same actors abused Microsoft Device Code Authentication, a sign-in flow normally reserved for devices without browsers or keyboards, such as smart TVs, instead of traditional login methods. Both campaigns show a growing sophistication in social engineering tactics.

Given the widespread use of Microsoft 365 tools like Outlook and Teams, experts urge organisations to heighten awareness among staff.

Rather than trusting unsolicited messages on encrypted apps, users should remain cautious when prompted to click links or enter authentication codes, as these could be cleverly disguised attempts to breach secure systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adyen services disrupted by cyber blitz

Adyen fell victim to three coordinated distributed denial-of-service (DDoS) attacks on Monday evening, severely disrupting debit card and online payments.

The first surge of malicious traffic struck at around 7 pm, with follow‑up waves at 8:35 pm and 11:35 pm, overloading Adyen’s servers and hindering transactions.

Customers reported failures at checkouts in shops, restaurants and on e‑commerce sites until Adyen announced at 3:40 am that all its services were back online.

The firm attributed the interruptions to ‘limited availability’ caused by the data floods and reassured merchants that normal operations had resumed.

Handling nearly €1.3 trillion in payments last year, Adyen serves high‑profile clients such as Meta, Uber, eBay, HelloFresh and Spotify.

While the precise economic impact of the outages remains unclear, the episode highlights the vulnerability of global payment networks to cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram CEO says privacy matters more than market share

Pavel Durov, CEO of Telegram, has vowed that the company would rather exit markets like France than implement encryption backdoors.

Lawmakers in the European Union are increasingly proposing backdoors that would allow authorities to bypass encryption and gain access to private messages.

Durov’s statement comes as Telegram faces mounting pressure to comply with regulations that could undermine digital privacy.

In a recent post, Durov emphasised that backdoors, while intended for law enforcement, could be exploited by hackers and foreign agents. He warned that such measures would not only put user data at risk but also drive criminals to use lesser-known apps to avoid detection.

Telegram has long been committed to safeguarding its users’ privacy. The company says it has never disclosed the contents of private messages to authorities, even when complying with court orders in certain jurisdictions.

Durov’s comments come at a time when the EU is weighing legislation that could force apps to implement backdoors for police access. Privacy advocates have won some battles, but the fight for digital privacy continues.

Durov urges privacy advocates to resist these changes. He stresses that losing encryption would be a major blow to personal safety and freedom.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google spoofed in sophisticated phishing attack

A sophisticated phishing attack recently targeted Google users, exploiting a well-known email authentication method to bypass security measures.

The attackers sent emails that appeared to come from a legitimate Google address and claimed the recipient needed to comply with a subpoena.

The emails contained a link to a fake legal support page hosted on Google Sites, which prompted users to log in.

What made this phishing attempt particularly dangerous was that it successfully passed both DMARC and DKIM email authentication checks, making it appear entirely genuine to recipients.
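
It helps to be precise about what those checks prove. DKIM verifies that the message was signed by the named domain, and DMARC verifies that the visible From domain aligns with that signature; neither says anything about the sender’s intent. A minimal sketch, using only Python’s standard library and invented header values, shows how little a ‘pass’ actually asserts:

```python
# Standard-library sketch: what DKIM/DMARC results do and do not tell you.
# All domains and addresses below are invented (.invalid is reserved).
from email import message_from_string

RAW_EMAIL = """\
Authentication-Results: mx.example.invalid;
 dkim=pass header.d=google.invalid;
 dmarc=pass (p=REJECT) header.from=google.invalid
From: Google Legal Support <no-reply@google.invalid>
Subject: Notice of subpoena

Please review the attached subpoena at the link below.
"""

msg = message_from_string(RAW_EMAIL)
results = msg.get("Authentication-Results", "")

# Both checks pass, yet neither vouches for the message's content:
# DKIM only proves which domain signed the message, and DMARC only
# proves the From domain aligns with that signature.
print("DKIM passed: ", "dkim=pass" in results)
print("DMARC passed:", "dmarc=pass" in results)
```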

In another cyber-related development, Microsoft issued a warning regarding the use of Node.js in distributing malware. Attackers have been using the JavaScript runtime environment to deploy malware through scripts and executables, particularly targeting cryptocurrency traders via malvertising campaigns.

The new technique involves executing JavaScript directly from the command line, making it harder for traditional security tools to detect.
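
As a rough illustration of the pattern defenders look for, the sketch below (Python with the third-party psutil library, assumed installed) flags processes running inline JavaScript via node’s -e/--eval flags, where the script never touches disk. Real endpoint-detection tooling is of course far more sophisticated.

```python
# Defensive sketch: flag inline Node.js execution (script supplied on the
# command line rather than from a file on disk). Requires psutil.
import psutil

def find_inline_node() -> list[list[str]]:
    hits = []
    for proc in psutil.process_iter(attrs=["name", "cmdline"]):
        cmd = proc.info["cmdline"] or []
        is_node = proc.info["name"] in ("node", "node.exe")
        if is_node and any(flag in cmd for flag in ("-e", "--eval")):
            hits.append(cmd)  # the JavaScript lives only in the command line
    return hits

for cmdline in find_inline_node():
    print("inline Node.js execution:", " ".join(cmdline))
```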

Meanwhile, the US has witnessed a significant change in its disinformation-fighting efforts.

The State Department has closed its Counter Foreign Information Manipulation and Interference group, previously known as the Global Engagement Center, after accusations that it was overreaching in its censorship activities.

The closure, led by Secretary of State Marco Rubio, has sparked criticism, with some seeing it as a victory for foreign powers like Russia and China.

Finally, gig workers face new challenges as the Tech Transparency Project revealed that Facebook groups are being used to trade fake gig worker accounts for platforms like Uber and Lyft.

Sellers offer access to verified accounts, bypassing safety checks and putting passengers and customers at risk. Despite reports to Meta, many of these groups remain active, with the social media giant’s automated systems failing to curb the activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT search grows rapidly in Europe

ChatGPT search, the web-accessing feature within OpenAI’s chatbot, has seen rapid growth across Europe, attracting an average of 41.3 million monthly active users in the six months leading up to March 31.

It marks a sharp rise from 11.2 million in the previous six-month period, according to a regulatory filing by OpenAI Ireland Limited.

The service must now report this data under the EU’s Digital Services Act (DSA), which defines monthly recipients as users who actively view or interact with the platform.

Should usage cross 45 million, ChatGPT search could be classified as a ‘very large’ online platform and face stricter rules, including transparency obligations, user opt-outs from personalised recommendations, and regular audits.

Failure to follow DSA regulations could lead to serious penalties, up to 6% of OpenAI’s global revenue, or even a temporary ban in the EU for ongoing violations. The law aims to ensure online platforms operate more responsibly and with better oversight in the digital space.
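
The arithmetic behind those thresholds is simple enough to sketch. The Python snippet below uses the figures from the filing; the revenue value is a placeholder for illustration, not OpenAI’s actual revenue:

```python
# Back-of-envelope DSA maths using the figures reported above.
VLOP_THRESHOLD = 45_000_000    # 'very large online platform' cut-off

reported_mau = 41_300_000      # average monthly users, six months to 31 March
previous_mau = 11_200_000      # prior six-month period

print(f"growth: {reported_mau / previous_mau:.1f}x")         # ~3.7x
print(f"VLOP yet? {reported_mau >= VLOP_THRESHOLD}")          # False
print(f"headroom: {VLOP_THRESHOLD - reported_mau:,} users")   # 3,700,000

hypothetical_revenue_eur = 5_000_000_000     # placeholder, not a real figure
print(f"maximum fine at 6%: €{0.06 * hypothetical_revenue_eur:,.0f}")
```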

Despite gaining ground, ChatGPT search still lags far behind Google, which handles hundreds of times more queries.

Studies have also raised concerns about the accuracy of AI search tools, with ChatGPT found to misidentify a majority of news articles and occasionally misrepresent licensed content from publishers.

For now, these AI tools may still need improvement before they can serve as reliable alternatives to traditional search.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.
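
Meta has not published how its age-estimation model works, so the following is a purely hypothetical sketch, with invented signals and weights, of how weak behavioural cues might be combined into a conservative flag for review:

```python
# Purely hypothetical: invented signals and weights, for illustration only.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int
    teen_network_ratio: float   # share of connections popular with teens
    birthday_msg_mentions: int  # e.g. "happy 14th!" comments received
    account_age_days: int

def likely_underage(sig: AccountSignals) -> bool:
    """Combine weak signals into a conservative flag for review."""
    score = 0.0
    if sig.teen_network_ratio > 0.6:
        score += 0.4
    if sig.birthday_msg_mentions > 0:
        score += 0.4
    if sig.stated_age >= 18 and sig.account_age_days < 365:
        score += 0.2
    return score >= 0.6   # flagged accounts default into Teen Accounts

print(likely_underage(AccountSignals(19, 0.8, 1, 120)))  # True
```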

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake banking apps leave sellers thousands out of pocket

Scammers are using fake mobile banking apps to trick people into handing over valuable items without receiving any payment.

These apps, which convincingly mimic legitimate platforms, display fake ‘successful payment’ screens during face-to-face sales, allowing fraudsters to walk away with goods while the money never arrives.

Victims like Anthony Rudd and John Reddock have lost thousands after being targeted while selling items through social media marketplaces. Mr Rudd handed over £1,000 worth of tools from his Salisbury workshop, only to realise the payment notification was fake.

Mr Reddock, from the UK, lost a £2,000 gold bracelet he had hoped to sell to fund a holiday for his children.

BBC West Investigations found that some of these fake apps, previously removed from the Google Play store, are now being downloaded directly from the internet onto Android phones.

The Chartered Trading Standards Institute described this scam as an emerging threat, warning that in-person fraud is growing more complex instead of fading away.

With police often unable to track down suspects, small business owners like Sebastian Liberek have been left feeling helpless after being targeted repeatedly.

He has lost hundreds of pounds to fake transfers and believes scammers will continue striking, while enforcement remains limited and platforms fail to do enough to stop the spread of fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!