OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

Under a new deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has now secured similar partnerships with more than 20 news publishers spanning 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

The partnerships also help OpenAI counter criticism over copyright infringement; the company has faced legal challenges, most prominently from The New York Times, over claims that its chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Films made with AI are now eligible for Oscars

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced with AI used to refine his Hungarian accent, while the award-winning musical Emilia Pérez used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to traditional methods, though their rise has raised industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes but cannot deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.


OpenAI eyes Chrome in bid to boost ChatGPT

OpenAI has expressed interest in acquiring Google’s Chrome browser if it were to be made available, viewing it as a potential boost for its AI platform, ChatGPT.

The remarks, made by Nick Turley, head of product for ChatGPT, surfaced during the US Department of Justice’s antitrust trial against Google. The case follows a 2024 ruling that found Google had maintained an illegal monopoly in online search and advertising.

Although Google has shown no intention to sell Chrome and plans to appeal, the DoJ has suggested the move as a remedy to restore competition.

Turley disclosed that OpenAI previously approached Google to use its search technology within ChatGPT, after facing limitations with Microsoft Bing, its current provider.

An email from OpenAI presented in court showed the company proposed using multiple partners, including Google’s search API, to improve the chatbot’s performance. Google, however, declined the request, citing fears of empowering rivals.

Turley confirmed there is currently no partnership with Google and noted that ChatGPT remains years away from answering most queries using its own search system.

The testimony also highlighted OpenAI’s distribution challenges. Turley voiced concerns over being shut out of key access points controlled by major tech firms, such as browsers and app stores.

While OpenAI secured integration with Apple’s iPhones, it has struggled to achieve similar placements on Android devices. Turley argued that forcing Google to share search data with competitors would speed up ChatGPT’s development and improve the user experience.


Russian hackers target NGOs with fake video calls

Hackers linked to Russia are refining their techniques to infiltrate Microsoft 365 accounts, according to cybersecurity firm Volexity.

Their latest strategy targets non-governmental organisations (NGOs) associated with Ukraine by exploiting OAuth, a protocol that lets applications access accounts without handling passwords.

Victims are lured into fake video calls through apps like Signal or WhatsApp and tricked into handing over OAuth codes, which attackers then use to access Microsoft 365 environments.
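The mechanics can be illustrated with a toy simulation (all names, codes, and the token format below are hypothetical; no real identity provider is involved): an OAuth authorisation code is a one-time credential, so whoever redeems it first gets the access token.

```python
# Toy simulation of why a shared OAuth authorisation code is dangerous
# (hypothetical names and codes; no real identity provider is involved).
# An authorisation code is a one-time credential: whoever redeems it
# first receives the access token.
ISSUED_CODES = {"ABC123": "staff@ngo.example"}  # code -> account that requested it

def redeem_code(code: str):
    """Simulated token endpoint: exchange a one-time code for an access token."""
    account = ISSUED_CODES.pop(code, None)  # codes are single-use
    if account is None:
        return None  # unknown, expired, or already-redeemed code
    return f"access-token-for-{account}"

# The victim reads the code off the attacker's page and shares it in chat;
# the attacker redeems it before the victim's own client can.
print(redeem_code("ABC123"))  # access-token-for-staff@ngo.example
print(redeem_code("ABC123"))  # None: the code has already been spent
```

Once the attacker holds the resulting token, no password or second factor is needed, which is what makes this lure so effective.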

The campaign, first detected in March, involved messages claiming to come from European security officials proposing meetings with political representatives. Instead of legitimate video links, these messages directed recipients to OAuth code generators.

Once a code was shared, attackers could gain entry into accounts containing sensitive data. Staff at human rights organisations were especially targeted due to their work on Ukraine-related issues.

Volexity attributed the scheme to two threat actors, UTA0352 and UTA0355, though it did not directly connect them to any known Russian advanced persistent threat groups.

A previous attack by the same actors abused Microsoft Device Code Authentication, a sign-in flow normally reserved for keyboard-less devices such as smart TVs, rather than traditional login methods. Both campaigns show growing sophistication in social engineering tactics.

Given the widespread use of Microsoft 365 tools like Outlook and Teams, experts urge organisations to heighten awareness among staff.

Rather than trusting unsolicited messages on encrypted apps, users should remain cautious when prompted to click links or enter authentication codes, as these could be cleverly disguised attempts to breach secure systems.


Google spoofed in sophisticated phishing attack

A sophisticated phishing attack recently targeted Google users, exploiting a well-known email authentication method to bypass security measures.

The attackers sent emails appearing to be from Google’s legitimate address, no-reply@accounts.google.com, and claimed the recipient needed to comply with a subpoena.

The emails contained a link to a Google Sites page that mimicked a legal support portal and prompted users to log in.

What made this phishing attempt particularly dangerous was that it successfully passed both DMARC and DKIM email authentication checks, making it appear entirely genuine to recipients.
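For context, receiving mail servers record DKIM and DMARC verdicts in an Authentication-Results header, which clients and spam filters then trust. A minimal sketch of reading those verdicts (the header value below is a made-up example, not from the actual incident) shows why a message that genuinely passes both checks sails through:

```python
import re

def parse_auth_results(header: str) -> dict:
    """Extract pass/fail verdicts for each authentication method from an
    Authentication-Results header (simplified; real headers carry more detail)."""
    results = {}
    for method in ("dkim", "spf", "dmarc"):
        m = re.search(rf"\b{method}=(\w+)", header)
        if m:
            results[method] = m.group(1)
    return results

# Made-up example resembling what the spoofed mail would carry:
header = ("mx.google.com; dkim=pass header.i=@accounts.google.com; "
          "spf=pass smtp.mailfrom=accounts.google.com; dmarc=pass")
print(parse_auth_results(header))  # {'dkim': 'pass', 'spf': 'pass', 'dmarc': 'pass'}
```

Because the attackers arranged for a genuinely signed message to be relayed, every verdict here reads ‘pass’, and the usual technical red flags never appear.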

In another cyber-related development, Microsoft issued a warning regarding the use of Node.js in distributing malware. Attackers have been using the JavaScript runtime environment to deploy malware through scripts and executables, particularly targeting cryptocurrency traders via malvertising campaigns.

The technique involves executing JavaScript directly from the command line, making it harder for traditional security tools to detect.
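One way defenders can look for this pattern, sketched here as a hypothetical rule rather than Microsoft’s actual detection logic, is to flag process command lines where Node.js is invoked with its inline-evaluation flags:

```python
# Hypothetical detection sketch (not Microsoft's actual rule): flag process
# command lines where Node.js is asked to evaluate JavaScript inline.
INLINE_FLAGS = ("-e", "--eval", "-p", "--print")

def is_inline_node(cmdline: list) -> bool:
    """Return True if the command line runs node with an inline-eval flag."""
    if not cmdline:
        return False
    exe = cmdline[0].lower().removesuffix(".exe")  # normalise Windows paths
    if not (exe == "node" or exe.endswith("/node") or exe.endswith("\\node")):
        return False
    return any(flag in cmdline[1:] for flag in INLINE_FLAGS)

print(is_inline_node(["node", "-e", "require('https').get('https://evil.example')"]))  # True
print(is_inline_node(["node", "server.js"]))  # False
```

A real rule would need more nuance, since developers legitimately use `node -e` for quick one-liners; the point is that inline execution leaves no script file on disk for scanners to inspect.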

Meanwhile, the US has witnessed a significant change in its disinformation-fighting efforts.

The State Department has closed its Counter Foreign Information Manipulation and Interference group, previously known as the Global Engagement Center, after accusations that it was overreaching in its censorship activities.

The closure, led by Secretary of State Marco Rubio, has sparked criticism, with some seeing it as a victory for foreign powers like Russia and China.

Finally, gig workers face new challenges as the Tech Transparency Project revealed that Facebook groups are being used to trade fake gig worker accounts for platforms like Uber and Lyft.

Sellers offer access to verified accounts, bypassing safety checks, and putting passengers and customers at risk. Despite reports to Meta, many of these groups remain active, with the social media giant’s automated systems failing to curb the activity.


ChatGPT search grows rapidly in Europe

ChatGPT search, the web-accessing feature within OpenAI’s chatbot, has seen rapid growth across Europe, attracting an average of 41.3 million monthly active users in the six months leading up to March 31.

It marks a sharp rise from 11.2 million in the previous six-month period, according to a regulatory filing by OpenAI Ireland Limited.

The service must now report this data under the EU’s Digital Services Act (DSA), which defines monthly recipients as users who actively view or interact with the platform.

Should usage cross 45 million, ChatGPT search could be classified as a ‘very large’ online platform and face stricter rules, including transparency obligations, user opt-outs from personalised recommendations, and regular audits.

Failure to follow DSA regulations could lead to serious penalties, up to 6% of OpenAI’s global revenue, or even a temporary ban in the EU for ongoing violations. The law aims to ensure online platforms operate more responsibly and with better oversight in the digital space.

Despite gaining ground, ChatGPT search still lags far behind Google, which handles hundreds of times more queries.

Studies have also raised concerns about the accuracy of AI search tools, with ChatGPT found to misidentify a majority of news articles and occasionally misrepresent licensed content from publishers.

For now, these AI tools still need improvement before they can reliably replace traditional search.


Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.


Fake banking apps leave sellers thousands out of pocket

Scammers are using fake mobile banking apps to trick people into handing over valuable items without receiving any payment.

These apps, which convincingly mimic legitimate platforms, display fake ‘successful payment’ screens during face-to-face sales, allowing fraudsters to walk away with goods while the money never arrives.

Victims like Anthony Rudd and John Reddock have lost thousands after being targeted while selling items through social media marketplaces. Mr Rudd handed over £1,000 worth of tools from his Salisbury workshop, only to realise the payment notification was fake.

Mr Reddock, from the UK, lost a £2,000 gold bracelet he had hoped to sell to fund a holiday for his children.

BBC West Investigations found that some of these fake apps, previously removed from the Google Play store, are now being downloaded directly from the internet onto Android phones.

The Chartered Trading Standards Institute described this scam as an emerging threat, warning that in-person fraud is growing more complex instead of fading away.

With police often unable to track down suspects, small business owners like Sebastian Liberek have been left feeling helpless after being targeted repeatedly.

He has lost hundreds of pounds to fake transfers and believes scammers will continue striking, while enforcement remains limited and platforms fail to do enough to stop the spread of fraud.


OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor successfully blocked 98.7% of risky prompts, although OpenAI admits the system does not account for users trying again with different wording, a gap still covered by human oversight instead of relying solely on automation.

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the absence of a safety report for GPT-4.1, which launched this week without accompanying transparency documentation.


xAI pushes Grok forward with memory update

Elon Musk’s AI venture, xAI, has introduced a new ‘memory’ feature for its Grok chatbot in a bid to compete more closely with established rivals like ChatGPT and Google’s Gemini.

The update allows Grok to remember details from past conversations, enabling it to provide more personalised responses when asked for advice or recommendations, instead of offering generic answers.

Unlike before, Grok can now ‘learn’ a user’s preferences over time, provided it’s used frequently enough. The move mirrors similar features from competitors, with ChatGPT already referencing full chat histories and Gemini using persistent memory to shape its replies.

According to xAI, the memory is fully transparent. Users can view what Grok has remembered and choose to delete specific entries at any time.

The memory function is currently available in beta on Grok’s website and mobile apps, although not yet accessible to users in the EU or UK.

The feature is enabled by default but can be turned off in the settings menu under Data Controls. Deleting individual memories is also possible via the web chat interface, with Android support expected shortly.

xAI has confirmed it is working on adding memory support to Grok’s version on X, an expansion aimed at deepening the bot’s integration with users’ digital lives rather than limiting the experience to a single platform.
