MTN confirms cybersecurity breach and data exposure

MTN Group has confirmed a cybersecurity breach that exposed personal data of some customers in certain markets. The telecom giant assured the public, however, that its core infrastructure remains secure and fully operational.

The breach involved an unknown third party gaining unauthorised access to parts of MTN’s systems, though the company emphasised that critical services, including mobile money and digital wallets, were unaffected.

In a statement released on Thursday, MTN clarified that investigations are ongoing, but no evidence suggests any compromise of its central infrastructure, such as its network, billing, or financial service platforms.

MTN has alerted South African law enforcement and is collaborating with regulatory bodies in the affected regions.

The company urged customers to take steps to safeguard their data, such as monitoring financial statements, using strong passwords, and being cautious with suspicious communications.

MTN also recommended enabling multi-factor authentication and avoiding sharing sensitive information like PINs or passwords through unsecured channels.
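In practice, the multi-factor authentication MTN recommends usually means time-based one-time passwords (TOTP), the rotating six-digit codes produced by authenticator apps. As a rough illustration of how such codes are derived (a minimal RFC 6238 sketch for context, not MTN's implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password.

    secret_b32: the shared secret as a base32 string (as shown in QR setup).
    t: Unix timestamp to evaluate at (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code is recomputed from a shared secret every 30 seconds, a stolen password alone is not enough to log in, which is exactly why the advice above pairs MFA with strong passwords.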

While investigations continue, MTN has committed to providing updates as more details emerge, reiterating its dedication to transparency and customer protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Politeness to AI is about us, not them

In his thought-provoking blog post ‘Politeness in 2025: Why are we so kind to AI?’, Dr Jovan Kurbalija explores why nearly 80% of users in the UK and the USA instinctively say ‘please’ and ‘thank you’ to AI platforms like ChatGPT.

While machines lack feelings, our politeness reveals more about human psychology and cultural habits than the technology itself. For many, courtesy is a deeply ingrained reflex shaped by personality traits such as agreeableness and lifelong social conditioning, extending kindness even to non-sentient entities.

However, not everyone shares this approach. Some users are driven by subtle fears of future AI dominance, using politeness as a safeguard, while others prioritise efficiency, viewing AI purely as a tool undeserving of social niceties.

A rational minority dismisses politeness altogether, recognising AI as nothing more than code. Dr Kurbalija highlights that these varied responses reflect how we perceive and interact with technology, influenced by both evolutionary instincts and modern cognitive biases.

Beyond individual behaviour, Kurbalija points to a deeper issue: our tendency to humanise AI and expect it to behave like us, unlike traditional machines. This blurring of lines between tool and teammate raises important questions about how our perceptions shape AI’s role in society.

Ultimately, he suggests that politeness toward AI isn’t about the machine—it reflects the kind of humans we aspire to be, preserving empathy and grace in an increasingly digital world.

ChatGPT expands Deep Research to more users

Deep Research, a feature OpenAI added to ChatGPT in February, is gradually rolling out across its user base, including subscribers on the Plus, Team, and Pro plans; users of the free ChatGPT app on iOS and Android can now access a simplified version.

Designed to produce in-depth reports and analyses within minutes, Deep Research uses OpenAI’s o3 model to perform research tasks that would otherwise take people hours to complete.

Instead of limiting access to paid users alone, OpenAI has rolled out a lightweight version powered by its o4-mini AI model for free users. Although responses are shorter, the company insists the quality and depth remain comparable.

The more efficient model also helps reduce costs while delivering results OpenAI describes as ‘nearly as intelligent’ as the full version’s.

The feature’s capabilities stretch from suggesting personalised product purchases like cars or TVs, to helping with complex decisions such as choosing a university or analysing market trends.

Free-tier users are currently allowed up to five Deep Research tasks each month, whereas Plus and Team plans get ten full and fifteen lightweight tasks. Pro users enjoy a generous 125 tasks of each version per month, and Edu and Enterprise plans will gain access next week.

Once users hit their full-version limit, they’ll be automatically shifted to the lightweight tool instead of losing access altogether. Meanwhile, Google’s Gemini offers a similar function for its paying customers, also aiming to deliver quick, human-level research and analysis.

Meta under scrutiny in France over digital ad practices

Meta, the parent company of Facebook, is facing fresh legal backlash in France as 67 French media companies representing over 200 publications filed a lawsuit alleging unfair competition in the digital advertising market. 

The case, brought before the Paris business tribunal, accuses Meta of abusing its dominant position through massive personal data collection and targeted advertising without proper consent.

The case is the latest in a string of EU legal challenges facing the tech giant this week.

Broadcasters TF1, France TV, and BFM TV, major newspaper groups such as Le Figaro and Libération, and public broadcaster Radio France are among the plaintiffs.

They argue that Meta’s ad dominance is built on practices that undermine fair competition and jeopardise the sustainability of traditional media.

The French case adds to mounting pressure across the EU. In Spain, Meta is due to face trial over a €551 million complaint filed by over 80 media firms in October. 

Meanwhile, EU regulators fined Meta and Apple earlier this year for breaching European digital market rules, while online privacy advocates have launched parallel complaints over Meta’s data handling.

Legal firms Scott+Scott and Darrois Villey Maillot Brochier represent the French media alliance.

UK introduces landmark online safety rules to protect children

The UK’s regulator, Ofcom, has unveiled new online safety rules to provide stronger protections for children, requiring platforms to adjust algorithms, implement stricter age checks, and swiftly tackle harmful content by 25 July or face hefty fines. These measures target sites hosting pornography or content promoting self-harm, suicide, and eating disorders, demanding more robust efforts to shield young users.

Ofcom chief Dame Melanie Dawes called the regulations a ‘gamechanger,’ emphasising that platforms must adapt if they wish to serve under-18s in the UK. While supporters like former Facebook safety officer Prof Victoria Baines see this as a positive step, critics argue the rules don’t go far enough, with campaigners expressing disappointment over perceived gaps, particularly in addressing encrypted private messaging.

The rules, issued under the Online Safety Act and still pending parliamentary approval, include over 40 obligations, such as clearer terms of service for children, annual risk reviews, and dedicated accountability for child safety. The NSPCC welcomed the move but urged Ofcom to tighten oversight, especially where hidden online risks remain unchecked.

Ubisoft under fire for forcing online connection in offline games

French video game publisher Ubisoft is facing a formal privacy complaint from European advocacy group noyb for requiring players to stay online even when enjoying single-player games.

The complaint, lodged with Austria’s data protection authority, accuses Ubisoft of violating EU privacy laws by collecting personal data without consent.

Noyb argues that Ubisoft makes players connect to the internet and log into a Ubisoft account unnecessarily, even when they are not interacting with other users.

Instead of limiting data collection to essential functions, noyb claims the company contacts external servers, including Google and Amazon, over 150 times during gameplay. This, they say, reveals a broader surveillance practice hidden beneath the surface.

Ubisoft, known for blockbuster titles like Assassin’s Creed and Far Cry, has not yet explained why such data collection is needed for offline play.

The complainant, who examined the game’s network traffic, found that Ubisoft gathers login and browsing data and uses third-party tools, practices that require explicit user consent under the GDPR. Ubisoft has reportedly not justified them.

Noyb is calling on regulators to demand deletion of all data collected without a clear legal basis and to fine Ubisoft €92 million. They argue that consumers, who already pay steep prices for video games, should not have to sacrifice their privacy in the process.

OpenAI partners with major news outlets

OpenAI has signed a content-sharing deal with The Washington Post, adding to agreements with major media outlets including Politico, Vox, Wired, and Vanity Fair that allow their content to be featured in ChatGPT.

As part of the deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with over 20 news publishers and 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to head off criticism over copyright infringement, having previously faced legal challenges, most notably from The New York Times, over claims that its chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

AI-assisted films are now eligible for the Oscars

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced using AI to refine his Hungarian accent, while Emilia Pérez, an award-winning musical, used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to purely traditional methods, though their rise has raised industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.

OpenAI eyes Chrome in bid to boost ChatGPT

OpenAI has expressed interest in acquiring Google’s Chrome browser if it were to be made available, viewing it as a potential boost for its AI platform, ChatGPT.

The remarks, made by Nick Turley, head of product for ChatGPT, surfaced during the US Department of Justice’s antitrust trial against Google. The case follows a 2024 ruling that found Google had maintained an illegal monopoly in online search and advertising.

Although Google has shown no intention to sell Chrome and plans to appeal, the DoJ has suggested the move as a remedy to restore competition.

Turley disclosed that OpenAI previously approached Google to use its search technology within ChatGPT, after facing limitations with Microsoft Bing, its current provider.

An email from OpenAI presented in court showed the company proposed using multiple partners, including Google’s search API, to improve the chatbot’s performance. Google, however, declined the request, citing fears of empowering rivals.

Turley confirmed there is currently no partnership with Google and noted that ChatGPT remains years away from answering most queries using its own search system.

The testimony also highlighted OpenAI’s distribution challenges. Turley voiced concerns over being shut out of key access points controlled by major tech firms, such as browsers and app stores.

While OpenAI secured integration with Apple’s iPhones, it has struggled to achieve similar placements on Android devices. Turley argued that forcing Google to share search data with competitors could speed up ChatGPT’s development and improve the user experience.

Russian hackers target NGOs with fake video calls

Hackers linked to Russia are refining their techniques to infiltrate Microsoft 365 accounts, according to cybersecurity firm Volexity.

Their latest strategy targets non-governmental organisations (NGOs) associated with Ukraine by exploiting OAuth, an open standard that lets apps obtain account access without handling passwords.

Victims are lured into fake video calls through apps like Signal or WhatsApp and tricked into handing over OAuth codes, which attackers then use to access Microsoft 365 environments.

The campaign, first detected in March, involved messages claiming to come from European security officials proposing meetings with political representatives. Instead of legitimate video links, these messages directed recipients to OAuth code generators.

Once a code was shared, attackers could gain entry into accounts containing sensitive data. Staff at human rights organisations were especially targeted due to their work on Ukraine-related issues.

Volexity attributed the scheme to two threat actors, UTA0352 and UTA0355, though it did not directly connect them to any known Russian advanced persistent threat groups.

A previous attack from the same actors used Microsoft Device Code Authentication, usually reserved for connecting smart devices, instead of traditional login methods. Both campaigns show a growing sophistication in social engineering tactics.

Given the widespread use of Microsoft 365 tools like Outlook and Teams, experts urge organisations to heighten awareness among staff.

Rather than trusting unsolicited messages on encrypted apps, users should remain cautious when prompted to click links or enter authentication codes, as these could be cleverly disguised attempts to breach secure systems.
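Part of that caution can be automated. As a purely illustrative sketch (a hypothetical keyword heuristic, not Volexity's or Microsoft's tooling), a message that both mentions an OAuth or device code and asks the recipient to hand it over matches the lure described above:

```python
# Hypothetical heuristic: flag chat messages that ask the recipient to
# hand over an OAuth authorisation or device code, the lure used in the
# campaigns described above. Illustrative only; real detection would need
# far more context (sender identity, links, account telemetry).

CODE_TERMS = ("oauth", "device code", "authorisation code",
              "authorization code", "sign-in code", "login code")
REQUEST_TERMS = ("share", "send", "reply with", "paste", "enter", "read out")

def looks_like_code_phish(message: str) -> bool:
    text = message.lower()
    mentions_code = any(term in text for term in CODE_TERMS)
    asks_for_it = any(term in text for term in REQUEST_TERMS)
    return mentions_code and asks_for_it
```

The simple two-condition check works because legitimate services never ask users to relay such codes to another person; any message that does so is suspect by construction.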
