MTN confirms cybersecurity breach and data exposure

MTN Group has confirmed a cybersecurity breach that exposed personal data of some customers in certain markets. The telecom giant assured the public, however, that its core infrastructure remains secure and fully operational.

The breach involved an unknown third party gaining unauthorised access to parts of MTN’s systems, though the company emphasised that critical services, including mobile money and digital wallets, were unaffected.

In a statement released on Thursday, MTN clarified that investigations are ongoing, but no evidence suggests any compromise of its central infrastructure, such as its network, billing, or financial service platforms.

MTN has alerted South African law enforcement and is collaborating with regulatory bodies in the affected regions.

The company urged customers to take steps to safeguard their data, such as monitoring financial statements, using strong passwords, and being cautious with suspicious communications.

MTN also recommended enabling multi-factor authentication and avoiding sharing sensitive information like PINs or passwords through unsecured channels.

While investigations continue, MTN has committed to providing updates as more details emerge, reiterating its dedication to transparency and customer protection.

North Korean hackers create fake US firms to target crypto developers

North Korea’s Lazarus Group has launched a sophisticated campaign to infiltrate the cryptocurrency industry by registering fake companies in the US and using them to lure developers into downloading malware.

According to a Reuters investigation, these US-registered shell companies, including Blocknovas LLC and Softglide LLC, were set up using false identities and addresses, lending the operation a veneer of legitimacy.

Once established, the fake firms posted job listings through legitimate platforms like LinkedIn and Upwork to attract developers. Applicants were guided through fake interview processes and instructed to download so-called test assignments.

Instead of harmless software, the files installed malware that enabled the hackers to steal passwords, crypto wallet keys, and other sensitive information.

The FBI has since seized Blocknovas’ domain and confirmed its connection to Lazarus, labelling the campaign a significant evolution in North Korea’s cyber operations.

These attacks were supported by Russian infrastructure, allowing Lazarus operatives to bypass North Korea’s limited internet access.

Tools such as VPNs and remote desktop software enabled them to manage operations, communicate over platforms like GitHub and Telegram, and even record training videos on how to exfiltrate data.

Silent Push researchers confirmed that the campaign has affected hundreds of developers, and that some of the stolen access was likely passed to state-aligned espionage units rather than used for theft alone.

Officials from the US, South Korea, and the UN say the revenue from such cyberattacks is funnelled into North Korea’s nuclear missile programme. The FBI continues to investigate and has warned that not only the hackers but also those assisting their operations could face serious consequences.

Politeness to AI is about us, not them

In his thought-provoking blog post ‘Politeness in 2025: Why are we so kind to AI?’, Dr Jovan Kurbalija explores why nearly 80% of users in the UK and the USA instinctively say ‘please’ and ‘thank you’ to AI platforms like ChatGPT.

While machines lack feelings, our politeness reveals more about human psychology and cultural habits than the technology itself. For many, courtesy is a deeply ingrained reflex shaped by personality traits such as agreeableness and lifelong social conditioning, extending kindness even to non-sentient entities.

However, not everyone shares this approach. Some users are driven by subtle fears of future AI dominance, using politeness as a safeguard, while others prioritise efficiency, viewing AI purely as a tool undeserving of social niceties.

A rational minority dismisses politeness altogether, recognising AI as nothing more than code. Dr Kurbalija highlights that these varied responses reflect how we perceive and interact with technology, influenced by both evolutionary instincts and modern cognitive biases.

Beyond individual behaviour, Kurbalija points to a deeper issue: our tendency to humanise AI and expect it to behave like us, unlike traditional machines. This blurring of lines between tool and teammate raises important questions about how our perceptions shape AI’s role in society.

Ultimately, he suggests that politeness toward AI isn’t about the machine—it reflects the kind of humans we aspire to be, preserving empathy and grace in an increasingly digital world.

ChatGPT expands Deep Research to more users

Deep Research, a feature OpenAI added to ChatGPT in February, is gradually becoming available across the user base. Subscribers on the Plus, Team, and Pro plans have access, while even those using the free ChatGPT app on iOS and Android can now use a simplified version.

Designed to carry out in-depth reports and analyses within minutes, Deep Research uses OpenAI’s o3 model to perform tasks that would otherwise take people hours to complete.

Instead of limiting access to paid tiers alone, OpenAI has rolled out a lightweight version for free users, powered by its o4-mini model. Although responses are shorter, the company insists the quality and depth remain comparable.

The more efficient model also helps reduce costs, while delivering what OpenAI calls ‘nearly as intelligent’ results as the full version.

The feature’s capabilities stretch from suggesting personalised product purchases, such as cars or TVs, to helping with complex decisions like choosing a university or analysing market trends.

Free-tier users are currently allowed up to five Deep Research tasks each month, whereas the Plus and Team plans get ten full and fifteen lightweight tasks. Pro users enjoy a generous 125 tasks of each version per month, and EDU and Enterprise plans will gain access next week.

Once users hit their full-version limit, they’ll be automatically shifted to the lightweight tool instead of losing access altogether. Meanwhile, Google’s Gemini offers a similar function for its paying customers, also aiming to deliver quick, human-level research and analysis.

Ubisoft under fire for forcing online connection in offline games

French video game publisher Ubisoft is facing a formal privacy complaint from European advocacy group noyb for requiring players to stay online even when playing single-player games.

The complaint, lodged with Austria’s data protection authority, accuses Ubisoft of violating EU privacy laws by collecting personal data without consent.

Noyb argues that Ubisoft makes players connect to the internet and log into a Ubisoft account unnecessarily, even when they are not interacting with other users.

Instead of limiting data collection to essential functions, noyb claims the company contacts external servers, including Google and Amazon, over 150 times during gameplay. This, they say, reveals a broader surveillance practice hidden beneath the surface.

Ubisoft, known for blockbuster titles like Assassin’s Creed and Far Cry, has not yet explained why such data collection is needed for offline play.

The complainant, who examined the game’s network traffic, found that Ubisoft gathers login and browsing data and uses third-party tools, practices that require explicit user permission under GDPR rules. Ubisoft has reportedly failed to justify these invasive practices.

Noyb is calling on regulators to demand deletion of all data collected without a clear legal basis and to fine Ubisoft €92 million. They argue that consumers, who already pay steep prices for video games, should not have to sacrifice their privacy in the process.

Ransomware decline masks growing threat

A recent drop in reported ransomware attacks might seem encouraging, yet experts warn this is likely misleading. Figures from the NCC Group show a 32% decline in March 2025 compared to the previous month, totalling 600 incidents.

However, this dip is attributed to unusually large-scale attacks in earlier months, rather than an actual reduction in cybercrime. In fact, incidents were up 46% compared with March last year, highlighting the continued escalation in threat activity.

Rather than fading, ransomware groups are becoming more sophisticated. Babuk 2.0 emerged as the most active group in March, though doubts surround its legitimacy. Security researchers believe it may be recycling leaked data from previous breaches, aiming to trick victims instead of launching new attacks.

The tactic mirrors behaviour seen after law enforcement disrupted other major ransomware networks, such as LockBit in 2024.

Industrials were the hardest hit, followed by consumer-focused sectors, while North America bore the brunt of geographic targeting.

With nearly half of all recorded attacks occurring in the region, analysts expect North America, especially Canada, to remain a prime target amid rising political tensions and cyber vulnerability.

Meanwhile, cybercriminals are turning to malvertising, malicious code hidden in online advertisements, as a stealthier route of attack. This tactic has gained traction through the misuse of trusted platforms like GitHub and Dropbox, and is increasingly being enhanced with generative AI tools.

Instead of relying solely on technical expertise, attackers now use AI to craft more convincing and complex threats. As these strategies grow more advanced, experts urge organisations to stay alert and prioritise threat intelligence and collaboration to navigate this volatile cyber landscape.

Jamaica tests AI tools to aid teachers

The Jamaican Ministry of Education is testing AI tools in schools to assist teachers with marking and administrative duties.

Portfolio Minister Senator Dana Morris Dixon announced the pilot during the Jamaica Teachers’ Association (JTA) Education Conference 2025, emphasising that it would allow teachers to focus more on interacting with students while AI handles routine tasks like grading.

The Ministry is also preparing to launch the Jamaica Learning Assistant, an AI-powered tool that personalises learning to fit individual students’ preferences, such as stories, humour, or quizzes.

Morris Dixon highlighted that AI is not meant to replace teachers, but to support them in delivering more effective lessons. The technology will allow students to review lessons, explore topics in more depth, and reinforce their understanding outside the classroom.

Looking ahead, the Government plans to open Jamaica’s first state-of-the-art AI lab later this year. The facility will offer a space where both students and teachers can develop technological solutions tailored for schools.

Additionally, the Ministry is distributing over 15,000 laptops, 600 smart boards, and 25,000 vouchers for teachers to subsidise the purchase of personal laptops to further integrate technology into the education system.

JTA President Mark Smith acknowledged the transformative potential of AI, calling it one of the most significant technological breakthroughs in history.

He urged educators to embrace this new paradigm and collaborate with the Ministry and the private sector to advance digital learning initiatives across the island.

The conference, held under the theme ‘Innovations in Education Technology: The Imperative of Change,’ reflects the ongoing push towards modernising education in Jamaica.

Films made with AI are now eligible for Oscars

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced using AI to refine his Hungarian accent, while Emilia Pérez, a musical that claimed an award, used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to traditional methods for filmmakers, though not without raising industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.

Footnotes to bring crowd-sourced context to TikTok

TikTok is trialling a new feature called Footnotes in the United States, allowing users to add context to videos that may be misleading. The move mirrors the Community Notes system used by X, though TikTok will continue its own fact-checking programme in parallel.

Eligible adult users in the United States can apply to contribute Footnotes, and they will also be able to rate the helpfulness of others’ contributions.

Footnotes considered useful will appear publicly on TikTok, where the wider user base can then vote on their value. The platform’s head of operations, Adam Presser, said the feature is designed to help users better understand complex topics, ongoing events, or content involving potentially misleading statistics.

The initiative builds on TikTok’s existing tools, including content labels, search banners, and partnerships with third-party fact-checkers such as AFP.

The announcement comes as TikTok’s parent company, ByteDance, continues negotiations with the US government to avoid a potential ban.

Talks over a sale have reportedly stalled amid rising tensions and new tariffs between Washington and Beijing.

While other tech giants such as Meta have scaled back fact-checking in favour of community-based moderation, TikTok is taking a combined approach to ensure greater content accuracy.
