SK Telecom investigates data breach after cyberattack

South Korean telecoms leader SK Telecom has confirmed a cyberattack in which malware was used to compromise customer data.

The breach was detected on 19 April, prompting an immediate internal investigation and response. Authorities, including the Korea Internet Security Agency, have been alerted.

Personal information of South Korean customers was accessed during the attack, although the extent of the breach remains under review. In response, SK Telecom is offering a complimentary SIM protection service, hinting at potential SIM swapping risks linked to the leaked data.

The infected systems were quickly isolated and the malware removed. While no group has claimed responsibility, concerns remain over possible state-sponsored involvement, as telecom providers are frequent targets for cyberespionage.

It is currently unknown whether ransomware played a role in the incident. Investigations are ongoing as officials continue to assess the scope and origin of the breach.

Russian hackers target NGOs with fake video calls

Hackers linked to Russia are refining their techniques to infiltrate Microsoft 365 accounts, according to cybersecurity firm Volexity.

Their latest strategy targets non-governmental organisations (NGOs) associated with Ukraine by exploiting OAuth, a protocol used for app authorisation without passwords.

Victims are lured into fake video calls through apps like Signal or WhatsApp and tricked into handing over OAuth codes, which attackers then use to access Microsoft 365 environments.
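The danger of a shared code comes from how the OAuth authorisation-code flow works: whoever redeems a valid code receives the access tokens, with no further password or MFA challenge. The sketch below, in Python, illustrates the redemption step against Microsoft's documented identity platform v2.0 token endpoint; the client ID, code, and redirect URI are hypothetical placeholders, and real flows may additionally require a PKCE code verifier.

```python
# Illustrative sketch of why a leaked OAuth authorisation code is dangerous:
# redeeming it completes the sign-in without any further credential prompt.
# Endpoint and parameter names follow Microsoft's documented v2.0 flow;
# all concrete values are hypothetical.
import requests

TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def redeem_code(client_id: str, code: str, redirect_uri: str) -> dict:
    """Exchange an authorisation code for tokens (public-client flow)."""
    resp = requests.post(TOKEN_URL, data={
        "client_id": client_id,        # the app the victim believed they authorised
        "grant_type": "authorization_code",
        "code": code,                  # the value the victim was tricked into sharing
        "redirect_uri": redirect_uri,  # must match the one in the original request
        "scope": "https://graph.microsoft.com/.default",
    })
    resp.raise_for_status()
    return resp.json()  # includes an access_token for Microsoft 365 resources
```

Because the victim completed any MFA challenge before the code was issued, an attacker redeeming it inherits that authentication, which is why the lure only needed the victim to paste the code back.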

The campaign, first detected in March, involved messages claiming to come from European security officials proposing meetings with political representatives. Instead of legitimate video links, these messages directed recipients to OAuth code generators.

Once a code was shared, attackers could gain entry into accounts containing sensitive data. Staff at human rights organisations were especially targeted due to their work on Ukraine-related issues.

Volexity attributed the scheme to two threat actors, UTA0352 and UTA0355, though it did not directly connect them to any known Russian advanced persistent threat groups.

A previous attack from the same actors used Microsoft Device Code Authentication, usually reserved for connecting smart devices, instead of traditional login methods. Both campaigns show a growing sophistication in social engineering tactics.

Given the widespread use of Microsoft 365 tools like Outlook and Teams, experts urge organisations to heighten awareness among staff.

Rather than trusting unsolicited messages on encrypted apps, users should remain cautious when prompted to click links or enter authentication codes, as these could be cleverly disguised attempts to breach secure systems.

Google spoofed in sophisticated phishing attack

A sophisticated phishing attack recently targeted Google users, exploiting a well-known email authentication method to bypass security measures.

The attackers sent emails appearing to be from Google’s legitimate address, no-reply@accounts.google.com, and claimed the recipient needed to comply with a subpoena.

The emails contained a link to a Google Sites page that presented a fake legal support portal and prompted users to log in, putting their credentials at risk.

What made this phishing attempt particularly dangerous was that it successfully passed both DMARC and DKIM email authentication checks, making it appear entirely genuine to recipients.
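For context, DKIM is a cryptographic signature validated against a public key in the sender's DNS, and DMARC is a DNS-published policy telling receivers how to treat mail that fails alignment; passing both means the message is consistent with the domain's published records, not that its content is trustworthy. A minimal Python sketch of looking up a domain's DMARC policy, assuming the third-party dnspython package is installed:

```python
# Minimal sketch: fetch the DMARC policy a domain publishes in DNS, the same
# record receiving servers consult. Requires the third-party dnspython package.
import dns.resolver

def get_dmarc_policy(domain: str) -> str | None:
    """Return the DMARC TXT record for a domain, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # domain publishes no DMARC policy
    for record in answers:
        txt = b"".join(record.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
    return None

print(get_dmarc_policy("google.com"))
```

The sketch also shows the limit of these checks: a message genuinely generated and signed by the spoofed domain's own infrastructure will pass them while still being malicious.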

In another cyber-related development, Microsoft issued a warning regarding the use of Node.js in distributing malware. Attackers have been using the JavaScript runtime environment to deploy malware through scripts and executables, particularly targeting cryptocurrency traders via malvertising campaigns.

The technique involves executing JavaScript directly from the command line, making it harder for traditional security tools to detect.
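Since `node -e` (and its `--eval`/`--print` variants) executes script text passed on the command line, no .js file ever lands on disk for file-based scanners to inspect. One hedged defensive idea, sketched in Python with the third-party psutil package, is to hunt for processes invoking Node with those flags; the flag list is an illustrative assumption, not Microsoft's guidance:

```python
# Hedged defensive sketch: flag running processes that invoke Node.js with
# inline-evaluation flags, since the script body never exists as a file for
# scanners to inspect. Requires the third-party psutil package; treat the
# flag list as illustrative.
import psutil

SUSPICIOUS_FLAGS = {"-e", "--eval", "-p", "--print"}

def find_inline_node() -> list[dict]:
    hits = []
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        info = proc.info
        if info["name"] and "node" in info["name"].lower():
            cmdline = info["cmdline"] or []
            if SUSPICIOUS_FLAGS.intersection(cmdline):
                hits.append(info)  # candidate for review, not proof of malice
    return hits

for hit in find_inline_node():
    print(f"pid={hit['pid']} cmdline={' '.join(hit['cmdline'])}")
```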

Meanwhile, the US has witnessed a significant change in its disinformation-fighting efforts.

The State Department has closed its Counter Foreign Information Manipulation and Interference group, previously known as the Global Engagement Center, after accusations that it was overreaching in its censorship activities.

The closure, led by Secretary of State Marco Rubio, has sparked criticism, with some seeing it as a victory for foreign powers like Russia and China.

Finally, gig workers face new challenges as the Tech Transparency Project revealed that Facebook groups are being used to trade fake gig worker accounts for platforms like Uber and Lyft.

Sellers offer access to verified accounts that bypass safety checks, putting passengers and customers at risk. Despite reports to Meta, many of these groups remain active, with the social media giant’s automated systems failing to curb the activity.

ChatGPT search grows rapidly in Europe

ChatGPT search, the web-accessing feature within OpenAI’s chatbot, has seen rapid growth across Europe, attracting an average of 41.3 million monthly active users in the six months leading up to March 31.

It marks a sharp rise from 11.2 million in the previous six-month period, according to a regulatory filing by OpenAI Ireland Limited.

The service must now report this data under the EU’s Digital Services Act (DSA), which defines monthly recipients as users who actively view or interact with the platform.

Should usage cross 45 million, ChatGPT search could be classified as a ‘very large’ online platform and face stricter rules, including transparency obligations, user opt-outs from personalised recommendations, and regular audits.

Failure to comply with the DSA could bring serious penalties of up to 6% of OpenAI’s global revenue, or even a temporary ban from the EU for persistent violations. The law aims to ensure online platforms operate more responsibly and with better oversight in the digital space.

Despite gaining ground, ChatGPT search still lags far behind Google, which handles hundreds of times more queries.

Studies have also raised concerns about the accuracy of AI search tools, with ChatGPT found to misidentify a majority of news articles and occasionally misrepresent licensed content from publishers.

Instead of fully replacing traditional search, these AI tools may still need improvement to become reliable alternatives.

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.
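Meta has not published how its model works, so any concrete description is speculative. Purely as an illustration of the general approach the article describes, the hypothetical sketch below treats age estimation as supervised classification over behavioural signals; every feature, value, and threshold is invented for the example:

```python
# Purely illustrative sketch of age estimation as supervised classification
# over behavioural signals. Meta has not published its model; the features,
# data, and method here are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per account: [teen_follower_ratio,
# teen_content_interaction_rate, account_age_days, avg_session_minutes]
X_train = np.array([
    [0.8, 0.9, 120, 95],   # labelled minor
    [0.7, 0.8, 200, 110],  # labelled minor
    [0.1, 0.2, 2500, 30],  # labelled adult
    [0.2, 0.1, 3100, 25],  # labelled adult
])
y_train = np.array([1, 1, 0, 0])  # 1 = likely under 16

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

candidate = np.array([[0.75, 0.85, 150, 100]])  # self-reported adult, teen-like signals
p_minor = model.predict_proba(candidate)[0, 1]
if p_minor > 0.9:  # conservative threshold to limit misclassification
    print(f"Route to Teen Account review (p={p_minor:.2f})")
```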

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.

Fake banking apps leave sellers thousands out of pocket

Scammers are using fake mobile banking apps to trick people into handing over valuable items without receiving any payment.

These apps, which convincingly mimic legitimate banking platforms, display fake ‘successful payment’ screens during in-person sales, allowing fraudsters to walk away with goods while the money never arrives.

Victims like Anthony Rudd and John Reddock have lost thousands after being targeted while selling items through social media marketplaces. Mr Rudd handed over £1,000 worth of tools from his Salisbury workshop, only to realise the payment notification was fake.

Mr Reddock, from the UK, lost a £2,000 gold bracelet he had hoped to sell to fund a holiday for his children.

BBC West Investigations found that some of these fake apps, previously removed from the Google Play store, are now being downloaded directly from the internet onto Android phones.

The Chartered Trading Standards Institute described this scam as an emerging threat, warning that in-person fraud is growing more complex instead of fading away.

With police often unable to track down suspects, small business owners like Sebastian Liberek have been left feeling helpless after being targeted repeatedly.

He has lost hundreds of pounds to fake transfers and believes scammers will continue striking, while enforcement remains limited and platforms fail to do enough to stop the spread of fraud.

CISA extends MITRE’s CVE program for 11 months

The US Cybersecurity and Infrastructure Security Agency (CISA) has extended its contract with the MITRE Corporation to continue operating the Common Vulnerabilities and Exposures (CVE) program for an additional 11 months. The decision was made one day before the existing contract was set to expire.

A CISA spokesperson confirmed that the agency exercised the option period in its $57.8 million contract with MITRE to prevent a lapse in CVE services. The contract, which had been due to expire on 17 April, includes provisions for optional extensions through March 2026.

‘The CVE Program is invaluable to the cyber community and a priority of CISA,’ the spokesperson stated, expressing appreciation for stakeholder support.

Yosry Barsoum, vice president of MITRE and director of its Center for Securing the Homeland, said that CISA identified incremental funding to maintain operations.

He noted that MITRE remains committed to supporting both the CVE and CWE (Common Weakness Enumeration) programs, and acknowledged the widespread support from government, industry, and the broader cybersecurity community.

The extension follows public concern raised earlier this week after Barsoum issued a letter indicating that program funding was at risk of expiring without renewal.

MITRE officials noted that, in the event of a contract lapse, the CVE program website would eventually go offline and no new CVEs would be published. Historical data would remain accessible via GitHub.

Launched in 1999, the CVE program serves as a central catalogue for publicly disclosed cybersecurity vulnerabilities. It is widely used by governments, private sector organisations, and critical infrastructure operators for vulnerability identification and coordination.
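As an aside for practitioners, individual records can be pulled from the public CVE Services API that backs cve.org. A brief Python sketch follows; the endpoint URL and JSON field names reflect the API as publicly documented, but treat them as assumptions that may change:

```python
# Sketch of fetching a single CVE record from the public CVE Services API
# behind cve.org. Endpoint and field names are as publicly documented at the
# time of writing; treat both as assumptions that may change.
import requests

def fetch_cve(cve_id: str) -> dict:
    resp = requests.get(f"https://cveawg.mitre.org/api/cve/{cve_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

record = fetch_cve("CVE-2021-44228")  # Log4Shell, a well-known identifier
meta = record["cveMetadata"]
print(meta["cveId"], meta["state"], meta.get("datePublished"))
```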

Amid recent uncertainty about the program’s future, a group of CVE Board members announced the formation of a new non-profit organisation — the CVE Foundation — aimed at supporting the long-term sustainability and governance of the initiative.

In a public statement, the group noted that while US government sponsorship had enabled the program’s growth, it also introduced concerns around reliance on a single national sponsor for what is considered a global public good.

The CVE Foundation is intended to provide a neutral, independent structure to ensure continuity and community oversight.

The foundation aims to enhance global governance, eliminate single points of failure in vulnerability management, and reinforce the CVE program’s role as a trusted and collaborative resource. Further information about the foundation’s structure and plans is expected to be released in the coming days.

CISA did not comment on the creation of the CVE Foundation. A MITRE spokesperson indicated the organisation intends to work with federal agencies, the CVE Board, and the cybersecurity community on options for ongoing support.

OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.
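OpenAI has not released the monitor’s internals, but the pattern it describes is a gating check that runs before the main model answers. The sketch below is a hypothetical stand-in for that pattern, not OpenAI’s implementation; the category labels, trigger terms, and refusal text are all invented:

```python
# Hypothetical 'monitor + refuse' gate illustrating the pattern described in
# the article. This is NOT OpenAI's implementation; categories, trigger terms,
# and the stub model are invented for the example.
RISK_CATEGORIES = {"chemical_synthesis", "biological_agent"}

def classify_risk(prompt: str) -> str | None:
    """Stand-in for a safety-focused reasoning model scoring the prompt."""
    lowered = prompt.lower()
    if any(term in lowered for term in ("nerve agent", "pathogen culture")):
        return "biological_agent"
    return None

def main_model(prompt: str) -> str:
    """Stub for the underlying model (e.g. o3 or o4-mini)."""
    return f"(answer to: {prompt})"

def answer(prompt: str) -> str:
    category = classify_risk(prompt)
    if category in RISK_CATEGORIES:
        return "I can't help with that request."  # model instructed to withhold
    return main_model(prompt)

print(answer("What is OAuth?"))
```

A production monitor would itself be a trained model rather than a keyword list, which is also why rephrased prompts remain a challenge, as OpenAI acknowledges below.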

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor successfully blocked 98.7% of risky prompts. OpenAI admits, however, that the system does not account for users retrying with different wording, a gap it covers with human oversight rather than relying solely on automation.

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the lack of a safety report for GPT-4.1, which launched this week without accompanying transparency documentation.

MITRE’s CVE program faces funding expiry, raising cybersecurity concerns

A cornerstone of the global cybersecurity ecosystem is facing an uncertain future. US government funding for MITRE Corporation to operate and maintain the Common Vulnerabilities and Exposures (CVE) program is set to expire, an unprecedented development that could significantly disrupt how security flaws are identified, tracked, and mitigated worldwide.

Launched in 1999, the CVE program has become the de facto international standard for cataloguing publicly known software vulnerabilities. Managed by MITRE under sponsorship from the Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA), the program has published over 274,000 CVE records to date.

MITRE has warned that a lapse in funding would not only halt development and modernisation of the CVE system but could also affect related initiatives such as the Common Weakness Enumeration (CWE). These tools are essential for vulnerability classification, secure coding practices, and prioritisation of cybersecurity risks.

While MITRE’s Yosry Barsoum noted that the US government is working to find a resolution, the looming gap has already prompted independent action. Cybersecurity firm VulnCheck, which acts as a CVE Numbering Authority (CNA), has preemptively reserved 1,000 CVEs for 2025 in an effort to maintain continuity.

Industry experts warn the consequences could be far-reaching. Despite the challenges, MITRE has affirmed its commitment to the CVE program and its role as a global resource. However, unless a new funding arrangement is secured, the future of this foundational infrastructure remains in question.

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

While the US saw the highest number of suspensions, India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion. Nearly half a billion of the removed ads were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged that its enforcement decisions have at times confused advertisers and says it is updating its messaging so that the reasons behind account actions are clearer.
