Russian hackers target NGOs with fake video calls

Hackers linked to Russia are refining their techniques to infiltrate Microsoft 365 accounts, according to cybersecurity firm Volexity.

Their latest strategy targets non-governmental organisations (NGOs) associated with Ukraine by exploiting OAuth, an open standard that lets applications access user accounts without handling their passwords.

Victims are lured into fake video calls through apps like Signal or WhatsApp and tricked into handing over OAuth codes, which attackers then use to access Microsoft 365 environments.

The campaign, first detected in March, involved messages claiming to come from European security officials proposing meetings with political representatives. Instead of legitimate video links, the messages directed recipients to pages that generate OAuth authorisation codes.

Once a code was shared, attackers could gain entry into accounts containing sensitive data. Staff at human rights organisations were especially targeted due to their work on Ukraine-related issues.
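
To see why a shared code is so valuable, it helps to know that in the standard OAuth 2.0 authorisation code flow, whoever holds a freshly issued code can redeem it at the identity provider’s token endpoint for access and refresh tokens. The sketch below is illustrative only: it shows the documented Microsoft identity platform exchange with a placeholder client ID and redirect URI, omits PKCE and other optional parameters, and is not a reconstruction of the attackers’ tooling.

```python
# Minimal sketch of a standard OAuth 2.0 authorisation code exchange against the
# Microsoft identity platform. The client_id and redirect_uri are placeholders;
# the point is that whoever performs this exchange receives the tokens.
import requests

TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"


def redeem_authorisation_code(code: str, client_id: str, redirect_uri: str) -> dict:
    """Exchange an authorisation code for tokens (the step a legitimate app,
    or anyone holding a victim's code, performs after sign-in)."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": code,                 # the code the victim was asked to share
            "client_id": client_id,       # a public client needs no secret here
            "redirect_uri": redirect_uri,
            "scope": "openid profile email offline_access",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # includes access_token and refresh_token
```

Because many Microsoft 365 clients are public clients that complete this step without any secret, possession of the code is effectively possession of the session, which is why the lure only needed victims to paste a short string back into the chat.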

Volexity attributed the scheme to two threat actors, UTA0352 and UTA0355, though it did not directly connect them to any known Russian advanced persistent threat groups.

An earlier attack by the same actors abused Microsoft Device Code Authentication, a sign-in flow normally reserved for devices with limited input, such as smart TVs, rather than traditional login methods. Both campaigns show growing sophistication in social engineering tactics.
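
For context, the device code flow works roughly as sketched below: the party that initiates the flow receives a short user code plus a device code, and then polls the token endpoint until someone signs in with that user code at microsoft.com/devicelogin. The endpoints are the documented Microsoft identity platform ones; the client ID is a placeholder, and this is an illustration of the legitimate flow rather than the actors’ actual tooling.

```python
# Hedged sketch of the documented Microsoft identity platform device code flow.
# The structural risk: tokens go to whoever initiated the flow, not to the
# person who types the user code into microsoft.com/devicelogin.
import time

import requests

AUTHORITY = "https://login.microsoftonline.com/common/oauth2/v2.0"
CLIENT_ID = "<public-client-id>"  # placeholder for any registered public client


def start_device_flow() -> dict:
    """Request a user_code/device_code pair to begin the sign-in."""
    resp = requests.post(
        f"{AUTHORITY}/devicecode",
        data={"client_id": CLIENT_ID, "scope": "openid profile offline_access"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # user_code, device_code, verification_uri, interval, ...


def poll_for_tokens(device_code: str, interval: int) -> dict:
    """Poll until someone completes the sign-in with the matching user code."""
    while True:
        time.sleep(interval)
        resp = requests.post(
            f"{AUTHORITY}/token",
            data={
                "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
                "client_id": CLIENT_ID,
                "device_code": device_code,
            },
            timeout=30,
        )
        body = resp.json()
        if "access_token" in body:
            return body  # sign-in completed by whoever entered the user code
        if body.get("error") != "authorization_pending":
            raise RuntimeError(body.get("error_description", "device flow failed"))
```

Entering a user code received over Signal or WhatsApp therefore hands the resulting session for your account to whoever initiated the flow, which is what made the lure effective.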

Given the widespread use of Microsoft 365 tools like Outlook and Teams, experts urge organisations to heighten awareness among staff.

Rather than trusting unsolicited messages on encrypted apps, users should remain cautious when prompted to click links or enter authentication codes, as these could be cleverly disguised attempts to breach secure systems.

Google spoofed in sophisticated phishing attack

A sophisticated phishing attack recently targeted Google users, exploiting a well-known email authentication method to bypass security measures.

The attackers sent emails appearing to be from Google’s legitimate address, no-reply@accounts.google.com, and claimed the recipient needed to comply with a subpoena.

The emails contained a link to a Google Sites page posing as a legal support portal, which prompted recipients to log in.

What made this phishing attempt particularly dangerous was that it passed both DMARC (Domain-based Message Authentication, Reporting and Conformance) and DKIM (DomainKeys Identified Mail) checks, making it appear entirely genuine to recipients.
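
In practice, ‘passing DKIM and DMARC’ refers to the verdicts the receiving mail server records in the Authentication-Results header. The hedged sketch below, using Python’s standard email library and a hypothetical saved message file, shows how to read those verdicts; a pass only means the signature verified and the domains aligned, not that the message content is safe.

```python
# Minimal sketch: read the DKIM/DMARC verdicts a mail server recorded in the
# Authentication-Results header of a saved message. "pass" proves signature
# validity and domain alignment, not that the message is trustworthy.
from email import policy
from email.parser import BytesParser


def authentication_results(raw_message_path: str) -> list[str]:
    with open(raw_message_path, "rb") as handle:
        message = BytesParser(policy=policy.default).parse(handle)
    # A message relayed through several hops may carry several of these headers.
    return message.get_all("Authentication-Results", [])


if __name__ == "__main__":
    for header in authentication_results("suspicious.eml"):  # hypothetical file name
        print(header)  # e.g. "... dkim=pass header.d=accounts.google.com; dmarc=pass ..."
```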

In another cyber-related development, Microsoft issued a warning regarding the use of Node.js in distributing malware. Attackers have been using the JavaScript runtime environment to deploy malware through scripts and executables, particularly targeting cryptocurrency traders via malvertising campaigns.

The new technique involves executing JavaScript directly from the command line, which makes it harder for traditional security tools to detect.
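
To illustrate why file-based scanning struggles here, Node.js can evaluate JavaScript passed inline on the command line (for example via its -e or --eval switches), so no script file ever touches disk. The sketch below is a generic heuristic a defender might apply to logged process command lines; the flag list and sample command are illustrative assumptions, not Microsoft’s detection logic.

```python
# Illustrative heuristic only: flag process command lines that run JavaScript
# inline through Node.js rather than loading a .js file from disk.
import shlex

INLINE_EVAL_FLAGS = {"-e", "--eval", "-p", "--print"}  # Node.js inline-execution switches


def looks_like_inline_node_execution(command_line: str) -> bool:
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # unparseable quoting; leave for other detections
    if not tokens or "node" not in tokens[0].lower():
        return False
    return any(token in INLINE_EVAL_FLAGS for token in tokens[1:])


# Hypothetical example of the pattern being flagged:
print(looks_like_inline_node_execution('node -e "require(\'https\').get(...)"'))  # True
```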

Meanwhile, the US has witnessed a significant change in its disinformation-fighting efforts.

The State Department has closed its Counter Foreign Information Manipulation and Interference group, previously known as the Global Engagement Center, after accusations that it was overreaching in its censorship activities.

The closure, led by Secretary of State Marco Rubio, has sparked criticism, with some seeing it as a victory for foreign powers like Russia and China.

Finally, gig workers face new challenges as the Tech Transparency Project revealed that Facebook groups are being used to trade fake gig worker accounts for platforms like Uber and Lyft.

Sellers offer access to verified accounts, bypassing safety checks and putting passengers and customers at risk. Despite reports to Meta, many of these groups remain active, with the social media giant’s automated systems failing to curb the activity.

ChatGPT search grows rapidly in Europe

ChatGPT search, the web-accessing feature within OpenAI’s chatbot, has seen rapid growth across Europe, attracting an average of 41.3 million monthly active users in the six months leading up to March 31.

It marks a sharp rise from 11.2 million in the previous six-month period, according to a regulatory filing by OpenAI Ireland Limited.

The service must now report this data under the EU’s Digital Services Act (DSA), which defines monthly recipients as users who actively view or interact with the platform.

Should usage cross 45 million average monthly users, ChatGPT search could be designated a ‘very large online platform’ (VLOP) and face stricter rules, including transparency obligations, user opt-outs from personalised recommendations, and regular audits.

Failure to follow DSA regulations could lead to serious penalties, up to 6% of OpenAI’s global revenue, or even a temporary ban in the EU for ongoing violations. The law aims to ensure online platforms operate more responsibly and with better oversight in the digital space.

Despite gaining ground, ChatGPT search still lags far behind Google, which handles hundreds of times more queries.

Studies have also raised concerns about the accuracy of AI search tools, with ChatGPT found to misidentify a majority of news articles and occasionally misrepresent licensed content from publishers.

For now, these AI tools may need further improvement before they can reliably replace traditional search.

Linguists find new purpose in the age of AI

In his latest blog, part of a series expanding on ‘Don’t Waste the Crisis: How AI Can Help Reinvent International Geneva’, Dr Jovan Kurbalija explores how linguists can shift from fearing AI to embracing a new era of opportunity. Geneva, home to over a thousand translators and interpreters, has felt the pressure as AI tools like ChatGPT have begun automating language tasks.

Yet, rather than rendering linguists obsolete, AI is transforming their role, highlighting the enduring importance of human expertise in bridging syntax and semantics—AI’s persistent blind spot. Dr Kurbalija emphasises that while AI excels at recognising patterns, it often fails to grasp meaning, nuance, and cultural context.

This is where linguists step in, offering critical value by enhancing AI’s understanding of language beyond mere structure. From supporting low-resource languages to ensuring ethical AI outputs in sensitive fields like law and diplomacy, linguists are positioned as key players in shaping responsible and context-aware AI systems.

Calling for adaptation over resistance, Dr Kurbalija advocates for linguists to upskill, specialise in areas where human judgement is irreplaceable, collaborate with AI developers, and champion ethical standards. Rather than facing decline, the linguistic profession is entering a renaissance, where embracing syntax and semantics ensures that AI amplifies human expression instead of diminishing it.

With Geneva’s vibrant multilingual community at the forefront, linguists have a pivotal role in guiding how language and technology evolve together in this new frontier.

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.

Apple makes climate progress with greener supply chain

Apple has made progress in reducing its environmental impact, according to the company’s own latest environmental progress report.

Its total greenhouse gas emissions dropped by 800,000 metric tons in 2024, marking a 5 percent reduction from the previous year.

Over the last decade, Apple has cut its global emissions by more than 60 percent, a notable achievement at a time when emissions from other tech firms continue to rise due to the growing demands of AI.

The reduction stems from efforts to use renewable energy, increase recycling, and work with suppliers to cut emissions. Apple reported that its suppliers collectively avoided nearly 24 million metric tons of greenhouse gas emissions last year through cleaner energy and improved efficiency.

The company is also tackling highly potent fluorinated gases used in making semiconductors and displays, with all direct display suppliers and 26 semiconductor partners committing to reducing such emissions by at least 90 percent.

Recycled materials played a larger role in Apple’s products in 2024, making up nearly a quarter of all materials used. Notably, 80 percent of the rare earth elements and most of the tungsten, cobalt, and aluminium used came from recycled sources.

Despite these efforts, Apple still generated 15.3 million metric tons of CO₂ last year, though it aims to cut emissions by 75 percent from 2015 levels by 2030 and by 90 percent by 2050 to meet international climate goals.

Footnotes to bring crowd-sourced context to TikTok

TikTok is trialling a new feature called Footnotes in the United States, allowing users to add context to videos that may be misleading. The move mirrors the Community Notes system used by X, though TikTok will continue its own fact-checking programme in parallel.

Eligible adult users in the United States can apply to contribute Footnotes, and they will also be able to rate the helpfulness of others’ contributions.

Footnotes considered useful will appear publicly on TikTok, where the wider user base can then vote on their value. The platform’s head of operations, Adam Presser, said the feature is designed to help users better understand complex topics, ongoing events, or content involving potentially misleading statistics.

The initiative builds on TikTok’s existing tools, including content labels, search banners, and partnerships with third-party fact-checkers such as AFP.

The announcement comes as TikTok’s parent company, ByteDance, continues negotiations with the US government to avoid a potential ban.

Talks over a sale have reportedly stalled amid rising tensions and new tariffs between Washington and Beijing.

While other tech giants such as Meta have scaled back fact-checking in favour of community-based moderation, TikTok is taking a combined approach to ensure greater content accuracy.

AI startup caught in Dev Mode trademark row

Figma has issued a cease-and-desist letter to Swedish AI startup Lovable over the use of the term ‘Dev Mode,’ a name Figma trademarked in 2023.

Lovable recently introduced its own Dev Mode feature, prompting the design platform to demand the startup stop using the name, citing its established use and intellectual property rights.

Figma’s version of Dev Mode helps bridge the gap between designers and developers, while Lovable’s tool allows users to preview and edit code without linking to GitHub.

Despite their differing functions, Figma insists on protecting the trademark, even though ‘developer mode’ is a widely used phrase across many software platforms. Companies such as Atlassian and Wix used similar terminology long before Figma obtained the trademark.

The legal move arrives as Figma prepares for an initial public offering, following Adobe’s failed acquisition attempt in 2023. The sudden emphasis on brand protection suggests the company is taking extra care with its intellectual assets ahead of its potential stock market debut.

Temu and Shein to raise US prices due to new tariffs

Fast fashion giants Temu and Shein have warned US shoppers to expect price hikes from next week, as sweeping new tariffs on Chinese imports come into effect under Donald Trump’s trade policy.

Both companies will lose access to the ‘de minimis’ exemption, which has allowed packages under $800 to enter the US duty-free. That change, taking effect from 2 May, will significantly raise costs for low-cost retailers who depend on cheap cross-border shipments.

The tariffs, which now reach up to 145%, are part of Trump’s escalating trade war with China. His revised plans impose a tax of $75 per item, rising to $150 by June, for shipments that were previously exempt.

Shein has told customers its operating expenses have risen and prices will be adjusted from 25 April in an effort to maintain product quality while absorbing the new costs.

In response to the tariffs and likely slowdown in US demand, both companies have also scaled back digital advertising.

According to Sensor Tower, Temu’s average US ad spend across major platforms dropped by 31% over two weeks, while Shein’s spending fell 19%.

The tariffs are expected to reshape fast fashion in the US, though some experts believe prices may still remain competitive compared to domestic alternatives.

Fake banking apps leave sellers thousands out of pocket

Scammers are using fake mobile banking apps to trick people into handing over valuable items without receiving any payment.

These apps, which convincingly mimic legitimate banking platforms, display fake ‘successful payment’ screens during face-to-face sales, allowing fraudsters to walk away with goods while the money never arrives.

Victims like Anthony Rudd and John Reddock have lost thousands after being targeted while selling items through social media marketplaces. Mr Rudd handed over £1,000 worth of tools from his Salisbury workshop, only to realise the payment notification was fake.

Mr Reddock, from the UK, lost a £2,000 gold bracelet he had hoped to sell to fund a holiday for his children.

BBC West Investigations found that some of these fake apps, previously removed from the Google Play store, are now being downloaded directly from the internet onto Android phones.

The Chartered Trading Standards Institute described this scam as an emerging threat, warning that in-person fraud is growing more complex instead of fading away.

With police often unable to track down suspects, small business owners like Sebastian Liberek have been left feeling helpless after being targeted repeatedly.

He has lost hundreds of pounds to fake transfers and believes scammers will continue striking, while enforcement remains limited and platforms fail to do enough to stop the spread of fraud.
