Online scams rise as Parkin urges Dubai residents to stay vigilant

Dubai’s parking provider, Parkin, has warned residents to stay alert as online scams targeting digital service users continue to rise, urging people to take immediate steps to protect their digital identities.

In an advisory, the company stressed that official entities will never ask users to log in or disclose sensitive information through unsolicited messages, emails, or phone calls. The warning comes amid growing concerns about phishing attempts and other online scams targeting users of digital platforms.

Parkin said residents should exercise caution if they receive unexpected requests for personal details, passwords, or verification codes. Users are strongly advised not to respond to suspicious links, attachments, or messages from unknown sources, which are commonly used in online scams.

The operator also urged the public to verify the authenticity of communications before taking any action. Residents who are unsure about the legitimacy of a message should check official websites or contact customer service channels directly. The advice applies to messages claiming to come from Parkin or other service providers.
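
The advice to verify a link before acting on it can be sketched as a simple hostname check. The snippet below is a toy illustration in Python, not anything Parkin publishes; the allow-list of official domains is purely hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of official domains (illustrative only).
OFFICIAL_DOMAINS = {"parkin.ae"}

def looks_official(url: str) -> bool:
    """Return True only if the link's hostname is an official domain
    or a subdomain of one. Lookalike domains fail this check."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://parkin.ae/pay"))          # genuine domain passes
print(looks_official("https://parkin-ae.example.com"))  # lookalike is rejected
```

A check like this catches the common phishing trick of embedding a trusted name inside an unrelated domain, which is exactly what the advisory warns about.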

Authorities and service providers across the UAE have repeatedly warned that cybercriminals often impersonate trusted organisations in online scams designed to steal sensitive information. Such attacks can lead to identity theft, financial losses, or unauthorised access to personal accounts.

Parkin encouraged residents who receive suspicious communications to report them through official channels so that appropriate action can be taken. The company added that staying vigilant and safeguarding personal data remain essential to preventing online scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Codex Security expands OpenAI’s push into cybersecurity tools

OpenAI has launched Codex Security, an AI-powered application security agent that detects hard-to-find software vulnerabilities and proposes fixes through advanced reasoning. Drawing on detailed context about a system’s architecture, the tool identifies security risks that conventional automation often misses.

The system uses advanced models to analyse repositories, construct project-specific threat models, and prioritise vulnerabilities based on their potential real-world impact. By combining automated validation with system-level context, Codex Security aims to reduce the number of false positives that security teams must review while highlighting high-confidence findings.

Initially developed under the name Aardvark, the tool has been tested in private deployments over the past year. OpenAI said that during this early use it uncovered several critical vulnerabilities, including a cross-tenant authentication flaw and a server-side request forgery issue, allowing internal teams to patch affected systems quickly.

The company says improvements during the beta phase significantly reduced noise in vulnerability reports. In some repositories, unnecessary alerts fell by 84 percent, over-reported severity dropped by more than 90 percent, and false positives declined by more than half.

Codex Security is now rolling out in research preview for ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI also plans to expand access to open-source maintainers through a dedicated programme that offers security scanning and support to help identify and remediate vulnerabilities across widely used projects.

Data breach hits fintech lender Figure exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after attackers stole personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once attackers obtain access to a corporate single sign-on (SSO) system, which lets users log in to multiple internal applications with a single set of credentials, they can move across those platforms, often including services linked to major providers such as Microsoft and Google.
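
The mechanics of single sign-on explain why one phished credential is so damaging. A minimal toy sketch in Python (not any real SSO product; the secret and service names are invented) shows how one signed token is trusted everywhere:

```python
import hashlib
import hmac

# Shared secret held by a hypothetical SSO identity provider.
IDP_SECRET = b"demo-secret"

def issue_token(user: str) -> str:
    """The identity provider signs the username once at login."""
    sig = hmac.new(IDP_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def service_accepts(token: str) -> bool:
    """Every internal service verifies against the same secret, so one
    stolen token (or phished login) opens all of them at once."""
    user, sig = token.split(":")
    expected = hmac.new(IDP_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")
# The same token is accepted by every service that trusts the provider.
print(all(service_accepts(token) for service in ("mail", "files", "hr")))
```

The convenience and the risk are the same property: a single trusted credential, which is why impersonating IT support to capture it is such an effective attack.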

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.

TikTok rejects end-to-end encryption citing safety concerns

TikTok will not adopt end-to-end encryption for direct messages. The company explained that the technology could hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which it believes could leave users less safe online.

Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Meta’s Messenger and Instagram, as well as X, have adopted the technology, saying that protecting private communication is central to user privacy.
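
The end-to-end property can be illustrated with a toy cipher: the platform relaying a message sees only ciphertext, while the shared key never leaves the two endpoints. The sketch below uses a hash-based XOR keystream purely for illustration; it is not real cryptography:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from repeated hashing -- NOT secure, just enough
    to demonstrate the end-to-end property."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

shared_key = b"known only to sender and recipient"
ciphertext = encrypt(shared_key, b"see you at 7")  # all the relay server sees
print(decrypt(shared_key, ciphertext))             # recipient recovers the text
```

TikTok's objection targets exactly this property: because the relay only ever handles ciphertext, neither the platform nor law enforcement can inspect message content in transit.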

The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, earlier this year, US lawmakers ordered the separation of TikTok’s US operations from its global business.

The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok emphasised that this decision was made to enhance user protection, with a particular focus on the safety of younger users, and that it sees monitoring capabilities as crucial for addressing harmful behaviour.

Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.

Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa VĂ©liz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’

Chrome Gemini vulnerability allowed camera and file access

A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to potential camera and microphone activation, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.

Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.

Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.

A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.

Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.

Deutsche Telekom and Nokia advance open and AI-native RAN

Nokia and Deutsche Telekom have expanded their collaboration to advance cloud-based, disaggregated, and AI-native RAN technologies. The strengthened Innovation Cooperation Program deepens joint work in Cloud RAN, open interfaces, and next-generation solutions.

The partnership builds on years of cooperation focused on open and flexible architectures. Both companies said the expanded effort aims to improve network efficiency, programmability, and long-term operational value for service providers.

Work on Open Fronthaul integration is being intensified following earlier multivendor deployments in Germany linking Nokia baseband units with O-RAN-compliant radios. Additional integrations covering Open Fronthaul and Cloud RAN are progressing within confidential development programmes.

The companies are also advancing O-RAN-aligned management capabilities through open O1 interfaces and deeper integration of configuration management. A vendor-independent Service Management and Orchestration platform remains central to Deutsche Telekom’s multivendor RAN strategy.

Nokia will act as Deutsche Telekom’s strategic co-creation partner for AI-native RAN development. Joint efforts will focus on AI-powered receivers, adaptive beamforming, predictive optimisation, and lab and field validation to support intelligent, autonomous mobile networks.

Anthropic’s Claude climbs past ChatGPT in downloads

App Store charts have shifted sharply in the consumer AI market, with Anthropic’s Claude now surpassing ChatGPT in downloads. The change marks one of the most notable ranking reversals in recent months.

The spike in downloads appears tied to public reaction rather than new product features. App rankings often fluctuate, but this shift coincides with growing debate over how AI companies collaborate with governments.

Anthropic has positioned Claude around strict usage policies, including restrictions on domestic surveillance and lethal autonomous weapons. That stance has resonated with users concerned about the ethical deployment of AI technologies.

Claude’s ascent underscores a more competitive chatbot landscape in which transparency and public confidence play an increasingly important role. App rankings are growing more volatile as users prove willing to switch platforms.

AT&T data breach settlement wins preliminary approval in $177 million deal

A federal judge in Texas has preliminarily approved a $177 million settlement resolving claims that AT&T failed to safeguard consumer data in two separate breaches. The company denies wrongdoing but agreed to establish compensation funds covering affected customers nationwide.

The agreement creates two non-reversionary funds: $149 million for individuals whose personal data appeared on the dark web, and $28 million for customers whose call and text logs were accessed. It covers a March 2024 breach and a separate incident between May 2022 and early 2023.

Eligible class members may submit claims for cash payments, with amounts depending on the number of valid submissions, and may also receive up to 24 months of credit monitoring. The deadline to opt out or object is 17 October 2025, with a final approval hearing set for 3 December 2025.

Legal and administrative costs, attorneys’ fees, and service awards will be paid from the settlement funds. The case resolves claims brought on behalf of all living US residents whose data was exposed in the two AT&T breaches.

The settlement follows other recent legal challenges facing AT&T, including class actions filed by New York pensioners alleging the company misled investors about the environmental impact of its lead-sheathed cables.

AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.
