Trust Wallet urges update after $7 million hack

Trust Wallet has urged users to update its Google Chrome extension after a security breach affecting version 2.68 resulted in the theft of roughly $7 million. The company confirmed it will refund all impacted users and advised downloading version 2.69 immediately.

Mobile apps and other versions of the browser extension were unaffected.

Blockchain security firms revealed that malicious code in version 2.68 harvested wallet mnemonic phrases, sending decrypted credentials to an attacker‑controlled server.

Around $3 million in Bitcoin, $431 in Solana, and more than $3 million in Ethereum were stolen and moved through centralised exchanges and cross‑chain bridges for laundering. Hundreds of users were affected.

Analysts suggest the incident may have involved an insider or a nation-state actor exploiting leaked Chrome Web Store API keys.

Trust Wallet has launched a support process for victims and warned against impersonation scams. CEO Eowyn Chen said the malicious extension bypassed the standard release checks and that investigation and remediation are ongoing.

The incident highlights ongoing security risks for browser-based cryptocurrency wallets and the importance of user vigilance, including avoiding unofficial links and never sharing recovery phrases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New directorates signal Türkiye’s push for AI

Türkiye has announced new measures to expand its AI ecosystem and strengthen public-sector adoption of the technology. The changes were published in the Official Gazette, according to Industry and Technology Minister Mehmet Fatih Kacir.

The Ministry’s Directorate General of National Technology has been renamed the Directorate General of National Technology and AI. The unit will oversee policies on data centres, cloud infrastructure, certification standards, and regulatory processes.

The directorate will also coordinate national AI governance, support startups and research, and promote the ethical and reliable use of AI. Its remit includes expanding data capacity, infrastructure, workforce development, and international cooperation.

Separately, a Public AI Directorate General has been established under the Presidency’s Cybersecurity Directorate. The new body will guide the use of AI across government institutions and lead regulatory work on public-sector AI applications.

Officials say the unit will align national legislation with international frameworks and set standards for data governance and shared data infrastructure. The government aims to position Türkiye as a leading country in the development of AI.

Phishing scam targets India’s drivers in large-scale e-Challan cyberattack

Cybercriminals are exploiting trust in India’s traffic enforcement systems by using fake e-Challan portals to steal financial data from vehicle owners. The campaign relies on phishing websites that closely mimic official government platforms.

Researchers at Cyble Research and Intelligence Labs say the operation marks a shift away from malware towards phishing-based deception delivered through web browsers. More than 36 fraudulent websites have been linked to the campaign, which targets users across India through SMS messages.

Victims receive alerts claiming unpaid traffic fines, often accompanied by warnings of licence suspension or legal action. The messages include links directing users to fake portals displaying fabricated violations and small penalty amounts, with no connection to government databases.

The sites restrict payments to credit and debit cards, prompting users to enter full card details. Investigators found that repeated payment attempts allow attackers to collect multiple sets of sensitive information from a single victim.

Researchers say the infrastructure is shared with broader phishing schemes that impersonate courier services, banks, and transportation platforms. Security experts advise users to verify fines only through official websites and to avoid clicking on links in unsolicited messages.

La Poste suffers DDoS attack as Noname057 claims responsibility

Authorities in France are responding to a significant cyber incident after a pro-Russian hacker group, Noname057, claimed responsibility for a distributed denial-of-service attack on the national postal service, La Poste.

The attack began on 22 December and forced core computer systems offline, delaying parcel deliveries during the busy Christmas period.

According to reports, standard letter delivery was not affected. However, postal staff lost the ability to track parcels, and customers experienced disruptions when using online payment services connected to La Banque Postale.

Recovery work was still underway several days later, underscoring the increasing reliance of critical services on uninterrupted digital infrastructure.

Noname057 has previously been linked to cyberattacks across Europe, mainly targeting Ukraine and countries seen as supportive of Kyiv.

Europol led a significant operation against the group earlier in the year, with the US Department of Justice also involved, highlighting growing international coordination against cross-border cybercrime.

The incident has renewed concerns about the vulnerability of essential logistics networks and public-facing services to coordinated cyber disruption. European authorities continue to assess long-term resilience measures to protect citizens and core services from future attacks.

EU targets addictive gaming features

Video gaming has grown from a niche hobby into one of Europe’s most prominent entertainment industries, with over half the population regularly playing.

As the sector grows, EU lawmakers are increasingly worried about addictive game design and manipulative features that push players to spend more time and money online.

Much of the concern focuses on loot boxes, where players pay for random digital rewards that resemble gambling mechanics. Studies and parliamentary reports warn that children may be particularly vulnerable, with some lawmakers calling for outright bans on paid loot boxes and premium in-game currencies.

The European Commission is examining how far design choices contribute to digital addiction and whether games are exploiting behavioural weaknesses rather than offering fair entertainment.

Officials say the risk is higher for minors, who may not fully understand how engagement-driven systems are engineered.

The upcoming Digital Fairness Act aims to strengthen consumer protection across online services, rather than leaving families to navigate the risks alone. However, as negotiations continue, the debate over how tightly gaming should be regulated is only just beginning.

IMF calls for stronger AI regulation in global securities markets

Regulators worldwide are being urged to adopt stronger oversight frameworks for AI in capital markets after an IMF technical note warned that rapid AI adoption could reshape securities trading while increasing systemic risk.

AI brings major efficiency gains to asset management and high-frequency trading, yet opacity, market volatility, cyber threats and model concentration remain significant concerns.

The IMF warns that AI could create powerful data oligopolies where only a few firms can train the strongest models, while autonomous trading agents may unintentionally collude by widening spreads without explicit coordination.

Retail investors also face rising exposure to AI washing, where financial firms exaggerate or misrepresent AI capability, making transparency, accountability and human-in-the-loop review essential safeguards.

Supervisory authorities are encouraged to scale their own AI capacity through SupTech tools for automated surveillance and social-media sentiment monitoring.

The note highlights India as a key case study, given the dominance of algorithmic trading and SEBI’s early reporting requirements for AI and machine learning. The IMF also points to the National Stock Exchange’s use of AI in fraud detection as an emerging-market model for resilient monitoring infrastructure.

The report underlines the need for regulators to prepare for AI-driven market shocks, strengthen governance obligations on regulated entities and build specialist teams capable of understanding model risk instead of reacting only after misconduct or misinformation harms investors.

Agentic AI, digital twins, and intelligent wearables reshape security operations in 2026

Operational success in security technology is increasingly being judged through measurable performance rather than early-stage novelty.

As 2026 approaches, Agentic AI, digital twins and intelligent wearables are moving from research concepts into everyday operational roles, reshaping how security functions are designed and delivered.

Agentic AI is no longer limited to demonstrations. Instead of simple automation, autonomous agents now analyse video feeds, access data and sensor logs to investigate incidents and propose mitigation steps for human approval.

Adoption is accelerating worldwide, particularly in Singapore, where most business leaders already view Agentic AI as essential for maintaining competitiveness. The technology is becoming embedded in workflows rather than used as an experimental add-on.

Digital twins are also reaching maturity. Instead of being static models, they now mirror complex environments such as ports, airports and high-rise estates, allowing organisations to simulate emergencies, plan resource deployment, and optimise systems in real time.

Wearables and AR tools are undergoing a similar shift, acting as intelligent companions that interpret the environment and provide timely guidance, rather than operating as passive recording devices.

The direction of travel is clear. Security work is becoming more predictive, interconnected and immersive.

Organisations most likely to benefit are those that prioritise integration, simulation and augmentation, while measuring outcomes through KPIs such as response speed, false-positive reduction and decision confidence instead of chasing technological novelty.

AI chatbots exploited to create nonconsensual bikini deepfakes

Users of popular AI chatbots are generating bikini deepfakes by manipulating photos of fully clothed women, often without consent. Online discussions show how generative AI tools can be misused to create sexually suggestive deepfakes from ordinary images, raising concerns about image-based abuse.

A now-deleted Reddit thread shared prompts for using Google’s Gemini to alter clothing in photographs. One post asked for a woman’s traditional dress to be changed to a bikini. Reddit removed the content and later banned the subreddit over deepfake-related harassment.

Researchers and digital rights advocates warn that nonconsensual deepfakes remain a persistent form of online harassment. Millions of users have visited AI-powered websites designed to undress people in photos. The trend reflects growing harm enabled by increasingly realistic image generation tools.

Most mainstream AI chatbots prohibit the creation of explicit images and apply safeguards to prevent abuse. However, recent advances in image-editing models have made it easier for users to bypass guardrails using simple prompts, according to limited testing and expert assessments.

Technology companies say their policies ban altering a person’s likeness without consent, with penalties including account suspensions. Legal experts argue that deepfakes involving sexualised imagery represent a core risk of generative AI and that accountability must extend to both users and platforms.

AI-generated Jesuses spark concern over faith and bias

AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.

Several platforms, including Character.AI, Talkie.AI and Text With Jesus, now host simulations claiming to answer questions in the voice of Jesus Christ.

Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.

Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.

Researchers say AI chatbots currently serve as a supplement to, rather than a replacement for, religious teaching.

However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.

Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.

Small businesses battle rising cyber attacks in the US

Many small businesses in the US are facing a sharp rise in cyber attacks, yet large numbers still try to manage the risk on their own.

A recent survey by Guardz found that more than four in ten SMBs have already experienced a cyber incident, while most owners believe the overall threat level is continuing to increase.

Rather than relying on specialist teams, over half of small businesses still leave critical cybersecurity tasks to untrained staff or the owner. Only a minority have a formal incident response plan created with a cybersecurity professional, and more than a quarter do not carry cyber insurance.

Phishing, ransomware and simple employee mistakes remain the most common dangers, with negligence seen as the biggest internal risk.

Recovery times are improving, with most affected firms able to return to normal operations quickly and very few suffering lasting damage.

However, many still fail to conduct routine security assessments, and outdated technology remains a widespread concern. Some SMBs are increasing cybersecurity budgets, yet a significant share still spend very little or do not know how much is being invested.

More small firms are now turning to managed service providers instead of trying to cope alone.

The findings suggest that preparation, professional support and clearly defined response plans can greatly improve resilience, helping organisations reduce disruption and maintain business continuity when an attack occurs.
