Agentic AI, digital twins, and intelligent wearables reshape security operations in 2026

Operational success in security technology is increasingly being judged through measurable performance rather than early-stage novelty.

As 2026 approaches, Agentic AI, digital twins and intelligent wearables are moving from research concepts into everyday operational roles, reshaping how security functions are designed and delivered.

Agentic AI is no longer limited to demonstrations. Rather than performing simple automation, autonomous agents now analyse video feeds and access data and sensor logs to investigate incidents and propose mitigation steps for human approval.

Adoption is accelerating worldwide, particularly in Singapore, where most business leaders already view Agentic AI as essential for maintaining competitiveness. The technology is becoming embedded in workflows rather than used as an experimental add-on.

Digital twins are also reaching maturity. Instead of being static models, they now mirror complex environments such as ports, airports and high-rise estates, allowing organisations to simulate emergencies, plan resource deployment, and optimise systems in real time.

Wearables and AR tools are undergoing a similar shift, acting as intelligent companions that interpret the environment and provide timely guidance, rather than operating as passive recording devices.

The direction of travel is clear. Security work is becoming more predictive, interconnected and immersive.

Organisations most likely to benefit are those that prioritise integration, simulation and augmentation, while measuring outcomes through KPIs such as response speed, false-positive reduction and decision confidence instead of chasing technological novelty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated Jesuses spark concern over faith and bias

AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.

Several platforms, including Character.AI, Talkie.AI and Text With Jesus, now host simulations claiming to answer questions in the voice of Jesus Christ.

Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.

Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.

Researchers say AI chatbots are currently used as a supplement to, rather than a replacement for, religious teaching.

However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.

Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT becomes more customisable for tone and style

OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.

ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows previous adjustments, where OpenAI first dialled back perceived agreeableness, then later increased warmth after users said the system felt overly cold.
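As an illustration only, the short sketch below models how such per-element tone settings might be represented inside an application. The class, field names and enum values are assumptions made for this example, not OpenAI's actual API or configuration format.

```python
# Hypothetical sketch of tone-personalisation settings (not OpenAI's real API).
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    LESS = "less"
    DEFAULT = "default"
    MORE = "more"


@dataclass
class PersonalisationSettings:
    base_style: str = "Default"       # e.g. "Professional", "Candid", "Quirky"
    warmth: Level = Level.DEFAULT     # each element can be More, Less or Default
    enthusiasm: Level = Level.DEFAULT
    emoji_use: Level = Level.DEFAULT


# Example: a professional tone with extra warmth and fewer emoji.
settings = PersonalisationSettings(
    base_style="Professional",
    warmth=Level.MORE,
    emoji_use=Level.LESS,
)
print(settings)
```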

Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.

Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.

The new tone controls feed into broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Small businesses battle rising cyber attacks in the US

Many small businesses in the US are facing a sharp rise in cyber attacks, yet large numbers still try to manage the risk on their own.

A recent survey by Guardz found that more than four in ten SMBs have already experienced a cyber incident, while most owners believe the overall threat level is continuing to increase.

Rather than relying on specialist teams, over half of small businesses still leave critical cybersecurity tasks to untrained staff or the owner. Only a minority have a formal incident response plan created with a cybersecurity professional, and more than a quarter do not carry cyber insurance.

Phishing, ransomware and simple employee mistakes remain the most common dangers, with negligence seen as the biggest internal risk.

Recovery times are improving, with most affected firms able to return to normal operations quickly and very few suffering lasting damage.

However, many still fail to conduct routine security assessments, and outdated technology remains a widespread concern. Some SMBs are increasing cybersecurity budgets, yet a significant share still spend very little or do not know how much is being invested.

More small firms are now turning to managed service providers instead of trying to cope alone.

The findings suggest that preparation, professional support and clearly defined response plans can greatly improve resilience, helping organisations reduce disruption and maintain business continuity when an attack occurs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea fake news law sparks fears for press freedom

A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.

The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.

Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.

Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.

Experts also highlight South Korea's lack of strong safeguards against malicious litigation, in contrast to the US, where plaintiffs must prove fault on the part of journalists.

The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes rely on relaying statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nomani investment scam spreads across social media

The fraudulent investment platform Nomani has surged, spreading from Facebook to YouTube. ESET blocked tens of thousands of malicious links this year, mainly in the Czech Republic, Japan, Slovakia, Spain and Poland.

The scam utilises AI-generated videos, branded posts, and social media advertisements to lure victims into fake investments that promise high returns. Criminals then request extra fees or sensitive personal data, and often attempt a secondary scam posing as Europol or INTERPOL.

Recent improvements make Nomani’s AI videos more realistic, using trending news or public figures to appear credible. Campaigns run briefly and misuse social media forms and surveys to harvest information while avoiding detection.

Despite the scam's overall growth, detections fell 37% in the second half of 2025, suggesting that scammers are adapting to more stringent law enforcement measures. Meta's ad platforms have reportedly earned billions from scam advertising, underscoring the global reach of fraud such as Nomani.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI search services face competition probe in Japan

Japan’s competition authority will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.

The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.

Legal action by Japanese news organisations alleges unauthorised use of articles by AI services. Regulators are assessing whether such practices constitute abuse of market dominance.

The inquiry builds on a 2023 review of news distribution contracts that warned against the use of unfair terms for publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea tightens ID checks with facial verification for phone accounts

Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.

Officials said criminals have been using stolen personal details to set up phone numbers that are later used for scams such as voice phishing rather than for legitimate services.

Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will validate users by matching their faces against biometric data stored in the PASS digital identity app.
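As a rough, hypothetical illustration of how such biometric checks typically work (not the carriers' or the PASS app's actual implementation), verification usually compares an embedding of the live face capture against the enrolled template and accepts the match only above a similarity threshold. The sketch below uses toy embeddings and an assumed threshold.

```python
# Generic face-verification sketch: compare a live capture's embedding with an
# enrolled template using cosine similarity. Threshold and embeddings are
# illustrative assumptions, not the PASS system's real parameters.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def faces_match(live: list[float], enrolled: list[float],
                threshold: float = 0.8) -> bool:
    """Accept the applicant only if the live face embedding is close enough
    to the enrolled biometric template (threshold is an assumed value)."""
    return cosine_similarity(live, enrolled) >= threshold


# Toy example with made-up three-dimensional embeddings.
print(faces_match([0.9, 0.1, 0.4], [0.85, 0.15, 0.38]))  # True
```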

Such a requirement expands the country’s identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.

The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.

SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.

Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Romania investigates large-scale cyber attack on national water body

Authorities in Romania have confirmed a severe ransomware attack on the national water administration ‘Apele Române’, which encrypted around 1,000 IT systems across most regional water basin offices.

Attackers abused Microsoft's BitLocker encryption tool to lock the affected systems and then issued a ransom note demanding contact within seven days, although cybersecurity officials continue to reject any negotiation with criminals.

The disruption affected email systems, databases, servers and workstations, but not operational technology, meaning hydrotechnical structures and critical water management systems continued to function safely.

Staff coordinated activity by radio and telephone, and flood defence operations remained in normal working order while investigations and recovery progressed.

National cyber agencies, including the National Directorate of Cyber Security and the Romanian Intelligence Service’s cyber centre, are now restoring systems and moving to include water infrastructure within the state cyber protection framework.

The case underlines how ransomware groups increasingly target essential utilities rather than only private companies, making resilience and identity controls a strategic priority.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea plans huge fines for major data breaches

Prime Minister Kim Min-seok has called for punitive fines of up to 10 percent of company sales for repeated and serious data breaches, as public anger grows over large-scale leaks.

The government is seeking swift legislation to impose stronger sanctions on firms that fail to safeguard personal data, reflecting President Lee Jae Myung’s stance that violations require firm penalties instead of lenient warnings.

Kim said corporate responses to recent breaches had fallen far short of public expectations and stressed that companies must take full responsibility for protecting customer information.

Under the proposed framework, affected individuals would receive clearer notifications that include guidance on their rights to seek damages.

The government of South Korea also plans to strengthen investigative powers through coercive fines for noncompliance, while pursuing rapid reforms aimed at preventing further harm.

The tougher line follows a series of major incidents, including a leak at Shinhan Card that affected around 190,000 merchant records and a large-scale breach at Coupang that exposed the data of 33.7 million users.

Officials have described the Coupang breach as a serious social crisis that has eroded public trust.

Authorities have launched an interagency task force to identify responsibility and ensure tighter data protection across South Korea’s digital economy instead of relying on voluntary company action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!