OpenAI strengthened ChatGPT Atlas with new protections against prompt injection attacks

Protecting AI agents from manipulation has become a top priority for OpenAI after rolling out a major security upgrade to ChatGPT Atlas.

The browser-based agent now includes stronger safeguards against prompt injection attacks, where hidden instructions inside emails, documents or webpages attempt to redirect the agent’s behaviour instead of following the user’s commands.

Prompt injection poses a unique risk because Atlas can carry out actions that a person would normally perform inside a browser. A malicious email or webpage could attempt to trigger data exposure, unauthorised transactions or file deletion.

Criminals exploit the fact that agents process large volumes of content across an almost unlimited online surface.

OpenAI has developed an automated red-team framework that uses reinforcement learning to simulate sophisticated attackers.

When fresh attack patterns are discovered, the models behind Atlas are retrained so that resistance is built into the agent rather than added afterwards. Monitoring and safety controls are also updated using real attack traces.

These new protections are already live for all Atlas users. OpenAI advises people to limit logged-in access where possible, check confirmation prompts carefully and give agents well-scoped tasks instead of broad instructions.

The company argues that proactive defence is essential as agentic AI becomes more capable and widely deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots struggle with dialect fairness

Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.

A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.

Similar findings emerged elsewhere, including UK council services and AI shopping assistants that struggled with African American English.

Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.

Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.

Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New SIM cards in South Korea now require real-time facial recognition

South Korea has introduced mandatory facial recognition for anyone registering a new SIM card or eSIM, whether in-store or online.

The live scan must match the photo on an official ID so that each phone number can be tied to a verified person instead of relying on paperwork alone.

Existing users are not affected, and the requirement applies only at the moment a number is issued.

The government argues that stricter checks are needed because telecom fraud has become industrialised and relies heavily on illegally registered SIM cards.

Criminal groups have used stolen identity data to obtain large volumes of numbers that can be swapped quickly to avoid detection. Regulators now see SIM issuance as the weakest link and the point where intervention is most effective.

Telecom companies must integrate biometric checks into onboarding, while authorities insist that facial data is used only for real-time verification and not stored. Privacy advocates warn that biometric verification creates new risks because faces cannot be changed if compromised.

They also question whether such a broad rule is proportionate when mobile access is essential for daily life.

The policy places South Korea in a unique position internationally, combining mandatory biometrics with defined legal limits. Its success will be judged on whether fraud meaningfully declines instead of being displaced.

A rule that has become a test case for how far governments should extend biometric identity checks into routine services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Germany considers age limits after Australian social media ban

Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.

Australia’s new rules require companies to remove under 16 user profiles and stop new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming and mental health harm instead of relying only on parental supervision.

The European Commission President said she was inspired by the move, although social media companies and civil liberties groups have criticised it.

Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Korean Air employee data breach exposes 30,000 records after cyberattack

Investigators are examining a major data breach involving Korean Air after personal records for around 30,000 employees were exposed in a cyberattack on a former subsidiary.

An incident that affected KC&D Service, which previously handled in-flight catering before being sold to private equity firm Hahn and Company in 2020.

The leaked information is understood to include employee names and bank account numbers. Korean Air said customer records were not impacted, and emergency security checks were completed instead of waiting for confirmation of the intrusion.

Korean Air also reported the breach to the relevant authorities.

Executives said the company is focusing on identifying the full scope of the breach and who has been affected, while urging KC&D to strengthen controls and prevent any recurrence. Korean Air also plans to upgrade internal data protection measures.

The attack follows a similar case at Asiana Airlines last week, where details of about 10,000 employees were compromised, raising wider concerns over cybersecurity resilience across the aviation sector of South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SK Telecom introduces South Korea’s first hyperscale AI model

The telecommunications firm, SK Telecom, is preparing to unveil A.X K1, Korea’s first hyperscale language model built with 519 billion parameters.

Around 33 billion parameters are activated during inference, so the AI model can keep strong performance instead of demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At such a scale, the model can operate as a teacher system that transfers knowledge to smaller, domain-specific tools that might directly improve daily services and industrial processes.

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support the wider AI ecosystem of South Korea, and SK Telecom plans to open-source A.X K1 along with an API to help local developers create new AI agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

La Poste suffers DDoS attack as Noname057 claims responsibility

Authorities in France are responding to a significant cyber incident after a pro-Russian hacker group, Noname057, claimed responsibility for a distributed denial-of-service attack on the national postal service, La Poste.

The attack began on 22 December and forced core computer systems offline, delaying parcel deliveries during the busy Christmas period instead of allowing normal operations to continue.

According to reports, standard letter delivery was not affected. However, postal staff lost the ability to track parcels, and customers experienced disruptions when using online payment services connected to La Banque Postale.

Recovery work was still underway several days later, underscoring the increasing reliance of critical services on uninterrupted digital infrastructure.

Noname057 has previously been linked to cyberattacks across Europe, mainly targeting Ukraine and countries seen as supportive of Kyiv instead of neutral states.

Europol led a significant operation against the group earlier in the year, with the US Department of Justice also involved, highlighting growing international coordination against cross-border cybercrime.

The incident has renewed concerns about the vulnerability of essential logistics networks and public-facing services to coordinated cyber disruption. European authorities continue to assess long-term resilience measures to protect citizens and core services from future attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan climbs global AI readiness ranking

Kazakhstan has risen to 60th place out of 195 countries in the 2025 Government AI Readiness Index, marking a 16-place improvement and highlighting a year of accelerated institutional and policy development.

The ranking, compiled by Oxford Insights, measures governments’ ability to adopt and manage AI across public administration, the economy, and social systems.

At a regional level, Kazakhstan now leads Central Asia in AI readiness. A strong performance in the Public Sector Adoption pillar, with a score of 73.59, reflects the widespread use of digital services, e-government platforms, and a shift toward data-led public service delivery.

The country’s advanced digital infrastructure, high internet penetration, and mature electronic government ecosystem provide a solid foundation for scaling AI nationwide.

Political and governance initiatives have further strengthened Kazakhstan’s position. In 2025, the government enacted its first comprehensive AI law, which covers ethics, safety, and digital innovation.

At the same time, the Ministry of Digital Development, Innovation and Aerospace Industry was restructured into a dedicated Ministry of Artificial Intelligence and Digital Development, signalling the government’s commitment to making AI a central policy priority.

Kazakhstan’s progress demonstrates how a focused policy, infrastructure, and institutional approach can enhance AI readiness, enabling the responsible and effective integration of AI across public and economic sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Agentic AI, digital twins, and intelligent wearables reshape security operations in 2026

Operational success in security technology is increasingly being judged through measurable performance rather than early-stage novelty.

As 2026 approaches, Agentic AI, digital twins and intelligent wearables are moving from research concepts into everyday operational roles, reshaping how security functions are designed and delivered.

Agentic AI is no longer limited to demonstrations. Instead of simple automation, autonomous agents now analyse video feeds, access data and sensor logs to investigate incidents and propose mitigation steps for human approval.

Adoption is accelerating worldwide, particularly in Singapore, where most business leaders already view Agentic AI as essential for maintaining competitiveness. The technology is becoming embedded in workflows rather than used as an experimental add-on.

Digital twins are also reaching maturity. Instead of being static models, they now mirror complex environments such as ports, airports and high-rise estates, allowing organisations to simulate emergencies, plan resource deployment, and optimise systems in real time.

Wearables and AR tools are undergoing a similar shift, acting as intelligent companions that interpret the environment and provide timely guidance, rather than operating as passive recording devices.

The direction of travel is clear. Security work is becoming more predictive, interconnected and immersive.

Organisations most likely to benefit are those that prioritise integration, simulation and augmentation, while measuring outcomes through KPIs such as response speed, false-positive reduction and decision confidence instead of chasing technological novelty.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

ChatGPT may move beyond GPTs as OpenAI develops new Skills feature

OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.

Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.

The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.

Support for running executable code is also expected, providing the model with stronger reliability for logic-driven work, rather than relying entirely on generated text.

Industry observers note similarities to Anthropic’s Claude, which already benefits from a skill-like structure. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion from existing GPTs.

Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!