New alliance between Samsung and SK Telecom accelerates 6G innovation

Samsung Electronics and SK Telecom have taken a significant step toward shaping next-generation connectivity after signing an agreement to develop essential 6G technologies.

Their partnership centres on AI-based radio access networks, with both companies aiming to secure an early lead as global competition intensifies.

Research teams from Samsung and SK Telecom will build and test key components, including AI-based channel estimation, distributed MIMO and AI-driven schedulers.

Rather than relying on conventional estimation methods, AI models will refine signals in real time to improve accuracy. Meanwhile, distributed MIMO will enable multiple antennas to cooperate for reliable, high-speed communication across diverse environments.
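
To make 'refining signals' concrete, the toy sketch below contrasts a conventional per-pilot least-squares channel estimate with a refined estimate. This is purely illustrative and is not Samsung or SK Telecom's actual technique; the averaging step stands in for a learned denoiser, which would exploit structure across time and frequency instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flat-fading channel: received = h * pilot + noise, with known pilot symbols.
h_true = 0.8 + 0.6j                                  # unknown channel coefficient
pilots = rng.choice(np.array([1 + 0j, -1 + 0j]), size=64)   # known transmitted pilots
noise = 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
received = h_true * pilots + noise

# Conventional least-squares estimate: one noisy estimate per pilot symbol.
h_ls = received / pilots

# Stand-in for an AI refiner: simple averaging of the raw estimates.
h_refined = h_ls.mean()

err_ls = np.abs(h_ls - h_true).mean()
err_refined = abs(h_refined - h_true)
print(err_refined < err_ls)  # refinement reduces estimation error
```

The point of the sketch is the gap between the raw per-symbol estimates and the refined one: any model that pools information across pilots beats symbol-by-symbol estimation, which is the opening an AI-based estimator exploits.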

The companies believe that AI-enabled schedulers and core networks will manage data flows more efficiently as the number of devices continues to rise.

Their collaboration also extends into the AI-RAN Alliance, where a jointly proposed channel estimation technology has already been accepted as a formal work item, strengthening their shared role in shaping industry standards.

Samsung continues to promote 6G research through its Advanced Communications Research Centre, and recent demonstrations at major industry events highlight the growing momentum behind AI-RAN technology.

Both organisations expect their work to accelerate the transition toward a hyperconnected 6G future, rather than allowing competing ecosystems to dominate early development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists reports receiving online death threats, highlighting the urgent scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Warner Music partners with AI song generator Suno

A landmark agreement has been reached between Warner Music and AI music platform Suno, ending last year’s copyright lawsuit that accused the service of using artists’ work without permission.

Fans can now generate AI-created songs using the voices, names, and likenesses of Warner artists who opt in, offering a new way to engage with music.

The partnership will introduce new licensed AI models with download limits and paid tiers, intended to prevent a flood of AI tracks on streaming platforms.

Suno has also acquired the live-music discovery platform Songkick, expanding its digital footprint and strengthening connections between AI music and live events.

Music industry experts say the deal demonstrates how AI innovation can coexist with artists’ rights, as the UK government continues consultations on intellectual property for AI.

Creators and policymakers are advocating opt-in frameworks to ensure artists are fairly compensated when their works are used to train AI models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models. Many are trained primarily on language patterns, resulting in inconsistent decisions even when they assess images produced by the same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use by US immigration agents sparks concern

A US federal judge has condemned immigration agents in Chicago for using AI to draft use-of-force reports, warning that the practice undermines credibility. Judge Sara Ellis noted that one agent fed a short description and images into ChatGPT before submitting the report.

Body camera footage cited in the ruling showed discrepancies between events recorded and the written narrative. Experts say AI-generated accounts risk inaccuracies in situations where courts rely on an officer’s personal recollection to assess reasonableness.

Researchers argue that poorly supervised AI use could erode public trust and compromise privacy. Some warn that uploading images into public tools relinquishes control of sensitive material, exposing it to misuse.

Police departments across the US are still developing policies for safe deployment of generative tools. Several states now require officers to label AI-assisted reports, while specialists call for stronger guardrails before the technology is applied in high-stakes legal settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Up to 3 million UK jobs at risk from automation by 2035

A new report from NFER warns that up to 3 million low-skilled jobs in the UK could disappear by 2035 due to the growing adoption of automation and AI. Sectors most at risk include trades, machine operations and administrative work, where routine and repetitive tasks dominate.

Economic forecasts remain mixed. The overall UK labour market is expected to grow by 2.3 million jobs by 2035, with gains primarily in professional and managerial roles. Even so, many displaced workers may struggle to find new employment, widening inequality.

These findings contrast with earlier predictions that AI would chiefly threaten higher-skilled jobs such as consultancy or software engineering. Current evidence suggests that manual and lower-skill roles face the most significant short-term disruption from AI.

Policymakers and educators are encouraged to build extensive retraining programmes and foster skills like creativity, communication and digital literacy. Without such efforts, long-term unemployment could become a significant challenge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot will be removed from WhatsApp on 15 January 2026

Microsoft will withdraw Copilot from WhatsApp as of 15 January 2026, following the implementation of new platform rules that ban all LLM chatbots.

The service helped millions of users interact with their AI companion inside an everyday messaging environment, yet the updated policy leaves no option for continued support.

Copilot access will continue on the mobile app, the web portal and Windows, offering fuller functionality instead of the limited experience available on WhatsApp.

Users are encouraged to rely on these platforms for ongoing features such as Copilot Voice, Vision and Mico, which expand everyday use across a broader set of tasks.

Chat history cannot be transferred because WhatsApp operated the service without authentication; therefore, users must manually export their conversations before the deadline. Copilot remains free across supported platforms, although some advanced features require a subscription.

Microsoft is working to ensure a smooth transition and stresses that users can expect a more capable experience after leaving WhatsApp, as development resources now focus on its dedicated environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI transforms enterprise workflows in 2026

Enterprise AI entered a new phase as organisations transitioned from simple, prompt-driven tools to autonomous agents capable of acting within complex workflows.

Leaders now face a reality where agentic systems can accelerate development, improve decision-making, and support employees, yet concerns over unreliable data and inconsistent behaviour still weaken trust.

AI adoption has risen sharply, although many remain cautious about committing fully without stronger safeguards in place.

The next stage will rely on multi-agent models where an orchestrator coordinates specialised agents across departments. Single agents will lose effectiveness if they fail to offer scalable value, as enterprises require communication protocols, unified context, and robust governance.

Agents will increasingly pursue outcomes rather than follow instructions. At the same time, event-driven automation will allow them to detect problems, initiate analysis, and collaborate with other agents without waiting for human prompts. Simulation environments will further accelerate learning and strengthen reliability.
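
The event-driven pattern described above can be sketched in a few lines. This is a hypothetical minimal orchestrator, not any vendor's product: a monitoring agent detects a problem, emits a follow-up event, and an analysis agent picks it up, all without a human prompt. Agent names and event fields are illustrative.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    """Minimal event-driven orchestrator: routes events to specialised agents."""
    handlers: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def register(self, event_type: str, agent: Callable[[dict], dict]) -> None:
        self.handlers.setdefault(event_type, []).append(agent)

    def emit(self, event: dict) -> None:
        # Agents react to events and may emit follow-up events of their own.
        for agent in self.handlers.get(event["type"], []):
            result = agent(event)
            self.log.append(result)
            if result.get("type"):          # agent requested a follow-up step
                self.emit(result)

def monitor_agent(event: dict) -> dict:
    # Detects a problem and hands off to the analysis agent via a new event.
    return {"type": "analyse", "metric": event["metric"], "note": "latency spike detected"}

def analysis_agent(event: dict) -> dict:
    # Produces a finding; no further events, so the chain stops here.
    return {"finding": f"investigated {event['metric']}"}

orch = Orchestrator()
orch.register("alert", monitor_agent)
orch.register("analyse", analysis_agent)
orch.emit({"type": "alert", "metric": "p99_latency"})
print(len(orch.log))  # one hand-off plus one finding
```

A production multi-agent system would add the communication protocols, shared context, and governance the article mentions; the skeleton only shows why an orchestrator, rather than a single agent, is the unit that scales.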

Trusted AI will become a defining competitive factor. Brands will be judged by the quality, personalisation, and relational intelligence of their agents rather than traditional identity markers.

Effective interfaces, transparent governance, and clear metrics for agent adherence will shape customer loyalty and shareholder confidence.

Cybersecurity will shift toward autonomous, self-healing digital immune systems, while advances in spatially aware AI will accelerate robotics and immersive simulations across various industries.

Broader impacts will reshape workplace culture. AI-native engineers will shorten development cycles, while non-technical employees will create personal applications, rather than relying solely on central teams.

Ambient intelligence may push new hardware into the mainstream, and sustainability debates will increasingly focus on water usage in data-intensive AI systems. Governments are preparing to upskill public workforces, and consumer agents will pressure companies to offer better value.

Long-term success will depend on raising AI literacy and selecting platforms designed for scalable, integrated, and agentic operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI may reshape weather and climate modelling

The UK’s Met Office has laid out a strategic plan for integrating AI, specifically machine learning (ML), with traditional physics-based climate and weather models. The aim is to deliver what it calls an ‘optimal blend’ of AI-driven and physics-based forecasting.

To clarify what that blend might look like, the Met Office has defined five distinct approaches. One is the familiar independent physics-based model, which uses physical laws to simulate atmospheric dynamics, trusted but computationally intensive.

At the other end is an independent ML-based model that learns patterns entirely from data, offering far greater speed and scalability.

Between these extremes lie two ‘hybrid’ approaches: hybrid-integrated ML, where ML replaces or enhances parts of the physics model, and hybrid-composite ML, where ML and physics models run separately and feed into each other.

A fifth option is augmented ML, where ML is applied after the model has run to improve its output (for example, downscaling or refining ensemble forecasts).
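
As a rough illustration of the augmented-ML approach, a learned post-processor can be as simple as a regression fitted between past model output and observations, then applied to new forecasts. This toy example is an assumption for illustration, not the Met Office's method; the synthetic 'warm bias' and the linear correction are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical history: observations vs. a physics model that runs ~2 degrees warm.
truth = 15 + 5 * np.sin(np.linspace(0, 6, 200))               # observed temperatures
raw_forecast = truth + 2.0 + 0.5 * rng.standard_normal(200)   # biased, noisy model output

# Simplest possible "augmented ML": fit a linear map from forecast to observation,
# then correct the finished model output after it has run.
slope, intercept = np.polyfit(raw_forecast, truth, deg=1)
corrected = slope * raw_forecast + intercept

def rmse(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean((x - truth) ** 2)))

print(rmse(corrected) < rmse(raw_forecast))  # post-processing cuts forecast error
```

Real augmented-ML systems use far richer models for downscaling and ensemble refinement, but the structure is the same: the physics model stays untouched, and learning happens only on its output.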

However, this framework is more than a technical taxonomy; it provides a shared language for scientists, policymakers, and clients to understand how AI and traditional modelling can coexist.

It also helps guide future decisions, for example, allowing gradual adoption of ML in places where it makes sense, while preserving the robustness of well-understood physics methods in critical areas.

The move comes as ML-based weather and climate tools have shown increasing promise. For instance, in 2025, the Met Office published research showing a purely ML-based model achieved seasonal forecasting skill comparable to conventional physics-based methods, but with far lower computing demands.

For digital-policy watchers and climate analysts alike, this signals a shift: forecasting may become more dynamic, scalable and accessible, especially valuable in a changing climate where speed, resolution and adaptability matter as much as theoretical accuracy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake and AI fraud surges despite stable identity-fraud rates

According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined modestly, from 2.6% in 2024 to 2.2% this year; however, the nature of the threat is changing rapidly.

Fraudsters are increasingly using generative AI and deepfakes to launch what Sumsub calls ‘sophisticated fraud’, attacks that combine synthetic identities, social engineering, device tampering and cross-channel manipulation. These are not mass spam scams: they are targeted, high-impact operations that are far harder to detect and mitigate.

The report reveals a marked increase in deepfake-related schemes, including synthetic-identity fraud (entirely fabricated, AI-generated identities) and biometric forgeries designed to bypass identity verification processes. Deepfake-fraud and synthetic-identity attacks now represent a growing share of first-party fraud cases (where the verified ‘user’ is actually the fraudster).

Meanwhile, high-risk sectors such as dating apps, cryptocurrency exchanges and financial services are being hit especially hard. In 2025, romance-style scams involving AI personas and deepfakes accounted for a notable share of fraud cases. Banks, digital-first lenders and crypto platforms report rising numbers of impostor accounts and fraudulent onboarding attempts.

This trend reveals a significant disparity: although headline fraud rates have decreased slightly, each successful AI-powered fraud attempt now tends to be far more damaging, both financially and reputationally. As Sumsub warned, the ‘sophistication shift’ in digital identity fraud means that organisations and users must rethink security assumptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!