AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.
According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.
Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.
Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.
IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.
For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.
At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.
She argued that without clear guardrails and inclusive labour strategies, technological acceleration could deepen inequality rather than support broad-based economic resilience.
Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.
The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.
In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.
The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.
The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.
Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.
Scrutiny has increased following the decision by Setapp Mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.
Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.
Researchers and free-speech advocates are warning that coordinated swarms of AI agents could soon be deployed to manipulate public opinion at a scale capable of undermining democratic systems.
According to a consortium of academics from leading universities, advances in generative and agentic AI now enable large numbers of human-like bots to infiltrate online communities and autonomously simulate organic political discourse.
Unlike earlier forms of automated misinformation, AI swarms are designed to adapt to social dynamics, learn community norms and exchange information in pursuit of a shared objective.
By mimicking human behaviour and spreading tailored narratives gradually, such systems could fabricate consensus, amplify doubt around electoral processes and normalise anti-democratic outcomes without triggering immediate detection.
Evidence of early influence operations has already emerged in recent elections across Asia, where AI-driven accounts have engaged users with large volumes of unverifiable information rather than overt propaganda.
Researchers warn that information overload, strategic neutrality and algorithmic amplification may prove more effective than traditional disinformation campaigns.
The authors argue that democratic resilience now depends on global coordination, combining technical safeguards such as watermarking and detection tools with stronger governance of political AI use.
Without collective action, they caution that AI-enabled manipulation risks outpacing existing regulatory and institutional defences.
South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.
The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.
The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.
Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.
Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.
Several governments, including those in the US, Europe and Canada, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.
In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.
French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.
Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that explicit rules are in place before the new school year begins.
Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.
He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.
Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.
Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.
The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.
The French government argues that decisive national action is necessary instead of waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.
Generative phishing techniques are becoming harder to detect as attackers use subtle visual tricks in web addresses to impersonate trusted brands. A new campaign reported by Cybersecurity News shows how simple character swaps create fake websites that closely resemble real ones on mobile browsers.
The phishing attacks rely on a homoglyph technique where the letters ‘r’ and ‘n’ are placed together to mimic the appearance of an ‘m’ in a domain name. On smaller screens, the difference is difficult to spot, allowing phishing pages to appear almost identical to real Microsoft or Marriott login sites.
Cybersecurity researchers observed domains such as rnicrosoft.com being used to send fake security alerts and invoice notifications designed to lure victims into entering credentials. Once compromised, accounts can be hijacked for financial fraud, data theft, or wider access to corporate systems.
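To make the trick concrete, here is a minimal Python sketch of how a defender might flag ‘rn’-for-‘m’ lookalike domains against a small watchlist. The PROTECTED_BRANDS list and function names are illustrative assumptions, not details from the reported campaign; a production system would use a curated brand inventory and fuller confusable-character tables.

```python
# Minimal sketch: flag domains abusing the "rn" -> "m" homoglyph trick.
# The watchlist below is hypothetical, for illustration only.

PROTECTED_BRANDS = {"microsoft.com", "marriott.com"}  # assumed watchlist

def collapse_homoglyph(domain: str) -> str:
    """Rewrite the 'rn' pair as 'm' so lookalikes match their target."""
    return domain.lower().replace("rn", "m")

def is_lookalike(domain: str) -> bool:
    """True if the domain imitates a watchlisted brand via the rn/m swap."""
    d = domain.lower()
    return d not in PROTECTED_BRANDS and collapse_homoglyph(d) in PROTECTED_BRANDS

print(is_lookalike("rnicrosoft.com"))  # True: renders like microsoft.com
print(is_lookalike("microsoft.com"))   # False: the genuine domain
```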
Experts warn that mobile browsing increases the risk, as users are less likely to inspect complete URLs before logging in. Directly accessing official apps or typing website addresses manually remains the safest way to avoid falling into these traps.
Security specialists also continue to recommend passkeys, strong and unique passwords, and multi-factor authentication across all major accounts, as well as heightened awareness of domains that visually resemble familiar brands through character substitution.
Microsoft confirmed a service disruption affecting Outlook and Microsoft 365 users in the US, with problems first reported on Wednesday afternoon. The outage primarily affected business and enterprise customers nationwide.
Users reported difficulties sending and receiving email, alongside problems accessing services such as Teams, SharePoint and OneDrive. Microsoft said part of its North America infrastructure was failing to process traffic correctly.
Engineers began rebalancing traffic and restoring affected systems to stabilise services. Microsoft said recovery was under way, though full resolution would take additional time.
The incident highlights how heavily organisations rely on cloud-based productivity tools. Businesses across the US experienced disruptions extending into the evening as work and communication systems remained unstable.
A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver malware via DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy to bypass conventional security controls.
Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When opened, a malicious DLL is sideloaded into a trusted PDF reader, triggering memory-resident malware that establishes encrypted command-and-control channels.
Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.
The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.
Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
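The advice to monitor DLL loading behaviour can be illustrated with a small defensive sketch. The Python snippet below, assuming a Windows host with the psutil package installed, lists DLLs that a named process has mapped from outside common trusted directories; the prefix list and the example process name are assumptions for illustration, not indicators from the reported campaign.

```python
# Defensive sketch (Windows, psutil required): flag DLLs a process has
# mapped from outside common trusted directories. DLL sideloading usually
# plants a malicious DLL beside a trusted executable, so modules loading
# from user-writable paths deserve scrutiny. Prefixes are illustrative.
import psutil

TRUSTED_PREFIXES = (
    r"c:\windows\system32",
    r"c:\windows\syswow64",
    r"c:\program files",
)

def suspicious_dlls(process_name: str) -> list[tuple[int, str]]:
    """Return (pid, dll_path) pairs for DLLs mapped from untrusted paths."""
    findings = []
    for proc in psutil.process_iter(["pid", "name"]):
        if (proc.info["name"] or "").lower() != process_name.lower():
            continue
        try:
            for mmap in proc.memory_maps():
                path = mmap.path.lower()
                if path.endswith(".dll") and not path.startswith(TRUSTED_PREFIXES):
                    findings.append((proc.info["pid"], mmap.path))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
    return findings

# Example: audit a PDF reader's loaded modules (process name is assumed).
for pid, dll in suspicious_dlls("AcroRd32.exe"):
    print(f"PID {pid}: unexpected DLL {dll}")
```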
A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.
The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to block so-called undressing edits.
Researchers based their estimates on a random sample of 20,000 images, extrapolating the rates they observed across the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing to be under 18.
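The arithmetic behind such an extrapolation is straightforward. The sketch below reproduces the headline figures under a simple-random-sampling assumption; the per-sample tallies are back-calculated for illustration and were not published in this summary.

```python
# Back-of-envelope extrapolation sketch. The sample tallies below are
# hypothetical values chosen to reproduce the reported headline figures.
TOTAL_IMAGES = 4_600_000   # images generated during the study period
SAMPLE_SIZE = 20_000       # randomly sampled images that were reviewed

sexualised_in_sample = 13_043   # assumed: roughly 65% of the sample
minors_in_sample = 100          # assumed: roughly 0.5% of the sample

def extrapolate(sample_count: int) -> int:
    """Scale a sample tally up to a full-population estimate."""
    return round(sample_count / SAMPLE_SIZE * TOTAL_IMAGES)

print(extrapolate(sexualised_in_sample))  # ~3,000,000 sexualised images
print(extrapolate(minors_in_sample))      # ~23,000 apparently depicting minors
```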
Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.
Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.
Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.
The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.
European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.