AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.
According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.
Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.
Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.
IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.
For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.
At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.
Without clear guardrails and inclusive labour strategies, she argued, technological acceleration could deepen inequality rather than support broad-based economic resilience.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.
A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.
Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.
During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.
Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.
Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.
The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.
In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.
The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.
The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.
Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.
Scrutiny has increased following the decision by Setapp Mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.
Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.
South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.
The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.
The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.
Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.
Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.
Several governments, including the US, Canada and several European states, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.
In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.
French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.
Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that explicit rules are in place before the new school year begins.
Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.
He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.
Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.
Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.
The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.
The French government argues that decisive national action is needed rather than waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.
Generative phishing techniques are becoming harder to detect as attackers use subtle visual tricks in web addresses to impersonate trusted brands. A new campaign reported by Cybersecurity News shows how simple character swaps create fake websites that closely resemble real ones on mobile browsers.
The phishing attacks rely on a homoglyph technique where the letters ‘r’ and ‘n’ are placed together to mimic the appearance of an ‘m’ in a domain name. On smaller screens, the difference is difficult to spot, allowing phishing pages to appear almost identical to real Microsoft or Marriott login sites.
Cybersecurity researchers observed domains such as rnicrosoft.com being used to send fake security alerts and invoice notifications designed to lure victims into entering credentials. Once compromised, accounts can be hijacked for financial fraud, data theft, or wider access to corporate systems.
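The 'rn'-for-'m' trick can be caught programmatically by normalising look-alike character sequences before comparing a domain against trusted brands. The sketch below is illustrative only: the substitution table and allow-list are small, hand-picked assumptions, whereas production defences draw on much larger confusable-character sets.

```python
# Minimal homoglyph-aware domain check. The substitution table and brand
# allow-list below are illustrative assumptions, not an exhaustive defence.
SUBSTITUTIONS = {
    "rn": "m",  # 'r' + 'n' rendered tightly resembles 'm' on small screens
    "vv": "w",
    "0": "o",
    "1": "l",
}

TRUSTED_BRANDS = {"microsoft.com", "marriott.com"}  # illustrative allow-list


def normalise(domain: str) -> str:
    """Collapse common look-alike sequences, e.g. 'rnicrosoft.com' -> 'microsoft.com'."""
    d = domain.lower()
    for fake, real in SUBSTITUTIONS.items():
        d = d.replace(fake, real)
    return d


def looks_like_impersonation(domain: str) -> bool:
    """Flag domains that normalise to a trusted brand but are not that brand."""
    return domain.lower() not in TRUSTED_BRANDS and normalise(domain) in TRUSTED_BRANDS
```

In practice, this style of check is one layer among many: real tooling also inspects Unicode confusables, newly registered domains and certificate metadata.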
Experts warn that mobile browsing increases the risk, as users are less likely to inspect complete URLs before logging in. Directly accessing official apps or typing website addresses manually remains the safest way to avoid falling into these traps.
Security specialists also continue to recommend passkeys; strong, unique passwords; and multi-factor authentication across all major accounts, as well as heightened awareness of domains that visually resemble familiar brands through character substitution.
A consortium of 10 European banks has established a new company, Qivalis, to develop and issue a euro-pegged stablecoin, targeting a launch in the second half of 2026, subject to regulatory approval.
The initiative seeks to offer a European alternative to US dollar-dominated digital payment systems and strengthen the region’s strategic autonomy in digital finance.
The participating banks include BNP Paribas, ING, UniCredit, KBC, Danske Bank, SEB, CaixaBank, DekaBank, Banca Sella, and Raiffeisen Bank International, with BNP Paribas joining after the initial announcement.
Former Coinbase Germany chief executive Jan-Oliver Sell will lead Qivalis as CEO, while former NatWest chair Howard Davies has been appointed chair. The Amsterdam-based company plans to build a workforce of up to 50 employees over the next two years.
Initial use cases will focus on crypto trading, enabling fast, low-cost payments and settlements, with broader applications planned later. The project comes as the stablecoin market expands rapidly, dominated by dollar-backed tokens, while the scarcity of euro-denominated alternatives has drawn regulatory interest and ECB engagement.
Oklahoma lawmakers have introduced Senate Bill 2064, proposing a legal framework that allows businesses, state employees, and residents to receive payments in Bitcoin without designating it as legal tender.
The bill recognises Bitcoin as a financial instrument, aligning with constitutional limits while enabling its voluntary use across payroll, procurement, and private transactions.
Under the proposal, state employees could opt to receive wages in Bitcoin, US dollars, or a combination of both at the start of each pay period. Payments would be settled at prevailing market rates and deposited into either self-hosted wallets or approved custodial accounts.
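The split-settlement mechanics can be illustrated with a short sketch. The wage, election percentage and spot rate below are invented for the example, not figures from the bill:

```python
# Hypothetical illustration of a paycheck split settled at a spot rate.
# All numbers are invented for the example; the bill specifies no rates.
def split_paycheck(gross_usd: float, btc_share: float, btc_usd_rate: float):
    """Return (usd_portion, btc_amount) for an employee electing btc_share in Bitcoin."""
    usd_portion = gross_usd * (1 - btc_share)
    btc_amount = (gross_usd * btc_share) / btc_usd_rate
    return round(usd_portion, 2), round(btc_amount, 8)


# e.g. a $4,000 paycheck with 25% elected in BTC at a $100,000/BTC spot rate
print(split_paycheck(4000, 0.25, 100_000))  # → (3000.0, 0.01)
```

The key point the example captures is that the employee elects a share, not a fixed coin amount; the Bitcoin quantity delivered varies with the prevailing market rate at each pay period.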
Vendors contracting with the state could also choose Bitcoin on a per-transaction basis, while crypto-native firms would benefit from reduced regulatory friction.
The legislation instructs the State Treasurer to appoint a payment processor and develop operational rules, with contracts targeted for completion by early 2027.
If approved, the framework would take effect in November 2026, positioning Oklahoma among a small group of US states exploring direct Bitcoin integration into public finance, alongside initiatives already launched in Texas and New Hampshire.
Indonesia is promoting blended finance as a key mechanism to meet the growing investment needs of AI and digital infrastructure. By combining public and private funding, the government aims to accelerate the development of scalable digital systems while aligning investments with sustainability goals and local capacity-building.
The rapid global expansion of AI is driving a sharp rise in demand for computing power and data centres. The government views this trend as both a strategic economic opportunity and a challenge that requires sound financial governance and well-designed policies to ensure long-term national benefits.
International financial institutions and global investors are increasingly supportive of public–private financing models. Such partnerships are seen as essential for mobilising large-scale, long-term capital and supporting the sustainable development of AI-related infrastructure in developing economies.
To attract sustained investment, the government is improving the overall investment climate through regulatory simplification, licensing reforms, integration of the Online Single Submission system, and incentives such as tax allowances and tax holidays. These measures are intended to support advanced technology sectors that require significant and continuous capital outlays.
A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.
The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.
Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.
At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.
As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.