South Korea launches labour–government body to address AI automation pressures

A new consultative body has been established in South Korea to manage growing anxiety over AI and rapid industrial change.

The Ministry of Employment and Labour joined forces with the Korean Confederation of Trade Unions to create a regular channel for negotiating how workplaces should adapt as robots and AI systems become more widespread across key industries.

The two sides will meet monthly to seek agreement on major labour issues. The union argued for a human-centred transition instead of a purely technological one, urging the government to strengthen protections for workers affected by restructuring and AI-powered production methods.

Officials in South Korea responded by promising that policy decisions will reflect direct input gathered from employees on the ground.

Concerns heightened after Hyundai Motor confirmed plans to mass-produce Atlas humanoid robots by 2028 and introduce them across its assembly lines. The project forms part of the company’s ambition to build a ‘physical AI’ future where machines perform risky or repetitive tasks in place of humans.

The debate intensified as new labour statistics showed a sharp decline in employment within professional, scientific and technical services, where AI deployment is suspected of reducing demand for new hires.

KCTU warned that industrial transformation could widen inequality unless government policy prioritises people over profit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

AI adoption leaves workers exhausted as a new study reveals rising workloads

Researchers from UC Berkeley’s Haas School of Business examined how AI shapes working habits inside a mid-sized technology firm, and the outcome raised concerns about employee well-being.

Workers embraced AI voluntarily because the tools promised faster results, but the speed gains translated into heavier workloads rather than lighter schedules. Over time, staff absorbed extra tasks and pushed themselves beyond sustainable limits, creating a form of workload creep that drained energy and reduced job satisfaction.

Once the novelty faded, employees noticed that AI had quietly intensified expectations. Engineers reported spending more time correcting AI-generated material passed on by colleagues, while many workers handled several tasks at once by combining manual effort with multiple automated agents.

Constant task-switching left workers with a persistent sense of juggling responsibilities, which lowered the quality of their focus.

The researchers also found that AI crept into personal time, with workers prompting tools during breaks, meetings, or moments intended for rest.

As a result, the boundaries between professional and private time weakened, leaving many employees feeling less refreshed and more pressured to keep up with accelerating workflows.

The study argues that AI increased the density of work rather than reducing it, undermining promises that automation would ease daily routines.

Evidence from other institutions reinforces the pattern, with many firms reporting little or no productivity improvement from AI. Researchers recommend clearer company-level AI guidelines to prevent overuse and protect staff from escalating workloads driven by automation.

Facebook boosts user creativity with new Meta AI animations

Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.

Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.

The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.

Facebook will highlight Memories that work well with the Restyle function to encourage wider use.

Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.

Seasonal styles will arrive throughout the year to support festive posts and major events.

Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.

Olympic ice dancers performing to AI-generated music spark controversy

The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, one Czech sibling duo used a hybrid soundtrack blending AC/DC with an AI-generated piece.

Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.

The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.

Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.

The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.
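In practice, a platform's moderation pipeline would need to track each flag against a hard deadline. As a minimal sketch of that mechanism (the helper names and the flag timestamp are invented for illustration; the rules themselves only specify the three-hour window):

```python
from datetime import datetime, timedelta, timezone

# The Indian rules give platforms three hours from an official flag
# (government notification or court order) to remove the post.
REMOVAL_WINDOW = timedelta(hours=3)

def removal_deadline(flagged_at: datetime) -> datetime:
    """Latest time by which a flagged synthetic post must be taken down."""
    return flagged_at + REMOVAL_WINDOW

def is_overdue(flagged_at: datetime, now: datetime) -> bool:
    """True once the three-hour window has elapsed without removal."""
    return now > removal_deadline(flagged_at)

# Example: a post flagged at noon UTC must be gone by 15:00 UTC.
flag = datetime(2025, 1, 10, 12, 0, tzinfo=timezone.utc)
print(removal_deadline(flag).isoformat())  # 2025-01-10T15:00:00+00:00
```

Timezone-aware timestamps matter here: a deadline computed from a naive local time could silently miss the legal window.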

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.

EU Court opens path for WhatsApp to contest privacy rulings

The Court of Justice of the EU has ruled that WhatsApp can challenge a decision of the European Data Protection Board (EDPB) directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly instead of relying solely on national procedures.

The ruling reshapes how companies defend their interests under the GDPR framework.

The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.

European regulators argued that only national authorities were formal recipients of these decisions. The court found that companies should be granted standing when their commercial rights are at stake.

By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.

Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.

Is AI eroding human intelligence?

The article reflects on the growing integration of AI into daily life, from classrooms to work, and asks whether this shift is making people intellectually sharper or more dependent on machines.

Tools such as ChatGPT, Grok and Perplexity have moved from optional assistants to everyday aids that generate instant answers, summaries and explanations, reducing the time and effort traditionally required for research and deep thinking.

While quantifiable productivity gains are clear, the piece highlights trade-offs: readily available answers can diminish the cognitive struggle that builds critical thinking, problem-solving and independent reasoning.

In education, easy AI responses may weaken students’ engagement in learning unless teachers guide their use responsibly. Some respondents point to creativity and conceptual understanding eroding when AI is used as a shortcut. In contrast, others see it as a democratising tutor that supports learners who otherwise lack resources.

The article also incorporates perspectives from AI systems themselves, which generally frame AI as neither inherently making people smarter nor dumber, but dependent on how it’s used.

It concludes that the impact of AI on human cognition is not predetermined by the technology, but shaped by user choice: whether AI is a partner that augments thinking or a crutch that replaces it.

Conversational advertising takes the stage as ChatGPT tests in-chat promotions

Advertising inside ChatGPT marks a shift in where commercial messages appear, not a break from how advertising works. AI systems have shaped search, social media, and recommendations for years, but conversational interfaces make those decisions more visible during moments of exploration.

Unlike search or social formats, conversational advertising operates inside dialogue. Ads appear because users are already asking questions or seeking clarity. Relevance is built through context rather than keywords, changing when information is encountered rather than how decisions are made.

In healthcare and clinical research, this distinction matters. Conversational ads cannot enrol patients directly, but they may raise awareness earlier in patient journeys and shape later discussions with clinicians and care providers.

Early rollout will be limited to free or low-cost ChatGPT tiers, likely skewing exposure towards patients and caregivers. As with earlier platforms, sensitive categories may remain restricted until governance and safeguards mature.

The main risks are organisational rather than technical. New channels will not fix unclear value propositions or operational bottlenecks. Conversational advertising changes visibility, not fundamentals, and success will depend on responsible integration.

AI model promises faster monoclonal antibody production

Researchers at the University of Oklahoma have developed a machine-learning model that could significantly speed up the manufacturing of monoclonal antibodies, a fast-growing class of therapies used to treat cancer, autoimmune disorders, and other diseases.

The study, published in Communications Engineering, targets delays in selecting high-performing cell lines during antibody production. Output varies widely between Chinese hamster ovary cell clones, forcing manufacturers to spend weeks screening for high yields.

By analysing early growth data, the researchers trained a model to predict antibody productivity far earlier in the process. Using only the first 9 days of data, it forecast production trends through day 16 and identified higher-performing clones in more than 76% of tests.
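The study does not publish its code, but the core idea, predicting a late outcome from early time-series readings and ranking candidates by the prediction, can be sketched with a simple least-squares fit. Everything below is illustrative: the clone counts, the exponential growth model, the noise levels, and the 10% selection cutoff are invented stand-ins, not the authors' data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for early clone data: 40 CHO clones, one
# cell-density reading per day for the first 9 days of culture.
n_clones, n_days = 40, 9
growth_rate = rng.uniform(0.2, 0.6, size=n_clones)            # per-clone growth
early = np.exp(np.outer(growth_rate, np.arange(1, n_days + 1)))
early += rng.normal(0.0, 0.5, size=early.shape)               # measurement noise

# Hypothetical ground truth: day-16 antibody titer tracks growth rate.
titer_day16 = 100.0 * growth_rate + rng.normal(0.0, 2.0, size=n_clones)

# Fit a least-squares model mapping the 9 early readings to day-16 titer.
X = np.hstack([early, np.ones((n_clones, 1))])                # add intercept
coef, *_ = np.linalg.lstsq(X, titer_day16, rcond=None)
predicted = X @ coef

# Rank clones by predicted productivity; keep the top 10% for scale-up.
k = max(1, n_clones // 10)
top_predicted = np.argsort(predicted)[::-1][:k]
top_actual = np.argsort(titer_day16)[::-1][:k]
hit_rate = np.isin(top_predicted, top_actual).mean()
print(f"selected clones: {sorted(top_predicted.tolist())}, overlap with true top: {hit_rate:.0%}")
```

The payoff mirrors the paper's claim: if a cheap model ranks clones reliably from day-9 data, a week of screening can be cut from every production campaign.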

The model was developed with Oklahoma-based contract manufacturer Wheeler Bio, combining production data with established growth equations. Although further validation is needed, early results suggest shorter timelines and lower manufacturing costs.

The work forms part of a wider US-funded programme to strengthen biotechnology manufacturing capacity, highlighting how AI is being applied to practical industrial bottlenecks rather than solely to laboratory experimentation.
