Debate grows over AI and human love in the digital age

AI is increasingly entering intimate areas of human life, including romance and emotional companionship. AI chatbots are now widely used as digital companions, raising broader questions about emotional authenticity and human-machine relationships.

Millions of people use AI companion apps, and studies suggest that a significant share of them describe their relationship with a chatbot as romantic. While users may experience genuine emotions, experts stress that current AI systems do not feel love but generate responses based on patterns in data.

Researchers explain that large language models can simulate empathy and emotional understanding, yet they lack consciousness and subjective experience. Their outputs are designed to imitate human interaction rather than reflect genuine emotion.

Scientific research describes love as deeply rooted in biology. Neurochemicals such as dopamine and the hormone oxytocin, along with specific brain regions, shape attraction, attachment, and emotional bonding. These processes are embodied and chemical, and machines lack the biology that produces them.

Some scholars argue that future AI systems could replicate certain cognitive aspects of attachment, such as loyalty or repeated engagement. However, most agree that replicating human love would likely require consciousness, which remains poorly understood and technically unresolved.

Debate continues over whether conscious AI is theoretically possible. While some researchers believe advanced architectures or neuromorphic computing could move in that direction, no existing system meets the established criteria for consciousness.

In practice, human-AI romantic relationships remain asymmetrical. Chatbots are designed to engage, agree, and provide comfort, which can create dependency or unrealistic expectations about real-world relationships.

Experts therefore emphasise transparency and AI literacy, stressing that users should understand that AI companions simulate emotion and do not possess feelings, intentions, or awareness. While these systems can imitate expressions of love, they do not experience it; the emotional reality remains human even when the interaction is digital.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

South Korea launches labour–government body to address AI automation pressures

A new consultative body has been established in South Korea to manage growing anxiety over AI and rapid industrial change.

The Ministry of Employment and Labour joined forces with the Korean Confederation of Trade Unions (KCTU) to create a regular channel for negotiating how workplaces should adapt as robots and AI systems become more widespread across key industries.

The two sides will meet monthly to seek agreement on major labour issues. The union argued for a human-centred transition instead of a purely technological one, urging the government to strengthen protections for workers affected by restructuring and AI-powered production methods.

Officials in South Korea responded by promising that policy decisions will reflect direct input gathered from employees on the ground.

Concerns heightened after Hyundai Motor confirmed plans to mass-produce Atlas humanoid robots by 2028 and introduce them across its assembly lines. The project forms part of the company’s ambition to build a ‘physical AI’ future where machines perform risky or repetitive tasks in place of humans.

The debate intensified as new labour statistics showed a sharp decline in employment within professional, scientific and technical services, where AI deployment is suspected of reducing demand for new hires.

KCTU warned that industrial transformation could widen inequality unless government policy prioritises people over profit.

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

AI adoption leaves workers exhausted as a new study reveals rising workloads

Researchers from UC Berkeley’s Haas School of Business examined how AI shapes working habits inside a mid-sized technology firm, and the outcome raised concerns about employee well-being.

Workers embraced AI voluntarily because the tools promised faster results, yet the time saved did not translate into lighter schedules. Over time, staff absorbed extra tasks and pushed themselves beyond sustainable limits, creating a form of workload creep that drained energy and reduced job satisfaction.

Once the novelty faded, employees noticed that AI had quietly intensified expectations. Engineers reported spending more time correcting AI-generated material passed on by colleagues, while many workers handled several tasks at once by combining manual effort with multiple automated agents.

Constant task-switching left workers with a persistent sense of juggling responsibilities, which lowered the quality of their focus.

The researchers also found that AI crept into personal time, with workers prompting tools during breaks, meetings, or moments intended for rest.

As a result, the boundaries between professional and private time weakened, leaving many employees feeling less refreshed and more pressured to keep up with accelerating workflows.

The study argues that AI increased the density of work rather than reducing it, undermining promises that automation would ease daily routines.

Evidence from other institutions reinforces the pattern, with many firms reporting little or no productivity improvement from AI. Researchers recommend clearer company-level AI guidelines to prevent overuse and protect staff from escalating workloads driven by automation.

Facebook boosts user creativity with new Meta AI animations

Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.

Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.

The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.

Facebook will highlight Memories that work well with the Restyle function to encourage wider use.

Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.

Seasonal styles will arrive throughout the year to support festive posts and major events.

Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.

Olympic ice dancers performing to AI-generated music spark controversy

Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.

The Olympic ice dance format combines a themed rhythm dance with a free dance, and for the 2026 season skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, the Czech siblings used a hybrid soundtrack blending AC/DC with an AI-generated piece.

The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.

Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.

The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.

EU Court opens path for WhatsApp to contest privacy rulings

The Court of Justice of the EU has ruled that WhatsApp can challenge an EDPB decision directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly instead of relying solely on national procedures.

The ruling reshapes how companies defend their interests under the GDPR framework.

The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.

European regulators argued that only national authorities were formal recipients of these decisions. The court found that companies should be granted standing when their commercial rights are at stake.

By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.

Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.

Is AI eroding human intelligence?

The article reflects on the growing integration of AI into daily life, from classrooms to work, and asks whether this shift is making people intellectually sharper or more dependent on machines.

Tools such as ChatGPT, Grok and Perplexity have moved from optional assistants to everyday aids that generate instant answers, summaries and explanations, reducing the time and effort traditionally required for research and deep thinking.

While quantifiable productivity gains are clear, the piece highlights trade-offs: readily available answers can diminish the cognitive struggle that builds critical thinking, problem-solving and independent reasoning.

In education, easy AI responses may weaken students’ engagement in learning unless teachers guide their use responsibly. Some respondents point to creativity and conceptual understanding eroding when AI is used as a shortcut. In contrast, others see it as a democratising tutor that supports learners who otherwise lack resources.

The article also incorporates perspectives from AI systems themselves, which generally frame AI as neither inherently making people smarter nor dumber, but dependent on how it is used.

It concludes that the impact of AI on human cognition is not predetermined by the technology, but shaped by user choice: whether AI is a partner that augments thinking or a crutch that replaces it.
