AI users spend 40% of saved time fixing errors

A recent study from Workday reveals that 40% of the time saved by AI in the workplace is spent correcting errors, highlighting a growing productivity paradox. Frequent AI users are bearing the brunt, often double- or triple-checking outputs to ensure accuracy.

Despite widespread adoption (87% of employees report using AI at least a few times per week, and 85% save one to seven hours weekly), much of that time is redirected to fixing low-quality results rather than achieving net gains in productivity.

The findings suggest that AI can increase workloads rather than streamline operations if not implemented carefully.

Experts argue that AI should enhance human work rather than replace it. Employees need tools that handle complex tasks reliably, allowing teams to focus on creativity, judgment, and strategic decision-making.

Upskilling staff to manage AI effectively is critical to realising sustainable productivity benefits.

The study also highlights the risk of organisations prioritising speed over quality. Many AI tools shift responsibility for trust and accuracy onto employees, creating hidden costs and risks for decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK users can now disable Shorts autoplay with new YouTube feature

YouTube has introduced a new parental control for users in the United Kingdom that lets parents and guardians disable Shorts autoplay and continuous scrolling, addressing concerns about addictive viewing patterns and excessive screen time among children.

The feature gives families greater control over how the short-form video feed behaves, allowing users to turn off the infinite-scroll experience that keeps viewers engaged longer.

The update comes amid broader efforts by tech platforms to provide tools that support healthier digital habits, especially for younger users. YouTube says the control can help parents set limits without entirely removing access to Shorts content.

The roll-out is initially targeted at UK audiences, with the company indicating feedback will guide potential expansion. YouTube’s new off-switch reflects growing industry awareness of screen-time impacts and regulatory scrutiny around digital wellbeing features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly where AI and children are concerned.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Regulators press on with Grok investigations in Britain and Canada

Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.

xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.

Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.

UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.

Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (Regulation 2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how these regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazil excluded from WhatsApp’s ban on rival AI chatbots

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by Brazil’s competition authority, which ordered Meta to suspend elements of the policy while it assesses whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, during which chatbot developers must wind down their services and notify users that they will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on infrastructure that was designed for business messaging, not as an open distribution platform for AI services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SRB GDPR case withdrawn from EU court

A high-profile EU court case on pseudonymised data has ended without a final ruling. The dispute involved the Single Resolution Board and the European Data Protection Supervisor.

The case focused on whether pseudonymised opinions qualify as personal data under the GDPR. Judges were also asked to assess reidentification risks and notification duties.

After intervention by the Court of Justice of the European Union, the matter returned to the General Court. Both parties later withdrew the case, leaving no binding judgement.

Legal experts say the CJEU’s guidance continues to shape enforcement practice. Regulators are expected to reflect those principles in updated EU pseudonymisation guidelines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the xAI-owned chatbot, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends, and long-term safety outcomes for children and families in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware gang Everest claims data breach at Nissan Motor Corporation

Nissan Motor Corporation has been listed on the dark web by the Everest ransomware group, which is threatening to release allegedly stolen data within days unless a ransom is paid. The group claims to have exfiltrated around 900 gigabytes of company files.

Everest published sample screenshots showing folders linked to marketing, sales, dealer orders, warranty analysis, and internal communications. Many of the files appear to relate to Nissan’s operations in Canada, although some dealer records reference the United States.

Nissan has not issued a public statement about the alleged breach. The company has been contacted for comment, but no confirmation has been provided regarding the nature or scale of the incident.

Everest began as a ransomware operation in 2020 but is now believed to focus on gaining and selling network access using stolen credentials, insider recruitment, and remote access tools. The group is thought to be Russian-speaking and continues to recruit affiliates through its leak site.

The Nissan listing follows recent claims by Everest involving Chrysler and ASUS. In those cases, the group said it had stolen large volumes of personal and corporate data, with ASUS later confirming a supplier breach involving camera source code.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!