Generative AI is not reducing workloads as widely expected but intensifying them, according to new workplace research. Findings suggest productivity gains are being offset by expanding responsibilities and longer working hours.
An eight-month study at a US tech firm found employees worked faster, handled broader tasks, and extended their working hours. AI tools enabled staff to take on duties beyond their formal roles, including coding, research, and technical problem-solving.
Researchers identified three pressure points driving intensification: task expansion, blurred work-life boundaries, and increased multitasking. Workers used AI during breaks and off-hours while juggling parallel tasks, increasing cognitive load.
Experts warn that the early productivity surge may mask burnout, fatigue, and declining work quality. Organisations are now being urged to establish structured ‘AI practices’ to regulate usage, protect focus, and maintain sustainable productivity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.
The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.
Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.
Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.
Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.
Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.
A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.
Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.
The European Commission has warned Meta that it may have breached EU antitrust rules by restricting third-party AI assistants from operating on WhatsApp. A Statement of Objections outlines regulators’ preliminary view that the policy could distort competition in the AI assistant market.
The probe centres on updated WhatsApp Business terms announced in October 2025 and enforced from January 2026. Under the changes, rival general-purpose AI assistants were effectively barred from accessing the platform, leaving Meta AI as the only integrated assistant available to users.
Regulators argue that WhatsApp serves as a critical gateway through which consumers access AI services. Excluding competitors could reinforce Meta’s dominance in communication applications while limiting market entry and expansion opportunities for smaller AI developers.
Interim measures are now under consideration to prevent what authorities describe as potentially serious and irreversible competitive harm. Meta can respond before any interim measures are imposed, while the broader antitrust probe continues.
The EU’s ambition to streamline telecom rules is facing fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.
The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.
Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.
Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.
The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.
Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of the projected $650 billion AI investment by strengthening its energy supply capacity.
AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.
Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed opportunities for industrial growth.
The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.
The Learnovate Centre, a global innovation hub at Trinity College Dublin focused on the future of work and learning, is spearheading a community of practice on responsible AI in learning. The community brings together educators, policymakers, institutional leaders, and sector specialists to discuss safe, effective, and compliant uses of AI in educational settings.
This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.
One of the community’s early activities includes virtual meetings designed to build consensus around AI norms in teaching, compliance strategies and knowledge exchange on real-world implementation.
Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.
Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.
It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.
Tech companies competing in AI are increasingly expecting employees to work longer weeks to keep pace with rapid innovation. Some start-ups openly promote 70-hour schedules, presenting intense effort as necessary to launch products faster and stay ahead of rivals.
Investors and founders often believe that extended working hours improve development speed and increase the chances of securing funding. Fast growth and fierce global competition have made urgency a defining feature of many AI workplaces.
However, research shows productivity rises only up to a limit before fatigue reduces efficiency and focus. Experts warn that excessive workloads can lead to burnout and make it harder for companies to retain experienced professionals.
Health specialists link extended working weeks to higher risks of heart disease and stroke. Many experts argue that smarter management and efficient use of technology offer safer and more effective paths to lasting productivity.
Universities are increasingly reorganising around AI, treating AI-based instruction as a proven solution for delivering education more efficiently. This shift reflects a broader belief that AI can reliably replace or reduce human-led teaching, despite growing uncertainty about its actual impact on learning.
Recent research challenges this assumption by re-examining the evidence used to justify AI-driven reforms. A comprehensive re-analysis of AI and learning studies reveals severe publication bias, with positive results published far more frequently than negative or null findings. Once corrected, reported learning gains from AI shrink substantially and may be negligible.
More critically, the research exposes deep inconsistency across studies. Outcomes vary so widely that the evidence cannot predict whether AI will help or harm learning in a given context, and no educational level, discipline, or AI application shows consistent benefits.
By contrast, human-mediated teaching remains a well-established foundation of learning. Decades of research demonstrate that understanding develops through interaction, adaptation, and shared meaning-making, leading the article to conclude that AI in education remains an open question, while human instruction remains the known constant.
AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. Findings show users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.
Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use OpenAI’s GPT-4o, Meta’s Llama 3, Cohere’s Command R+, or a standard search engine to assess symptoms and determine next steps.
Chatbot users identified their condition about one-third of the time, with only 45 percent selecting the correct medical response. Performance levels matched those relying solely on search engines, despite AI systems scoring highly on medical licensing benchmarks.
Experts attributed the gap to communication failures: users often provided incomplete information or misinterpreted chatbot guidance. Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.