HP reveals advanced AI devices and workflow tools at Imagine 2026

HP has announced a broad set of AI-focused products and workplace tools at HP Imagine 2026, presenting the update as part of a wider effort to simplify work across PCs, collaboration devices, security systems, and workflow platforms.

In a press release published on 24 March, HP said the new portfolio includes AI PCs, collaboration tools, workstations, printers, and software intended for hybrid work and on-device AI use.

HP says the update includes a new intelligence layer called HP IQ, which it describes as a system designed to orchestrate work across AI PCs, workplace devices, and meeting spaces through local AI and proximity-based connectivity.

The company also announced new EliteBook devices, workstation updates, and workflow automation changes through its Workforce Experience Platform and Build Workspace capabilities.

Several sections of the release focus on on-device AI. According to the company, HP IQ will debut on the next generation of EliteBook X G2 AI PCs and will support features such as prompt-based assistance, document analysis, note organisation, and meeting support.

The release also says NearSense is intended to help devices discover, connect, and collaborate, including through file sharing and one-click joining of conference room meetings.

Security is another central theme in the release. HP says it has introduced what it describes as the world’s first hardware solution to stop physical TPM bypass attacks, using a cryptographically bound link between the TPM and CPU.
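HP's release does not detail the mechanism behind this binding. Conceptually, though, a cryptographically bound TPM-to-CPU link can be sketched as a shared-key challenge and response, in which a bus interposer that lacks the provisioned key cannot forge valid replies. The sketch below is purely illustrative and is not HP's implementation; the provisioning step and key handling are assumptions.

```python
import hashlib
import hmac
import secrets

# Hypothetical illustration: a key provisioned into both CPU and TPM at
# manufacture lets the CPU verify it is talking to the genuine TPM, so
# responses forged by an interposer on the bus fail the check.
binding_key = secrets.token_bytes(32)  # provisioned secret (assumption)

def tpm_respond(key: bytes, challenge: bytes) -> bytes:
    """The TPM answers a fresh challenge with an HMAC over it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def cpu_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """The CPU recomputes the HMAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
assert cpu_verify(binding_key, challenge, tpm_respond(binding_key, challenge))
# A device without the provisioned key cannot produce a valid response.
assert not cpu_verify(binding_key, challenge,
                      tpm_respond(secrets.token_bytes(32), challenge))
```

Real hardware bindings involve attested key exchange rather than a simple pre-shared secret, but the failure mode for an interposer is the same: no key, no valid response.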

The company also said it is expanding capabilities in HP Wolf Security and introducing HP Wolf Pro Security Next Gen Antivirus, as well as physical intrusion detection designed to protect memory if a device chassis is opened.

The announcement also includes new printers and document tools. HP says the LaserJet Pro 4000 and 4100 series, and the LaserJet Enterprise 5000 and 6000 series, are intended to support AI-powered document processing and quantum-resistant security. The release also highlights scanning shortcuts, editable OCR, reduced management time, and a design intended to improve serviceability.

For higher-performance users, the company says it is launching a new generation of Z workstations and mobile workstations. The release refers to systems such as the Z8 Fury, Max Side Panel for Z8 Fury and Z4 workstations, and updated mobile workstation models. Advanced AI development, visual effects, and simulation workloads are among the uses cited in the announcement.

Beyond enterprise work, the release also extends the same AI and device strategy into gaming. New HyperX and OMEN products are part of the announcement, including desktops, a gaming and modular ecosystem, and expanded AI game support through OMEN Gaming Hub and OMEN AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK tightens sanctions on crypto-linked scam networks

The UK has stepped up its crackdown by sanctioning a crypto marketplace tied to major scam centres in Southeast Asia. Measures aim to disrupt the sale of stolen personal data and limit the financial infrastructure enabling online fraud targeting British victims.

Authorities also targeted operators behind ‘#8 Park’, Cambodia’s largest scam compound, believed to house up to 20,000 trafficked workers. Many individuals forced to run scams were lured with false job offers before being coerced into fraudulent activity under severe threats.

Sanctions extend to key entities and individuals connected to the wider network, including those facilitating crypto laundering and cross-border financial flows. Earlier UK action froze over £1 billion in assets and helped shut down platforms used for laundering illicit funds.

Officials said the sanctions will isolate these operations from the crypto ecosystem and freeze UK-based assets. The measures come ahead of an international summit in June aimed at strengthening global coordination against illicit finance and digital fraud.


OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.
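C2PA provenance metadata in MP4-style (ISO BMFF) files is typically carried in a dedicated top-level box, commonly a 'uuid' box. As a rough illustration of how such metadata can be located, the sketch below walks the top-level boxes of a synthetic in-memory file; the payload bytes are fabricated and the example is not tied to Sora's actual output format.

```python
import struct

def list_top_level_boxes(data: bytes) -> list[tuple[str, int]]:
    """Return (type, size) for each top-level ISO BMFF box in `data`."""
    boxes = []
    off = 0
    while off + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, off)
        if size == 1:  # 64-bit "largesize" follows the type field
            size = struct.unpack_from(">Q", data, off + 8)[0]
        if size < 8:
            break  # malformed box; stop scanning
        boxes.append((btype.decode("ascii", "replace"), size))
        off += size
    return boxes

# Synthetic file: an 'ftyp' box followed by a 'uuid' box with fake payload.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + b"\x00\x00\x02\x00"
uuid_payload = b"\x00" * 16 + b"fake-manifest"
uuid_box = struct.pack(">I4s", 8 + len(uuid_payload), b"uuid") + uuid_payload

boxes = list_top_level_boxes(ftyp + uuid_box)
print(boxes)  # [('ftyp', 16), ('uuid', 37)]
```

A 'uuid' box flagged this way is only a candidate; a real verifier would match its extended type against the C2PA specification and validate the manifest's signatures.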

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images including children and young-looking persons face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.
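The grant-and-revoke model OpenAI describes for characters maps onto a simple revocable access list. The class below is a minimal sketch of that idea under assumed semantics (the owner is always allowed, grants are revocable at any time); it is not OpenAI's implementation.

```python
class CharacterAccess:
    """Illustrative revocable-permission model for a likeness character."""

    def __init__(self, owner: str):
        self.owner = owner
        self.grants: set[str] = set()

    def grant(self, user: str) -> None:
        """Allow another user to generate videos with this character."""
        self.grants.add(user)

    def revoke(self, user: str) -> None:
        """Withdraw access at any time; discard() tolerates absent users."""
        self.grants.discard(user)

    def can_use(self, user: str) -> bool:
        return user == self.owner or user in self.grants

acc = CharacterAccess("alice")
acc.grant("bob")
assert acc.can_use("bob")
acc.revoke("bob")
assert not acc.can_use("bob")
```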

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.
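As a conceptual sketch of checks operating at both stages and across multiple surfaces (prompt, frame-level text, audio transcript), the snippet below screens every surface against a placeholder blocklist. Production systems use trained classifiers rather than keyword sets; the terms and function names here are illustrative assumptions, not OpenAI's pipeline.

```python
# Placeholder blocklist; real moderation uses trained multi-modal classifiers.
BLOCKED = {"exampleterm1", "exampleterm2"}

def violates(text: str) -> bool:
    """Naive check: does the text contain any blocked term?"""
    return not BLOCKED.isdisjoint(text.lower().split())

def screen_generation(prompt: str, frame_texts: list[str],
                      transcript: str) -> bool:
    """Return True if the generation should be blocked.

    Stage 1 inspects the prompt itself; stage 2 inspects every sampled
    frame caption plus the full audio transcript, mirroring the
    multi-surface checks the note describes (a sketch only).
    """
    surfaces = [prompt, transcript, *frame_texts]
    return any(violates(s) for s in surfaces)

assert screen_generation("make a clip with exampleterm1", [], "")
assert not screen_generation("a cat video", ["cat sitting"], "meow")
```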

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.


Zimbabwe advances AI national strategy with UNESCO support

Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step towards shaping its digital future instead of relying solely on traditional development pathways.

Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.

The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.

Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities instead of advancing without oversight.

The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.

Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.

The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.

Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.

Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.

International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.

By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.


New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.
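The "policy as prompt" approach can be sketched by wrapping a written policy and the content to be judged into a single classification request for a model such as gpt-oss-safeguard. The template and label set below are assumptions for illustration; the model's expected input format may differ.

```python
def build_classifier_prompt(policy: str, content: str) -> str:
    """Wrap a written safety policy and the content to judge into one
    classification prompt. The exact template gpt-oss-safeguard expects
    may differ; this only illustrates the policy-as-prompt idea."""
    return (
        "You are a content-safety classifier.\n"
        f"Policy:\n{policy}\n\n"
        f"Content:\n{content}\n\n"
        "Answer with exactly one label: ALLOW or BLOCK."
    )

prompt = build_classifier_prompt(
    policy="Block depictions of dangerous challenges aimed at teens.",
    content="Try holding your breath for five minutes!",
)
assert "ALLOW or BLOCK" in prompt
```

Because the policy is plain text in the prompt, developers can revise it without retraining, which is what makes the same model reusable across the teen-risk categories listed above.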

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.


Google sets 2029 deadline for post-quantum cryptography migration

Google is leading a transition to post-quantum cryptography by 2029, aiming to secure digital systems against future quantum computing threats instead of relying on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through attacks that store encrypted data today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.
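Hash-based schemes illustrate why signatures can be made quantum-resistant: their security rests only on the hash function, not on factoring or discrete logarithms. The sketch below implements a Lamport one-time signature, the conceptual ancestor of NIST's standardised hash-based scheme SLH-DSA. It is a teaching example, not what any Google product ships, and each key pair must sign only one message.

```python
import hashlib
import secrets

def keygen():
    """Private key: 256 pairs of random values; public key: their hashes."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def _bits(message: bytes) -> list[int]:
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk) -> list[bytes]:
    """Reveal one secret per message bit; the key is then spent."""
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(message: bytes, sig, pk) -> bool:
    """Hash each revealed secret and compare with the public key half
    selected by the corresponding message bit."""
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(message)))

sk, pk = keygen()
sig = sign(b"firmware update", sk)
assert verify(b"firmware update", sig, pk)
assert not verify(b"tampered update", sig, pk)
```

The one-time restriction is why deployed schemes layer many such keys into trees; the quantum-resistance argument, however, is already visible here.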

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US National Institute of Standards and Technology (NIST), while support has already been introduced in Google Chrome and the company's cloud platforms.

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.


OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions, which may be reassigned between teams depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.


UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.
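An overnight window such as 9 pm to 7 am carries a small implementation subtlety: because it wraps past midnight, a naive `start <= now < end` comparison fails. The function below is a minimal illustration of the correct check, not the pilot's actual tooling.

```python
from datetime import time

def in_curfew(now: time, start: time = time(21, 0),
              end: time = time(7, 0)) -> bool:
    """True if `now` falls inside the curfew window.

    A window that wraps past midnight is the union of two ranges
    (start..midnight and midnight..end), so the test becomes an OR.
    """
    if start <= end:          # same-day window, e.g. 13:00-17:00
        return start <= now < end
    return now >= start or now < end  # overnight window, e.g. 21:00-07:00

assert in_curfew(time(23, 30))
assert in_curfew(time(6, 59))
assert not in_curfew(time(8, 0))
```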

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.


EU strengthens semiconductor strategy through Chips Act dialogue

Executive Vice-President Henna Virkkunen will host a high-level dialogue in Brussels to assess the implementation of the European Chips Act Regulation and gather industry feedback ahead of its planned revision.

Stakeholders from across the semiconductor ecosystem are expected to exchange views and present recommendations to shape future policy direction.

The initiative forms part of the broader strategy led by the European Commission to reinforce technological sovereignty and competitiveness, rather than relying heavily on external suppliers.

The Chips Act seeks to strengthen Europe’s semiconductor ecosystem, improve supply chain resilience, and reduce strategic dependencies in critical technologies.

The dialogue follows a public consultation and call for evidence conducted in autumn 2025, with findings set to inform the upcoming legislative revision.

Industry representatives will provide direct input through a report outlining challenges, opportunities, and proposed policy adjustments, contributing to a more targeted and effective framework for semiconductor development.

Looking ahead, the revision of the Chips Act will be integrated into a wider Technological Sovereignty package designed to boost the capacity of Europe’s digital industries.

By combining stakeholder engagement with policy reform, the European Commission aims to ensure that semiconductor innovation and production can expand across the EU rather than remain constrained by reliance on external suppliers.


New UK rules target foreign influence and crypto donations

The UK government has announced sweeping reforms to political donations, introducing a £100,000 annual cap on contributions from overseas electors. The move targets concerns that individuals living abroad could exert disproportionate financial influence on domestic politics.

Cryptocurrency donations have also been banned with immediate effect, reflecting fears over anonymity and the difficulty of tracing funds. Authorities warn that digital assets risk enabling untraceable political funding until stronger regulation is in place.

Both measures will apply retrospectively, requiring political parties and candidates to return any unlawful donations within 30 days once the legislation takes effect. Enforcement action may follow for non-compliance, signalling a stricter approach to financial oversight.

The reforms stem from the Rycroft Review, which highlighted vulnerabilities in the UK's electoral system linked to foreign interference. Further changes, including stronger Electoral Commission powers and tighter donor checks, are expected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot