Advanced Linux malware framework VoidLink likely built with AI

Security researchers from Check Point have uncovered VoidLink, an advanced, modular Linux malware framework that appears to have been developed predominantly with AI assistance, likely by a single individual rather than a well-resourced threat group.

VoidLink’s development process, exposed due to the developer’s operational security (OPSEC) failures, indicates that AI models were used not just for parts of the code but to orchestrate the entire project plan, documentation and implementation.

According to analysts, the malware framework reached a functional state in under a week with more than 88,000 lines of code, compressing what would traditionally take weeks or months into days.

While no confirmed in-the-wild attacks have yet been reported, researchers caution that the advent of AI-assisted malware represents a significant cybersecurity shift, lowering the barrier to creating sophisticated threats and potentially enabling widespread future misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches software security ambassadors scheme

The UK government has launched the Software Security Ambassadors Scheme to promote stronger software security practices nationwide. The initiative is led by the Department for Science, Innovation and Technology and the National Cyber Security Centre.

Participating organisations commit to championing the new Software Security Code of Practice within their industries. Signatories agree to lead by example through secure development, procurement and advisory practices, while sharing lessons learned to strengthen national cyber resilience.

The scheme aims to improve transparency and risk management across UK digital supply chains. Software developers are encouraged to embed security throughout the whole lifecycle, while buyers are expected to incorporate security standards into procurement processes.

Officials say the approach supports the UK’s broader economic and security goals by reducing cyber risks and increasing trust in digital technologies. The government believes that better security practices will help UK businesses innovate safely and withstand cyber incidents.


Jeff Bezos enters satellite broadband race

Blue Origin, founded by Jeff Bezos, has announced plans to launch TeraWave, a global satellite internet network. The project aims to deploy more than 5,400 satellites to deliver high-speed data services.

TeraWave will target data centres, businesses and government users rather than households. Blue Origin says the system could reach speeds of up to 6 terabits per second, exceeding current commercial satellite services.

The move positions Blue Origin as a direct rival to Starlink, SpaceX’s satellite internet service. Starlink already operates thousands of satellites and focuses heavily on consumer internet access in the US and beyond.

Blue Origin plans to begin launching TeraWave satellites from the US by the end of 2027. The announcement adds to the intensifying competition in satellite communications as demand for global connectivity continues to grow.


AI becomes mainstream in UK auto buying behaviour, survey shows

A recent survey reported by AM-Online reveals that approximately 66 per cent of UK car buyers use artificial intelligence in some form as part of their vehicle research and buying process.

AI applications cited include chatbots for questions and comparisons, recommendation systems for model selection, and virtual advisors that help consumers weigh options based on preferences and budget.

Industry commentators suggest that this growing adoption reflects broader digital transformation trends in automotive retail, with dealerships and manufacturers increasingly deploying AI technologies to personalise sales experiences, streamline research and nurture leads.

The integration of AI tools is seen as boosting customer engagement and efficiency, but it also raises questions about privacy and data protection, transparency and the future role of human sales advisors as digital tools become more capable.


Amazon One Medical launches health AI assistant

One Medical has launched a Health AI assistant in its mobile app, offering personalised health guidance at any time. The tool uses verified medical records to support everyday healthcare decisions.

Patients can use the assistant to explain lab results, manage prescriptions, and book virtual or in-person appointments. Clinical safeguards ensure users are referred to human clinicians when medical judgement is required.

Powered by Amazon Bedrock, the assistant operates under HIPAA-compliant privacy standards and avoids selling personal health data. Amazon says clinician and member feedback will shape future updates.


Davos roundtable calls for responsible AI growth

Leaders from the tech industry, academia, and policy circles met at a TIME100 roundtable in Davos, Switzerland, on 21 January to discuss how to pursue rapid AI progress without sacrificing safety and accountability. The conversation, hosted by TIME CEO Jessica Sibley, focused on how AI should be built, governed, and used as it becomes more embedded in everyday life.

A major theme was the impact of AI-enabled technology on children. Jonathan Haidt, an NYU Stern professor and author of The Anxious Generation, argued that the key issue is not total avoidance but the timing and habits of exposure. He suggested children do not need smartphones until at least high school, emphasising that delaying access can help protect brain development and executive function.

Yoshua Bengio, a professor at the Université de Montréal and founder of LawZero, said responsible innovation depends on a deeper scientific understanding of AI risks and stronger safeguards built into systems from the start. He pointed to two routes: consumer and societal demand for ‘built-in’ protections, and government involvement that could include indirect regulation through liability frameworks, such as requiring insurance for AI developers and deployers.

Participants also challenged the idea that geopolitical competition should justify weaker guardrails. Bengio argued that even rivals share incentives to prevent harmful outcomes, such as AI being used for cyberattacks or the development of biological weapons, and said coordination between major powers is possible, drawing a comparison to Cold War-era cooperation on nuclear risk reduction.

The roundtable linked AI risks to lessons from social media, particularly around attention-driven business models. Bill Ready, CEO of Pinterest, said engagement optimisation can amplify divisions and ‘prey’ on negative human impulses, and described Pinterest’s shift away from maximising view time toward maximising user outcomes, even if it hurts short-term metrics.

Several speakers argued that today’s alignment approach is too reactive. Stanford computer scientist Yejin Choi warned that models trained on the full internet absorb harmful patterns and then require patchwork fixes, urging exploration of systems that learn moral reasoning and human values more directly from the outset.

Kay Firth-Butterfield, CEO of Good Tech Advisory, added that wider AI literacy, shaped by input from workers, parents, and other everyday users, should underpin future certification and trust in AI tools.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.


Microsoft restores Exchange and Teams after Microsoft 365 disruption

US tech giant Microsoft investigated a service disruption affecting Exchange Online, Teams and other Microsoft 365 services after users reported access and performance problems.

The incident, which began late on Wednesday, affected core communication tools that enterprises rely on for daily operations.

Engineers initially focused on diagnosing the fault, with Microsoft indicating that a potential third-party networking issue may have interfered with access to Outlook and Teams.

During the disruption, users experienced intermittent connectivity failures, latency and difficulties signing in across parts of the Microsoft 365 ecosystem.

Microsoft later confirmed that service access had been restored, although no detailed breakdown of the outage scope was provided.

The incident underlined the operational risks associated with cloud productivity platforms and the importance of transparency and resilience in enterprise digital infrastructure.


Anthropic releases new constitution shaping Claude’s AI behaviour

Anthropic has published a new constitution for its AI model Claude, outlining the values, priorities, and behavioural principles designed to guide its development. Released under a Creative Commons licence, the document aims to boost transparency while shaping Claude’s learning and reasoning.

The constitution plays a central role in training, guiding how Claude balances safety, ethics, compliance, and helpfulness. Rather than rigid rules, the framework explains core principles, enabling AI systems to generalise and apply nuanced judgment.

Anthropic says this approach supports more responsible decision-making while improving adaptability.

The updated framework also enables Claude to refine its own training through synthetic data generation and self-evaluation. Using the constitution in training helps future Claude models align behaviour with human values while maintaining safety and oversight.

Anthropic described the constitution as a living document that will evolve alongside AI capabilities. External feedback and ongoing evaluation will guide updates to strengthen alignment, transparency, and responsible AI development.


EU urged to accelerate AI deployment under new Apply AI strategy

European policymakers are calling for urgent action to accelerate AI deployment across the EU, particularly among SMEs and scale-ups, as the bloc seeks to strengthen its position in the global AI race.

Backing the European Commission’s Apply AI Strategy, the European Economic and Social Committee said Europe must prioritise trust, reliability, and human-centric design as its core competitive advantages.

The Committee warned that slow implementation, fragmented national approaches, and limited private investment are hampering progress. While the strategy promotes an ‘AI first’ mindset, policymakers stressed the need to balance innovation with strong safeguards for rights and freedoms.

Calls were also made for simpler access to funding, lighter administrative requirements, and stronger regional AI ecosystems. Investment in skills, inclusive governance, and strategic procurement were identified as key pillars for scaling trustworthy AI and strengthening Europe’s digital sovereignty.

Support for frontier AI development was highlighted as essential for reducing reliance on foreign models. Officials argued that building advanced, sovereign AI systems aligned with European values could enable competitive growth across sectors such as healthcare, finance, and industry.


From chips to jobs: Huang’s vision for AI at Davos 2026

AI is evolving into a foundational economic system rather than a standalone technology, according to NVIDIA chief executive Jensen Huang, who described AI as a five-layer infrastructure spanning energy, hardware, data centres, models and applications.

Speaking at the World Economic Forum in Davos, Huang argued that building and operating each layer is triggering what he called the most significant infrastructure expansion in human history, with job creation stretching from power generation and construction to cloud operations and software development.

Investment patterns, he argued, suggest a structural shift rather than a speculative cycle. Venture capital funding in 2025 reached record levels, largely flowing into AI-native firms across healthcare, manufacturing, robotics and financial services.

Huang stressed that the application layer will deliver the most significant economic return as AI moves from experimentation to core operational use across industries.

Huang framed concerns around job displacement as misplaced, arguing that AI automates tasks rather than replacing professional judgement, enabling workers to focus on higher-value activities.

In healthcare, productivity gains from AI-assisted diagnostics and documentation are already increasing demand for radiologists and nurses rather than reducing headcount, as improved efficiency enables institutions to treat more patients.

Huang positioned AI as critical national infrastructure, urging governments to develop domestic capabilities aligned with local language, culture and industrial strengths.

He described AI literacy as an essential skill, comparable to leadership or management, while arguing that accessible AI tools could narrow global technology divides rather than widen them.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.
