Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as delayed patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

To address this, some companies run cyber drills that simulate attacks under realistic conditions. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.


OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternative options, including developing its own AI tools and working with rivals such as Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.


IGF 2025 opens in Norway with focus on inclusive digital governance

Norway will host the 20th annual Internet Governance Forum (IGF) from 23 to 27 June 2025 in a hybrid format, with the main venue set at Nova Spektrum in Lillestrøm, just outside Oslo.

This milestone event marks two decades of the UN-backed forum that brings together diverse stakeholders to discuss how the internet should be governed for the benefit of all.

The overarching theme, Building Governance Together, strongly emphasises inclusivity, democratic values, and sustainable digital cooperation.

With participation expected from governments, the private sector, civil society, academia, and international organisations, IGF 2025 will continue to promote multistakeholder dialogue on critical topics, including digital trust, cybersecurity, AI, and internet access.

A key feature will be the IGF Village, where companies and organisations will showcase technologies and products aligned with global internet development and governance.

Norway’s Minister of Digitalisation and Public Governance, Karianne Oldernes Tung, underlined the significance of this gathering in light of current geopolitical tensions and the forthcoming WSIS+20 review later in 2025.

Reaffirming Norway’s support for the renewal of the IGF mandate at the UN General Assembly, Minister Tung called for unity and collaborative action to uphold an open, secure, and inclusive internet. The forum aims to assess progress and help shape the next era of digital policy.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

DeepSeek under fire for alleged military ties and export control evasion

The United States has accused Chinese AI startup DeepSeek of assisting China’s military and intelligence services while allegedly seeking to evade export controls to obtain advanced American-made semiconductors.

The claims, made by a senior US State Department official speaking anonymously to Reuters, add to growing concerns over the global security risks posed by AI.

DeepSeek, based in Hangzhou, China, gained international attention earlier this year after claiming its AI models rivalled those of leading United States firms like OpenAI—yet at a fraction of the cost.

However, US officials now say that the firm has shared data with Chinese surveillance networks and provided direct technological support to the People’s Liberation Army (PLA). According to the official, DeepSeek has appeared in over 150 procurement records linked to China’s defence sector.

The company is also suspected of transmitting data from foreign users, including Americans, through backend infrastructure connected to China Mobile, a state-run telecom operator. DeepSeek has not responded publicly to questions about these privacy or security issues.

The official further alleges that DeepSeek has been trying to access Nvidia’s restricted H100 AI chips by creating shell companies in Southeast Asia and using foreign data centres to run AI models on US-origin hardware remotely.

While Nvidia maintains it complies with export restrictions and has not knowingly supplied chips to sanctioned parties, DeepSeek is said to have secured several H100 chips despite the ban.

US officials have yet to place DeepSeek on a trade blacklist, though the company is under scrutiny. Meanwhile, Singapore has already charged three men with fraud in an investigation into the suspected illegal movement of Nvidia chips to DeepSeek.

Questions have also been raised over the credibility of DeepSeek’s technological claims. Experts argue that the reported $5.58 million spent on training its flagship models is unrealistically low, especially given the compute scale typically required to match the performance of OpenAI or Meta.

DeepSeek has remained silent amid the mounting scrutiny. Still, with the US-China tech race intensifying, the firm could soon find itself at the centre of new trade sanctions and geopolitical fallout.


Researchers gain control of Tesla charger through firmware downgrade

Tesla’s popular Wall Connector home EV charger was compromised at the January 2025 Pwn2Own Automotive competition, revealing how attackers could gain full control via the charging cable.

The Tesla Wall Connector Gen 3, a widely deployed residential AC charger delivering up to 22 kW, was exploited through a novel attack that used the physical charging connector as the main entry point.

The vulnerability allowed researchers to execute arbitrary code, potentially giving access to private networks in homes, hotels, or businesses.

Researchers from Synacktiv discovered that Tesla vehicles can update the Wall Connector’s firmware via the charging cable using a proprietary, undocumented protocol.

By simulating a Tesla car and exploiting Single-Wire CAN (SWCAN) communications over the Control Pilot line, the team downgraded the firmware to an older version with exposed debug features.

Using a custom USB-CAN adapter and a Raspberry Pi to emulate vehicle behaviour, they accessed the device’s setup Wi-Fi credentials and triggered a buffer overflow in the debug shell, ultimately gaining remote code execution.
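
For readers unfamiliar with this kind of vehicle emulation, the minimal sketch below shows how a Raspberry Pi with a CAN adapter could exchange frames with a charger using the python-can library over SocketCAN. It is purely illustrative: the Wall Connector’s SWCAN protocol is proprietary and undocumented, so the interface name, arbitration ID, and payload here are hypothetical placeholders rather than the values the researchers used.

# Illustrative sketch only: Tesla's SWCAN protocol is undocumented, so every
# arbitration ID and payload below is a hypothetical placeholder.
# Assumes a Linux host (e.g. a Raspberry Pi) with a CAN adapter exposed as 'can0'
# via SocketCAN (bitrate configured beforehand with `ip link`) and python-can installed.
import can

def emulate_vehicle_handshake(channel: str = "can0") -> None:
    """Send a placeholder 'vehicle present' frame and print any reply from the charger."""
    bus = can.interface.Bus(channel=channel, interface="socketcan")

    # Hypothetical frame announcing a vehicle on the Control Pilot line.
    hello = can.Message(arbitration_id=0x123, data=[0x01, 0x00], is_extended_id=False)
    bus.send(hello)

    # Listen briefly for a response from the charger.
    reply = bus.recv(timeout=2.0)
    if reply is not None:
        print(f"Charger responded: id=0x{reply.arbitration_id:X} data={reply.data.hex()}")
    else:
        print("No response within timeout")

    bus.shutdown()

if __name__ == "__main__":
    emulate_vehicle_handshake()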

The demonstration ended with a visual cue — the charger’s LED blinking — but the broader implication is access to internal networks and potential lateral movement across connected systems.

Tesla has since addressed the vulnerability by introducing anti-downgrade measures in newer firmware versions. The Pwn2Own event remains instrumental in exposing critical flaws in automotive and EV infrastructure, pushing manufacturers toward stronger security.


Amazon CEO warns staff to embrace AI or face job losses

Amazon CEO Andy Jassy has warned staff that they must embrace AI or risk losing their jobs.

In a memo shared publicly, Jassy said generative AI and intelligent agents are already transforming workflows at Amazon, and this shift will inevitably reduce the number of corporate roles in the coming years.

According to Jassy, AI will allow Amazon to operate more efficiently by automating specific roles and reallocating talent to new areas. He acknowledged that it’s difficult to predict the exact outcome but clarified that the corporate workforce will shrink as AI adoption expands across the company.

Those hoping to remain at Amazon will need to upskill quickly. Jassy stressed the need for employees to stay curious and proficient with AI tools to boost their productivity and remain valuable in an increasingly automated environment.

Amazon is not alone in the trend.

BT Group is restructuring to eliminate tens of thousands of roles. At the same time, other corporate leaders, including those at LVMH and ManPower, have echoed concerns that AI’s most significant disruption may be within human resources.

Executives now see AI not just as a tech shift but as a workforce transformation demanding retraining and a redefinition of roles.


SoftBank plans $1 trillion AI and robotics park in Arizona

SoftBank founder Masayoshi Son is planning what could become his most audacious venture yet: a $1 trillion AI and robotics industrial park in Arizona.

Dubbed ‘Project Crystal Land’, the initiative aims to create a high-tech manufacturing hub reminiscent of China’s Shenzhen, focused on AI-powered robots and next-gen automation.

Son is courting global tech giants — including Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung — to join the vision, though none have formally committed.

The plan hinges on support from federal and state governments, with SoftBank already discussing possible tax breaks with US officials, including Commerce Secretary Howard Lutnick.

While TSMC is already investing $165 billion in Arizona facilities, sources suggest Son’s project has not altered the chipmaker’s current roadmap. SoftBank hopes to attract semiconductor and AI hardware leaders to power the park’s infrastructure.

Son has also approached SoftBank Vision Fund portfolio companies to participate, including robotics startup Agile Robots.

The park may serve as a production hub for emerging tech firms, complementing SoftBank’s broader investments, such as a potential $30 billion stake in OpenAI, a $6.5 billion acquisition of Ampere Computing, and funding for Stargate, a global data centre venture with OpenAI, Oracle, and MGX.

While the vision is still early, Project Crystal Land could radically shift US high-tech manufacturing. Son’s strategy relies heavily on project-based financing, allowing extensive infrastructure builds with minimal upfront capital.

As SoftBank eyes long-term AI growth and increased investor confidence, it remains to be seen whether this futuristic park will become a reality or another of Son’s high-stakes dreams.


EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI in line with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.


France 24 partners with Mediagenix to streamline on-demand programming

Mediagenix has entered into a collaboration with the French international broadcaster France 24, operated by France Médias Monde, to support its content scheduling modernisation programme.

As part of the upgrade, France 24 will adopt Mediagenix’s AI-powered, cloud-based scheduling solution to manage content across its on-demand platforms. The system promises improved operational flexibility, enabling rapid adjustments to programming in response to major events and shifting editorial priorities.

Pamela David, Engineering Manager for TV and Systems Integration at France Médias Monde, said: ‘This partnership with Mediagenix is a critical part of equipping our France 24 channels with the best scheduling and content management solutions.’

‘The system gives our staff the ultimate flexibility to adjust schedules as major events happen and react to changing news priorities.’

Françoise Semin, Chief Commercial Officer at Mediagenix, added: ‘France Médias Monde is a truly global broadcaster. We are delighted to support France 24’s evolving scheduling needs with our award-winning solution.’

Training for France 24 staff will be provided by Lapins Bleus Formation, based in Paris, ahead of the system’s planned rollout next year.


Viper Technology sponsors rising AI talent for IOAI 2025 in China

Pakistani student Muhammad Ayan Abdullah has been selected to represent the country at the prestigious International Olympiad in Artificial Intelligence (IOAI), set to take place in Beijing, China, from 2–9 August 2025.

To support his journey, Viper Technology—a leading Pakistani IT hardware manufacturer—has partnered with the Punjab Information Technology Board (PITB) to provide Ayan with its flagship ‘PLUTO AI PC’.

Built locally for advanced AI and machine learning workloads, the high-performance computer reflects Viper’s mission to promote homegrown innovation and empower young tech talent on global platforms.

‘This is part of our commitment to backing the next generation of technology leaders,’ said Faisal Sheikh, Co-Founder and COO of Viper Technology. ‘We are honoured to support Muhammad Ayan Abdullah and showcase the strength of Pakistani talent and hardware.’

The PLUTO AI PC, developed and assembled in Pakistan, is a key part of Viper’s latest AI-focused product line—marking the country’s growing presence in competitive, global technology arenas.
