Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.
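A configuration audit of the kind recommended above often amounts to comparing live settings against a hardening baseline and reporting drift. The sketch below illustrates the idea; the setting names and baseline values are hypothetical examples, not any vendor's actual checklist.

```python
# Illustrative configuration audit: compare live settings against a
# hardening baseline and report any drift. Setting names are hypothetical.

BASELINE = {
    "password_min_length": 12,
    "mfa_required": True,
    "public_bucket_access": False,
}

def audit(live_config: dict) -> list[str]:
    """Return findings where live settings deviate from the baseline."""
    findings = []
    for key, expected in BASELINE.items():
        actual = live_config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# A compliant configuration yields no findings; a weakened one is flagged.
assert audit({"password_min_length": 12, "mfa_required": True,
              "public_bucket_access": False}) == []
assert audit({"password_min_length": 8, "mfa_required": True,
              "public_bucket_access": True}) == [
    "password_min_length: expected 12, found 8",
    "public_bucket_access: expected False, found True",
]
```

Running such a check on a schedule, rather than once at deployment, is what turns a baseline into an audit.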

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

To address this, some companies run cyber drills that simulate attacks under realistic conditions. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.


Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.


LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

LinkedIn has seen explosive growth in AI-related job demand and skills despite the hesitation around AI-assisted writing. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’


OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternative options, including developing its own AI tools and working with rivals such as Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.


IGF 2025 opens in Norway with focus on inclusive digital governance

Norway will host the 20th annual Internet Governance Forum (IGF) from 23 to 27 June 2025 in a hybrid format, with the main venue set at Nova Spektrum in Lillestrøm, just outside Oslo.

This milestone event marks two decades of the UN-backed forum that brings together diverse stakeholders to discuss how the internet should be governed for the benefit of all.

The overarching theme, Building Governance Together, strongly emphasises inclusivity, democratic values, and sustainable digital cooperation.

With participation expected from governments, the private sector, civil society, academia, and international organisations, IGF 2025 will continue to promote multistakeholder dialogue on critical topics, including digital trust, cybersecurity, AI, and internet access.

A key feature will be the IGF Village, where companies and organisations will showcase technologies and products aligned with global internet development and governance.

Norway’s Minister of Digitalisation and Public Governance, Karianne Oldernes Tung, underlined the significance of this gathering in light of current geopolitical tensions and the forthcoming WSIS+20 review later in 2025.

Reaffirming Norway’s support for the renewal of the IGF mandate at the UN General Assembly, Minister Tung called for unity and collaborative action to uphold an open, secure, and inclusive internet. The forum aims to assess progress and help shape the next era of digital policy.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

DeepSeek under fire for alleged military ties and export control evasion

The United States has accused Chinese AI startup DeepSeek of assisting China’s military and intelligence services while allegedly seeking to evade export controls to obtain advanced American-made semiconductors.

The claims, made by a senior US State Department official speaking anonymously to Reuters, add to growing concerns over the global security risks posed by AI.

DeepSeek, based in Hangzhou, China, gained international attention earlier this year after claiming its AI models rivalled those of leading United States firms like OpenAI—yet at a fraction of the cost.

However, US officials now say that the firm has shared data with Chinese surveillance networks and provided direct technological support to the People’s Liberation Army (PLA). According to the official, DeepSeek has appeared in over 150 procurement records linked to China’s defence sector.

The company is also suspected of transmitting data from foreign users, including Americans, through backend infrastructure connected to China Mobile, a state-run telecom operator. DeepSeek has not responded publicly to questions about these privacy or security issues.

The official further alleges that DeepSeek has been trying to access Nvidia’s restricted H100 AI chips by creating shell companies in Southeast Asia and using foreign data centres to run AI models on US-origin hardware remotely.

While Nvidia maintains it complies with export restrictions and has not knowingly supplied chips to sanctioned parties, DeepSeek is said to have secured several H100 chips despite the ban.

US officials have yet to place DeepSeek on a trade blacklist, though the company is under scrutiny. Meanwhile, Singapore has already charged three men with fraud in an investigation into the suspected illegal movement of Nvidia chips to DeepSeek.

Questions have also been raised over the credibility of DeepSeek’s technological claims. Experts argue that the reported $5.58 million spent on training their flagship models is unrealistically low, especially given the compute scale typically required to match OpenAI or Meta’s performance.

DeepSeek has remained silent amid the mounting scrutiny. Still, with the US-China tech race intensifying, the firm could soon find itself at the centre of new trade sanctions and geopolitical fallout.


Researchers gain control of Tesla charger through firmware downgrade

Tesla’s popular Wall Connector home EV charger was compromised at the January 2025 Pwn2Own Automotive competition, revealing how attackers could gain full control via the charging cable.

The Tesla Wall Connector Gen 3, a widely deployed residential AC charger delivering up to 22 kW, was exploited through a novel attack that used the physical charging connector as the main entry point.

The vulnerability allowed researchers to execute arbitrary code, potentially giving access to private networks in homes, hotels, or businesses.

Researchers from Synacktiv discovered that Tesla vehicles can update the Wall Connector’s firmware via the charging cable using a proprietary, undocumented protocol.

By simulating a Tesla car and exploiting Single-Wire CAN (SWCAN) communications over the Control Pilot line, the team downgraded the firmware to an older version with exposed debug features.

Using a custom USB-CAN adapter and a Raspberry Pi to emulate vehicle behaviour, they accessed the device’s setup Wi-Fi credentials and triggered a buffer overflow in the debug shell, ultimately gaining remote code execution.

The demonstration ended with a visual cue — the charger’s LED blinking — but the broader implication is access to internal networks and potential lateral movement across connected systems.

Tesla has since addressed the vulnerability by introducing anti-downgrade measures in newer firmware versions. The Pwn2Own event remains instrumental in exposing critical flaws in automotive and EV infrastructure, pushing manufacturers toward stronger security.
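Anti-downgrade protection of the kind described above typically means refusing any firmware image whose security version is below a stored minimum, so an attacker cannot roll the device back to a build with exposed debug features. The sketch below illustrates that idea only; the header layout and version scheme are hypothetical, not Tesla's actual implementation.

```python
# Minimal sketch of an anti-rollback check. The 4-byte big-endian version
# field at the start of the header is a hypothetical layout chosen for
# illustration, not the Wall Connector's real firmware format.

def parse_version(header: bytes) -> int:
    """Read a 4-byte big-endian security version from a firmware header."""
    return int.from_bytes(header[:4], "big")

def accept_firmware(header: bytes, stored_min_version: int) -> bool:
    """Reject any image whose security version is below the stored minimum,
    blocking downgrades to older builds with known weaknesses."""
    return parse_version(header) >= stored_min_version

# A device already on version 7 accepts version 7 but refuses version 5.
assert accept_firmware((7).to_bytes(4, "big"), 7)
assert not accept_firmware((5).to_bytes(4, "big"), 7)
```

In practice the stored minimum must itself live in tamper-resistant storage and only ever increase, otherwise an attacker can reset it along with the firmware.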


Amazon CEO warns staff to embrace AI or face job losses

Amazon CEO Andy Jassy has warned staff that they must embrace AI or risk losing their jobs.

In a memo shared publicly, Jassy said generative AI and intelligent agents are already transforming workflows at Amazon, and this shift will inevitably reduce the number of corporate roles in the coming years.

According to Jassy, AI will allow Amazon to operate more efficiently by automating specific roles and reallocating talent to new areas. He acknowledged that it’s difficult to predict the exact outcome but clarified that the corporate workforce will shrink as AI adoption expands across the company.

Those hoping to remain at Amazon will need to upskill quickly. Jassy stressed the need for employees to stay curious and proficient with AI tools to boost their productivity and remain valuable in an increasingly automated environment.

Amazon is not alone in the trend.

BT Group is restructuring to eliminate tens of thousands of roles, while other corporate leaders, including those at LVMH and ManpowerGroup, have echoed concerns that AI’s most significant disruption may be within human resources.

Executives now see AI not just as a tech shift but as a workforce transformation demanding retraining and the redefinition of roles.


India’s Gen Z founders go viral with AI and robotics ‘Hacker House’ in Bengaluru

A viral video has captured the imagination of tech enthusiasts by offering a rare look inside a ‘Hacker House’ in Bengaluru’s HSR Layout, where a group of Gen Z Indian founders are quietly shaping the future of AI and robotics.

Spearheaded by Localhost, the initiative provides young developers aged 16 to 22 with funding, workspace, and a collaborative environment to rapidly build real-world tech products — no media hype, just raw innovation.

The video, shared by Canadian entrepreneur Caleb Friesen, shows teenage coders intensely focused on their projects. From AI-powered noise-cancelling systems and assistive robots to innovative real estate and podcasting tools, each room in the shared house hums with creativity.

The youngest, 16-year-old Harish, stands out for his deep focus, while Suhas Sumukh, who leads the Bengaluru chapter, acts as both a guide and mentor.

Rather than pitch decks and polished PR, what resonated online was the authenticity and dedication. Caleb’s walk-through showed residents too engrossed in their work to acknowledge his arrival.

Viewers responded with admiration, calling it a rare glimpse into ‘the real future of Indian tech’. The video has since crossed 1.4 million views, sparking global curiosity.

At the heart of the movement is Localhost, founded by Kei Hayashi, which helps young developers build fast and learn faster.

As demand grows for similar hacker houses in Mumbai, Delhi, and Hyderabad, the initiative may start a new chapter for India’s startup ecosystem — fuelled by focus, snacks, and a poster of Steve Jobs.
