Helsing in talks for $500 million funding, poised to become Europe’s top AI defence startup

European defence technology startup Helsing is in negotiations to raise nearly $500 million from Silicon Valley investors, including Accel and Lightspeed Venture Partners, at a valuation of about $4.5 billion. That figure would roughly triple the company's value in less than a year, a jump possibly driven by heightened global conflict, which is prompting a surge in private investment in the military supply sector.

Founded in 2021, Helsing specialises in AI-based defence software, using AI to analyse large volumes of data from sensors and weapons systems and provide real-time battlefield intelligence for military decision-making. The company's software is also helping advance AI capabilities for drones in Ukraine.

Sources familiar with the negotiations said Accel and Lightspeed would be new investors in Helsing, potentially joined by General Catalyst, an existing backer. If finalised, the deal would make Helsing one of Europe's most valuable AI startups, on par with Paris-based Mistral, which recently raised €600 million at a valuation approaching €6 billion. Venture investors' long-standing reluctance to back defence tech firms has shifted markedly in both the US and Europe, driven by escalating tensions between major powers and the war in Ukraine, which have pushed nations to increase defence spending.

NATO's recent commitment of its €1 billion 'innovation fund' to European tech firms points to a notable shift, with Europe rapidly closing the investment gap with the US in defence and dual-use technologies. The war in Ukraine shows how modern warfare is moving from traditional hardware towards software-defined technologies that let military forces enhance their strategic capabilities.

Why does it matter?

Helsing has forged partnerships with established defence contractors in Europe, such as Germany’s Rheinmetall and Sweden’s Saab, to integrate AI into existing platforms like fighter jets. Collaborating with Airbus, the startup is also developing AI technologies for application in both manned and unmanned systems.

US Department of Justice charges Russian hacker in cyberattack plot against Ukraine

The US Department of Justice has charged a Russian national with conspiring to sabotage Ukrainian government computer systems as part of a broader hacking scheme orchestrated by Russia ahead of its unlawful invasion of Ukraine.

US prosecutors in Maryland said that Amin Stigal, 22, is accused of helping set up servers that Russian state-backed hackers used to launch destructive cyberattacks on Ukrainian government ministries in January 2022, a month before the Kremlin's invasion of Ukraine.

The campaign, dubbed 'WhisperGate', used wiper malware disguised as ransomware to deliberately and irreversibly corrupt data on infected devices. Prosecutors said the attacks were designed to sow fear across Ukrainian civil society about the security of the government's systems.

The indictment alleges that the hackers stole large volumes of data during the intrusions, including citizens' health records, criminal histories, and motor insurance information from Ukrainian government databases, and later advertised the stolen data for sale on prominent cybercrime forums.

Stigal is also charged with assisting hackers from Russia's military intelligence agency, the GRU, in targeting Ukraine's allies, including the United States. Prosecutors noted that the hackers repeatedly targeted an unnamed US government agency in Maryland between 2021 and 2022, before the invasion, giving prosecutors in that district jurisdiction to bring charges against Stigal.

In October 2022, the servers Stigal had allegedly arranged were reportedly used by the same hackers to target the transportation sector of an unnamed central European country that had provided civilian and military aid to Ukraine after the invasion. The incident coincides with a cyberattack in Denmark during the same period that caused widespread disruptions and delays across the country's railway network.

The US government has announced a $10 million reward for information leading to Stigal's capture; he remains at large and is believed to be in Russia. If convicted, Stigal faces a maximum sentence of five years in prison.

Privacy concerns intensify as Big Tech announces new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI's ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The aim is a computing interface that continuously learns from user interactions to provide proactive assistance. While the potential benefits of AI integration are substantial, greater reliance on cloud computing brings inherent security risks: as AI algorithms demand more computational power, sensitive personal data may need to be transmitted to external servers for analysis, and that transfer introduces vulnerabilities that could expose user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial. Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with a unique data privacy and security approach. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising enhanced functionalities like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to enhance its conversational abilities and provide it with access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure data transmitted to servers. Apple’s commitment to protecting user data differentiates it from other companies that retain data on their servers. However, concerns have been raised about the lack of transparency regarding Siri requests sent to Apple’s servers. Security researcher Matthew Green argued that there are inherent security risks to any data leaving a user’s device for processing in the cloud.

Microsoft has introduced AI-powered features in its new line of Windows computers, Copilot+ PCs, which it says protect data privacy and security through a new chip and other technologies. The Recall feature lets users quickly retrieve documents and files by typing casual phrases, with the computer taking screenshots every five seconds and analysing them directly on the PC. While Recall offers powerful functionality, security researchers caution about the potential risks if that data is hacked.

Google has also unveiled a suite of AI services, including a scam detector for phone calls and an 'Ask Photos' feature. The scam detector runs on the phone itself, without Google listening to calls, which enhances user security. However, concerns have been raised about the transparency of Google's approach to AI privacy, particularly regarding the storage and potential use of personal data to improve its services.

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related incidents across the country, modelled on the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI 'safety incidents', documented in a database maintained by the Organisation for Economic Co-operation and Development (OECD). The incidents range from physical harm to economic, reputational, and psychological damage; examples include a deepfake of Labour leader Keir Starmer and Google's Gemini model producing historically inaccurate depictions of World War II soldiers. The report's author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing from AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect incident reports from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner's Office. The think tank also calls on UK regulators to identify gaps in AI incident reporting and to build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.
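To make the proposal concrete, the sketch below shows what a single record in such a pilot incident database might look like. The CLTR report does not prescribe a schema, so every field name here is an illustrative assumption, loosely informed by the harm categories the OECD database tracks.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AIIncidentReport:
    """One record in a hypothetical pilot AI incident database.

    All field names are illustrative assumptions; the CLTR report
    does not specify a schema.
    """
    incident_id: str          # unique reference, e.g. "AI-2024-0042"
    reported_on: date         # date the incident was logged
    reporting_body: str       # e.g. "ICO" or "Air Accidents Investigation Branch"
    system_description: str   # the AI system or model involved
    harm_types: List[str] = field(default_factory=list)  # physical, economic, reputational, psychological
    summary: str = ""         # free-text account of what happened
    severity: str = "unassessed"  # triage label for follow-up by a body such as DSIT

# Example record, modelled on the incident types the OECD database documents
report = AIIncidentReport(
    incident_id="AI-2024-0042",
    reported_on=date(2024, 6, 27),
    reporting_body="ICO",
    system_description="Generative image model used in a political deepfake",
    harm_types=["reputational", "psychological"],
    summary="Deepfake audio of a public figure circulated on social media.",
)
```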

Ransomware actors encrypted Indonesia’s national data centre

Hackers have encrypted systems at Indonesia’s national data centre with ransomware, causing disruptions in immigration checks at airports and various public services, according to the country’s communications ministry. The ministry reported that the Temporary National Data Centre (PDNS) systems were infected with Brain Cipher, a new variant of the LockBit 3.0 ransomware.

Communications Minister Budi Arie Setiadi said the hackers had demanded $8 million for decryption but stressed that the government would not pay. The attack hit the Surabaya branch of the national data centre, not the Jakarta site.

The breach risks exposing data from state institutions and local governments. The cyberattack, which began last Thursday, disrupted services such as visa and residence permit processing, passport services, and immigration document management, according to Hinsa Siburian, head of the national cyber agency. The ransomware also impacted online enrollment for schools and universities, prompting an extension of the registration period, as local media reported. Overall, at least 210 local services were disrupted.

Although LockBit ransomware was used, it may have been deployed by a different group, as many actors use the leaked LockBit 3.0 builder, noted SANS Institute instructor Will Thomas. LockBit was a prolific ransomware operation until its extortion site was shut down in February, but it resurfaced three months later. Cybersecurity analyst Dominic Alvieri also pointed out that the Indonesian government has not yet been listed on LockBit's leak site, likely due to the delays typical during negotiations. Indonesia's data centre has been targeted before: in 2023, ThreatSec claimed to have breached its systems and stolen sensitive data, including criminal records.

Central banks urged to embrace AI

The Bank for International Settlements (BIS) has advised central banks to harness the benefits of AI while cautioning against using it to replace human decision-makers. In its first comprehensive report on AI, the BIS highlighted the technology's potential to enhance real-time data monitoring and improve inflation predictions – capabilities that have become critical after the unforeseen inflation surges during the COVID-19 pandemic and the Ukraine crisis. While AI models could help mitigate future risks, their unproven and sometimes inaccurate nature makes them unsuitable as autonomous rate setters, emphasised Cecilia Skingsley of the BIS. Human accountability remains crucial for decisions on borrowing costs, she noted.

The BIS, often termed the central bank for central banks, is already engaged in eight AI-focused projects to explore the technology’s potential. Hyun Song Shin, the BIS’s head of research, stressed that AI should not be seen as a ‘magical’ solution but acknowledged its value in detecting financial system vulnerabilities. However, he also warned of the risks associated with AI, such as new cyber threats and the possibility of exacerbating financial crises if mismanaged.

The widespread adoption of AI could significantly impact labour markets, productivity, and economic growth, with firms potentially adjusting prices more swiftly in response to economic changes, thereby influencing inflation. The BIS has called for the creation of a collaborative community of central banks to share experiences, best practices, and data to navigate the complexities and opportunities presented by AI. That collaboration aims to ensure AI’s integration into financial systems is both effective and secure, promoting resilient and responsive economic governance.

In conclusion, the BIS’s advisory underscores the importance of balancing AI’s promising capabilities with the necessity for human intervention in central banking operations. By fostering an environment for shared knowledge and collaboration among central banks, the BIS seeks to maximise AI benefits while mitigating inherent risks, thereby supporting more robust economic management in the face of technological advancements.

Millions of Americans impacted by debt collector data breach

A massive data breach at Financial Business and Consumer Solutions (FBCS), a debt collection agency, has affected millions of Americans. First reported in February 2024, the breach initially appeared to have exposed the personal information of around 1.9 million people in the US; by June, that estimate had risen to 3 million. Compromised data includes full names, Social Security numbers, dates of birth, and driver's license or ID card numbers. FBCS has notified the affected individuals and the relevant authorities.

The breach occurred on 14 February but was not discovered by FBCS until 26 February. The company notified the public in late April, explaining that the delay was due to its internal investigation rather than any law enforcement directive. The exposed information varies by individual and can include names, addresses, Social Security numbers, and medical records; not all affected individuals had every type of data exposed.

In response, FBCS has strengthened its security measures and built a new, secure environment. It is also offering those affected 24 months of free credit monitoring and identity restoration services, and advises them to be cautious about sharing personal information and to monitor their bank accounts for suspicious activity to guard against phishing and identity theft.

US scrutinises China Mobile, China Telecom, and China Unicom

The Biden administration is scrutinising China Mobile, China Telecom, and China Unicom over concerns that these firms could misuse their access to American data through their US cloud and internet businesses. The Commerce Department is leading the investigation, subpoenaing the state-backed companies and conducting risk analyses on China Mobile and China Telecom. These companies maintain a small US presence, providing services like cloud computing and routing internet traffic, giving them potential access to sensitive data.

The investigation aims to prevent these Chinese firms from exploiting their US presence to aid Beijing, aligning with Washington’s broader strategy to counteract potential threats to national security from Chinese technology companies. The US has previously barred these companies from providing telephone and broadband services. Authorities could block transactions that allow these firms to operate in data centres and manage internet traffic, potentially crippling their remaining US operations.

China’s embassy in Washington has criticised these actions, urging the US to cease suppressing Chinese companies. No evidence has been found that these firms intentionally provided US data to the Chinese government. However, concerns persist about their capabilities to access and potentially misuse data, primarily through Points of Presence (PoPs) and data centres in the US, which could pose significant security risks.

Google enhances Gmail with new AI features

Google is enhancing Gmail with new AI features designed to streamline email management. A new Gemini side panel is being introduced for the web, which is capable of summarising email threads and drafting new emails. Users will receive proactive prompts and can ask freeform questions, utilising Google’s advanced models like Gemini 1.5 Pro. The mobile Gmail app will also feature Gemini’s ability to summarise threads.
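For readers curious how such thread summarisation works in practice, here is a minimal sketch using Google's public Gemini API via the google-generativeai Python package. This is an illustrative approximation, not Gmail's actual implementation; the thread text, prompt wording, and API key placeholder are all assumptions.

```python
# Minimal sketch of email-thread summarisation with the public Gemini API.
# This approximates the Gmail feature; it is not Gmail's implementation.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a Gemini API key

# Illustrative stand-in for an email thread pulled from a mailbox
thread_text = """
From: alice@example.com - Can we move the launch to Thursday?
From: bob@example.com - Thursday works, but marketing needs the assets by Tuesday.
From: alice@example.com - Agreed. I'll confirm with the design team.
"""

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Summarise this email thread in two sentences, noting any action items:\n"
    + thread_text
)
print(response.text)
```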

However, these upgrades will only be available to paid Gemini users. To access the features, one must be a Google Workspace customer with a Gemini Business or Enterprise add-on, a Gemini Education or Education Premium subscriber, or a Google One AI Premium member. Despite their potential usefulness, users should not depend entirely on these AI tools for critical work, as AI can sometimes produce inaccurate information.

In addition to Gmail, Google is incorporating Gemini features into the side panels of Docs, Sheets, Slides, and Drive. The rollout follows Google’s earlier promises at the I/O conference. Further AI enhancements, including ‘Contextual Smart Reply,’ are expected to arrive for Gmail soon.

Cybersecurity measures ramp up for 2024 Olympics

Next month, athletes worldwide will converge on Paris for the eagerly awaited 2024 Summer Olympics. While competitors prepare for their chance to win coveted medals, organisers are focused on defending against cybersecurity threats. Over the past decade, cyberattacks have become more sophisticated due to the misuse of AI. However, the responsible application of AI offers a promising countermeasure.

Sports organisations are increasingly partnering with AI-driven companies like Visual Edge IT, which specialises in risk reduction. Although Visual Edge IT does not work directly with the Olympics, cybersecurity expert Peter Avery shared insights into how Olympic organisers can mitigate risks. Avery emphasised the importance of robust technical, physical, and administrative controls to protect against cyber threats, and highlighted the need for a comprehensive incident response plan and for preparing for potential disruptions, such as internet overload and infrastructure attacks.

The advent of AI has revolutionised both productivity and cybercrime. Avery noted that AI allows cybercriminals to automate attacks, making them more efficient and widespread. He stressed that a solid incident response plan and regular simulation exercises are crucial for managing cyber threats. As Avery pointed out, the question is not if a cyberattack will happen but when.

The International Olympic Committee (IOC) is also embracing AI responsibly within sport: IOC President Thomas Bach has announced a plan to use AI to identify talent, personalise training, and improve the fairness of judging. The Paris Summer Olympics, running from 26 July to 11 August, will be a significant test of these cybersecurity and AI initiatives.