The White House and the Department of Homeland Security (DHS) have announced an $11 million initiative to explore and enhance the security of open-source software (OSS) used in critical infrastructure sectors such as healthcare, transportation, and energy production. This effort, known as the Open-Source Software Prevalence Initiative (OSSPI), aims to map out the use of open-source software across these vital areas, enabling the federal government and private sector to bolster national cybersecurity.
The initiative was officially announced by the White House, and further details were shared over the weekend at the DEF CON cybersecurity conference by National Cyber Director Harry Coker. A key component of this initiative is the formation of a public-private working group, set to be established later this year, to develop strategies for enhancing the security of OSS. Although specific details about the initiative are not known yet, the White House released a summary report last year containing a dozen recommendations from the cybersecurity community on areas for federal focus in open source security.
The report outlines several ongoing and planned activities, including:
Securing software package repositories
Strengthening collaboration between the federal government and open-source communities
Expanding the use of Software Bill of Materials (SBOMs)
Enhancing the security of the software supply chain
Establishing an ‘Open-Source Program Office’
Implementing vulnerability severity metrics
Boosting educational initiatives
Phasing out legacy software
While the White House has clarified that it does not intend to penalise underfunded open-source developers, Coker has repeatedly stressed that software manufacturers must be held accountable when they prioritise speed over security. Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly echoed these sentiments at the Black Hat cybersecurity conference, advocating for a software liability regime with clear standards of care and safe harbour provisions for vendors who prioritise secure development practices.
Government agencies in Australia must disclose their use of AI within six months under a new policy effective from 1st September. The policy mandates that agencies prepare a transparency statement detailing their AI adoption and usage, which must be publicly accessible. Agencies must also designate a technology executive responsible for ensuring the policy’s implementation.
The transparency statements, updated annually or after significant changes, will include information on compliance, monitoring effectiveness, and measures to protect the public from potential AI-related harm. Although staff training on AI is strongly encouraged, it is not a mandatory requirement under the new policy.
The policy was developed in response to concerns about public trust, recognising that a lack of transparency and accountability in AI use could hinder its adoption. The government in Australia aims to position itself as a model of safe and responsible AI usage by integrating the new policy with existing frameworks and legislation.
Minister for Finance and the APS, Katy Gallagher, emphasised the importance of the policy in guiding agencies to use AI responsibly, ensuring Australians’ confidence in the government’s application of these technologies.
In July 2024, the world witnessed an unprecedented IT outage caused by a faulty software update. The incident affected numerous industries worldwide, grounding planes, disrupting medical appointments, and taking broadcasters off the air. Despite the chaos, the impact on the emerging cyber insurance sector remained limited, with most of the estimated $15 billion in damages left uninsured.
In a Financial Times article, experts argue that the outcome might have been different had the outage lasted longer. Most cyber insurance policies include a waiting period of around eight hours before coverage begins, and the cause of the disruption was easier to rectify than a full-scale cyberattack. Risk retention strategies and policy limits also helped minimise insurers’ liabilities, with payouts expected to cover less than a fifth of the $5.4 billion in losses reported by Fortune 500 companies (excluding Microsoft), according to the insurer Parametrix.
Looking ahead, however, insurers may not always be so fortunate. The cyber insurance market is one of the riskiest in the industry, with limited data available for accurate risk assessment. Still, the recent outage will provide valuable insights. While insurance can only partially address the growing cyber threat, the disruption may drive increased demand for coverage. Even before the incident, risk managers were alarmed by cyber threats, as highlighted by the Allianz Risk Barometer. The widespread disruption caused by the so-called ‘blue screen of death’ has only strengthened this concern.
The UK’s National Cyber Security Centre (NCSC) recently brought together international and UK government partners, as well as industry leaders, to discuss the role of cyber deception in cyber defence. The event, hosted by the NCSC in London, underscored the potential of cyber deception technologies, such as digital tripwires, honeytokens, and honeypots, to enhance national cyber defence strategies. The NCSC aims to establish a comprehensive evidence base on the efficacy of these technologies by promoting their widespread deployment across the country. To achieve this, the NCSC invites public and private sector organisations to share their experiences and outcomes from deploying the following technologies (as defined by the UK NCSC):
Tripwires: Systems, such as honeytokens, designed to detect unauthorised access by disclosing a threat actor’s presence within a network when interacted with.
Honeypots: Systems that allow threat actors to engage with them, providing opportunities to observe and collect data on their tactics, techniques, procedures, capabilities, and infrastructure for threat intelligence purposes.
Breadcrumbs: Digital artifacts strategically placed within a system to lure threat actors into interacting with tripwires or honeypots, aiding in their detection and study.
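To make the honeytoken idea concrete, here is a minimal illustrative sketch, not NCSC tooling: a decoy credential-like string is generated and planted somewhere it has no legitimate use, and any later appearance of it in logs is treated as an alert. All names (`make_honeytoken`, `scan_logs`) are hypothetical.

```python
import secrets
from datetime import datetime, timezone

def make_honeytoken(prefix: str = "HTK") -> str:
    """Generate a unique decoy string to plant in files, configs, or databases.

    The token has no legitimate use, so any later appearance of it in
    logs or network traffic signals that the decoy was accessed.
    """
    return f"{prefix}-{secrets.token_hex(16)}"

def scan_logs(log_lines, tokens):
    """Return an alert for every log line that contains a known honeytoken."""
    alerts = []
    for line in log_lines:
        for token in tokens:
            if token in line:
                alerts.append({
                    "token": token,
                    "line": line,
                    "detected_at": datetime.now(timezone.utc).isoformat(),
                })
    return alerts
```

In practice the token would be planted in a decoy config file or database row, and access logs would be streamed through the scanner; real deployments of the kind the NCSC describes add alert routing and deliberately attractive placement.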
To build a comprehensive evidence base on the effectiveness of these tools, the NCSC announced several objectives for this large-scale deployment:
5,000 instances of both low and high interaction solutions across the UK internet, covering both IPv4 and IPv6.
20,000 instances of low interaction solutions within internal networks.
200,000 low interaction assets deployed within cloud environments.
2,000,000 tokens deployed to bolster detection and intelligence-gathering efforts.
To contribute and participate in this consultation, contact the UK NCSC at thfcd@ncsc.gov.uk.
The Defense Advanced Research Projects Agency (DARPA) announced the finalists for its AI Cyber Challenge (AIxCC) at DEF CON, a competition that rewards teams for training large language models (LLMs) to identify and fix vulnerabilities in open-source code. Big Tech companies like Google, Microsoft, Anthropic, and OpenAI supported participants with AI model credits. The challenge saw about 40 teams submit projects, which were tested on their ability to detect and remediate injected vulnerabilities in open-source coding projects.
Experts say that generative AI can help automate the detection and patching of security flaws in code, and this development can be critical as unsophisticated yet harmful cyberattacks increasingly target critical facilities such as hospitals and water systems. Automating basic cybersecurity practices, such as scanning and fixing code bugs, could significantly reduce these incidents.
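As a toy illustration of what automated code scanning can mean in its simplest form (the AIxCC entrants use far more sophisticated LLM- and fuzzing-based approaches), the sketch below walks a Python file's syntax tree and flags a few risky-looking calls. The call list and function names are purely illustrative.

```python
import ast

# Calls that commonly indicate risky patterns in Python code; this short
# list is purely illustrative, not an exhaustive vulnerability catalogue.
RISKY_CALLS = {"eval", "exec", "os.system"}

def _call_name(func) -> str:
    """Render a call target like `eval` or `os.system` as dotted text."""
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute):
        return f"{_call_name(func.value)}.{func.attr}"
    return ""

def find_risky_calls(source: str):
    """Return (line_number, call_name) pairs for risky-looking calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = _call_name(node.func)
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings
```

Scanners of this kind only surface candidate flaws; the patching half of the problem, which the AIxCC teams tackle with LLMs, remains much harder to automate.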
Running in a controlled, sandboxed environment, the semifinalists’ LLM projects discovered 22 unique vulnerabilities and automatically patched 15 of them. DARPA, which has invested over $2 billion in AI research since 2018, plays a unique role in cybersecurity innovation: it created a mock city under cyberattack within DEF CON, attracting over 12,500 visitors. The seven finalist teams will compete in the challenge’s final round at next year’s DEF CON conference, with government officials hoping these AI tools will soon be applied to protect real-life critical infrastructure.
Anne Neuberger, the Biden administration’s deputy national security advisor for cyber and emerging technology, emphasised the goal of using AI for defense as swiftly as adversaries use it for offense. The White House is already collaborating with the Department of Energy to explore deploying these AI tools within the energy sector and hopes to eventually apply them to proprietary company code.
The United Kingdom and France are set to initiate a consultation on addressing the proliferation and irresponsible use of commercial cyber intrusion tools, according to a UK government announcement.
The consultation is part of the Pall Mall Process, a joint UK-French effort focused on addressing the misuse of commercial hacking tools like spyware. The Pall Mall Process was announced last year when the UK and France, alongside major tech companies like Google, Microsoft, and Meta, issued a joint statement acknowledging the urgent need for decisive action against the malicious exploitation of cyberespionage tools. At a conference convened by the UK and France with representatives from 35 nations, concerns were raised regarding the proliferation of spyware used to listen to phone calls, steal photos and remotely operate cameras and microphones.
The launch of this process followed an executive order from President Joe Biden prohibiting federal agencies from using commercial spyware that might threaten US security or that had been exploited by foreign entities. The executive order aimed to tackle the increasing instances of spyware abuse internationally, as well as reports of its improper use against US officials, government infrastructure, and ordinary citizens. In 2021, the Biden administration had also taken steps against spyware vendor NSO Group, founded by two former Israeli military officers, by adding the company to its Entity List.
As part of this consultation, both governments invite stakeholders to provide insights on best practices concerning commercial cyber intrusion capabilities (CCICs) across three key groups:
States: Acting as both regulators and potential consumers within the CCIC market.
Industry organisations: Engaged in or connected to the CCIC market, along with their broader value chain.
Civil society, experts, and threat researchers: Possessing relevant expertise on the risks posed by the CCIC market and the strategies to address them.
Previously, experts had already raised concerns about the Pall Mall Process and its goals, highlighting questions such as whether the initiative will be geographically diverse and include a broad range of countries. Will stakeholders be involved, and will companies providing some of the intrusive tools, in particular, be invited for discussions? What does success look like for this process, and for whom?
To participate in this consultation, please follow this link.
An international operation has dismantled the criminal ransomware group Radar/Dispossessor, which had been targeting companies across various sectors, including healthcare and transport. Authorities from the United States and Germany led the effort to bring down the group, which was founded in August 2023 and initially focused on the US before expanding its attacks globally.
The investigation has identified 43 companies as victims, spanning countries such as the UK, Germany, Brazil, and Australia. The group, led by an individual using the alias ‘Brain’, primarily targeted small to medium-sized enterprises. Many more companies are believed to have been affected, with some cases still under investigation.
Radar/Dispossessor exploited vulnerable computer systems, often through weak passwords and the absence of two-factor authentication, to hold data for ransom. Authorities successfully dismantled servers and domains associated with the group in Germany, the US, and Britain.
Twelve suspects have been identified, hailing from various countries, including Germany, Russia, Ukraine, and Kenya. Investigations are ongoing to identify further suspects and uncover more companies that may have been victimised.
On 9 August, hackers from across the globe convened in Las Vegas to probe a new online voting platform, testing for digital vulnerabilities that could affect future election systems. The Secure Internet Voting (SIV) platform, operated by a US firm, allows voting via phones or computers and is currently used in small pilot programmes in the US. However, its broader adoption faces challenges arising from security concerns, leading most states to favour the traditional method of auditable paper ballots.
SIV founder David Ernst, noting the general pessimism about the security of internet voting, stated: ‘We believe that there are modern tools and technologies that allow you to make it hyper-secure, with a higher level of security than you can currently achieve with paper’. He highlighted how SIV was successfully used in 2023 to select a party primary candidate, when Republican Celeste Maloy was chosen through SIV and subsequently won Utah’s 2nd congressional district seat in November.
Americans are greatly concerned about voting security, fearing potential foreign cyberattacks on the upcoming elections. National security officials have already found Russia and Iran engaging in online influence campaigns in the current election cycle. In previous cycles, Russian hackers targeted election offices and voting machine companies, making the resilience of election infrastructure a paramount concern.
Concerns are mounting over content shared by the Palestinian militant group Hamas on X, the social media platform owned by Elon Musk. The Global Internet Forum to Counter Terrorism (GIFCT), which includes major companies like Facebook, Microsoft, and YouTube, is reportedly worried about X’s continued membership and position on its board, fearing it undermines the group’s credibility.
The Sunday Times reported that X has become the easiest platform on which to find Hamas propaganda videos, along with content from other UK-proscribed terrorist groups such as Hezbollah and Palestinian Islamic Jihad. Researchers were able to locate such videos on X within minutes.
Why does it matter?
These concerns come as X faces criticism for reducing its content moderation capabilities. The GIFCT’s independent advisory committee expressed alarm in its 2023 report, citing significant reductions in online trust and safety measures on specific platforms, implicitly pointing to X.
Elon Musk’s approach to turning X into a ‘free speech’ platform has included reinstating previously banned extremists, allowing paid verification, and cutting much of the moderation team. The shift has raised fears about X’s ability to manage extremist content effectively. Despite being a founding member of GIFCT, X has yet to meet its financial obligations to the group.
Additionally, the criticism Musk faced in Great Britain points to a complex and still unresolved governance question: should freedom of speech be protected above all, or should Big Tech social media owners face greater scrutiny in the name of community safety?
The Great Britain scenario
Elon Musk faced criticism for his social media posts, which many believe have fuelled ongoing riots in Britain. Musk shared riot footage and made controversial statements, including predicting a civil war and criticising Prime Minister Keir Starmer for focusing on speech policing instead of community safety. The unrest was sparked by false online claims that a Taylor Swift-themed dance class stabbing involved an illegal Muslim immigrant, though the suspect is actually a 17-year-old from Cardiff with Rwandan Christian roots. The misinformation allegedly led to anti-immigrant protests and civil disorder across Britain, with violence targeting mosques, asylum seeker housing, and police.

Prime Minister Starmer condemned social media companies, particularly X, for enabling the spread of violent disinformation. Government officials, including Technology Secretary Peter Kyle and Home Secretary Yvette Cooper, have vowed action against tech platforms, though Britain’s Online Safety Act won’t be fully effective until next year. Meanwhile, the EU’s Digital Services Act, which no longer applies to Britain, is already in effect.
Delta Air Lines is pursuing legal action against CrowdStrike and Microsoft following a significant outage last month that resulted in widespread flight cancellations. The disruptions affected 1.3 million customers and cost the airline at least $500 million. Delta’s CEO, Ed Bastian, criticised the two companies, holding them accountable for the operational failures.
The trouble began with a software update from cybersecurity firm CrowdStrike, which caused issues for Microsoft customers, including several airlines. Although most carriers recovered the next day, Delta continued to experience problems, leading to the cancellation of about 7,000 flights over five days. The US Transportation Department is now investigating the incident.
CrowdStrike and Microsoft have both denied responsibility for the outages. CrowdStrike threatened to defend itself aggressively if Delta proceeds with legal action, while Microsoft suggested that Delta’s outdated IT infrastructure may have contributed to the prolonged disruptions. Delta’s legal representative, David Boies, dismissed these claims, asserting that the airline had invested heavily in its technology and was not at fault.
The financial impact on Delta has been severe. The airline expects to lose $380 million in direct revenue due to refunds and compensation for the cancelled flights. Additional costs, including customer reimbursements and crew-related expenses, amount to $170 million. However, the reduced flight schedule is anticipated to lower Delta’s fuel bill by $50 million.