Jen Easterly, head of the US Cybersecurity and Infrastructure Security Agency (CISA), has asserted that the real villains in cybercrime are software suppliers who deliver faulty and insecure code. At Mandiant’s mWise conference, she emphasised that technology vendors ship products with built-in flaws, ultimately making it easier for cybercriminals to attack their targets.
Easterly also argued that the term ‘software vulnerabilities’ minimises the issue, advocating for the more direct term ‘product defects’ instead. Rather than blaming victims for not patching software quickly enough, she urged the industry to question why software requires so many urgent updates in the first place, emphasising the need for greater accountability from tech vendors.
Despite a multi-billion-dollar cybersecurity industry, Easterly lamented the ongoing multi-trillion-dollar software quality issue that fuels cybercrime. She compared critical infrastructure’s reliance on flawed software to purchasing a car or boarding a plane without any safety guarantees.
Easterly has consistently pushed for better software quality since taking charge of CISA, stressing that secure code is essential to reducing ransomware and cyberattacks. While acknowledging that perfect code is difficult to achieve, she expressed frustration with the current defect rates and the lack of accountability among developers. At the recent RSA Conference, nearly 70 major companies, including AWS, Microsoft, and Google, signed CISA’s Secure by Design pledge to improve software security practices. This number has now increased to almost 200 vendors, but Easterly noted that adherence to the pledge is still voluntary.
To encourage change, she urged technology buyers to leverage their procurement power by inquiring if software suppliers have signed the pledge and are genuinely committed to building secure products. CISA has released guidance for organisations on assessing software manufacturers’ security priorities during purchasing.
Data from millions of Star Health customers, including sensitive medical information, is being accessed and sold via Telegram chatbots. The breach comes just weeks after Telegram’s founder was criticised for failing to prevent criminal activity on the platform. The hacker responsible claims to possess data from over 31 million customers, with some available for free through the chatbots and bulk data offered for sale.
Star Health, one of India’s largest health insurers, stated that it has reported the breach to local authorities but reassured customers that sensitive data remains secure. Initial assessments revealed no evidence of a widespread compromise, despite reports of leaked documents including medical diagnoses, tax details, and ID copies.
Telegram’s support for chatbots has helped make it one of the most popular messaging apps globally, with over 900 million users. However, security concerns continue to grow, particularly following the recent arrest of its founder in France. While Telegram denies any wrongdoing, it faces mounting pressure over its moderation policies.
The hacker, who operates under the alias xenZen, claimed responsibility for creating the chatbots and for holding 7.24 terabytes of data. UK-based researcher Jason Parker, who discovered the breach, revealed that the stolen data has been accessible since early August, with the chatbots distributing small samples for free.
A recent report from the US Federal Trade Commission (FTC) has criticised social media platforms for lacking transparency in how they manage user data. Companies such as Meta, TikTok, and Twitch have been highlighted for inadequate data retention policies, raising significant privacy concerns.
Social platforms collect large amounts of data using tracking technologies and by purchasing information from data brokers, often without users’ knowledge. Much of this data fuels the development of AI, with little control given to users. Data privacy for teenagers remains a pressing issue, leading to recent legislative moves in Congress.
Some companies, including X (formerly Twitter), responded by saying that they have improved their data practices since 2020. Others declined to comment. Advertising industry groups defended data collection, claiming it supports free access to online services.
FTC officials are concerned about the risks posed to individuals, especially those not even using the platforms, due to widespread data collection. Inadequate data management by social platforms may expose users to privacy breaches and identity theft.
Disney is phasing out its use of Slack for workplace collaboration after a significant data breach. A hacking group, NullBulge, leaked over a terabyte of Disney’s internal data, affecting thousands of Slack channels, according to reports. This breach included sensitive information like computer code and unreleased projects.
Disney’s Chief Financial Officer, Hugh Johnston, confirmed most departments will stop using Slack by the end of the year. Several teams have already begun transitioning to alternative tools for enterprise-wide collaboration, aiming to improve security and workflow.
The incident, reported in July by the Wall Street Journal, involved over 44 million messages from Slack channels. The company launched an investigation into the unauthorised release of data in August.
NullBulge, known for targeting software supply chains, exploits coding platforms like GitHub and Hugging Face to deceive users into downloading malicious files. Neither Disney nor Slack provided immediate responses to requests for comment.
LinkedIn has come under scrutiny for using user data to train AI models without updating its privacy terms in advance. While LinkedIn has since revised its terms, United States users were not informed beforehand, a step that would normally give them time to make decisions about their accounts. LinkedIn offers an opt-out feature for data used in generative AI, but this was not initially reflected in its privacy policy.
LinkedIn clarified that its AI models, including content creation tools, use user data. Some models on its platform may also be trained by external providers like Microsoft. LinkedIn assures users that privacy-enhancing techniques, such as redacting personal information, are employed during the process.
The Open Rights Group has criticised LinkedIn for not seeking consent from users before collecting data, calling the opt-out method inadequate for protecting privacy rights. Regulatory bodies, including Ireland’s Data Protection Commission, have been involved in monitoring the situation, especially within regions under GDPR protection, where user data is not used for AI training.
LinkedIn is one of several platforms reusing user-generated content for AI training. Others, like Meta and Stack Overflow, have also begun similar practices, with some users protesting the reuse of their data without explicit consent.
Australia has introduced the Privacy and Other Legislation Amendment Bill 2024, marking a pivotal advancement in addressing privacy concerns within the digital landscape. The landmark legislation establishes stringent penalties for privacy breaches, imposing sentences of up to six years in prison for general offences and up to seven years for doxxing incidents that target protected characteristics.
Furthermore, the bill enhances the enforcement powers of the Australian Information Commissioner, enabling swift action against non-compliance with privacy laws. Restoring the Australian Privacy Commissioner as a standalone position further strengthens the oversight needed to uphold privacy standards nationwide.
In its commitment to modernising privacy laws for the digital age, Australia views the Privacy and Other Legislation Amendment Bill 2024 as the initial phase of a comprehensive strategy to safeguard citizens’ privacy. The government demonstrates its resolve to hold companies and individuals accountable by significantly increasing maximum penalties for serious privacy breaches.
Additionally, recognising the importance of collaboration, the government will continue to engage with key stakeholders—including industry representatives, small businesses, consumer groups, and the media—to ensure that the approach to privacy protection is equitable and beneficial for both individuals and society.
India is set to introduce an umbrella framework for consent management under the Digital Personal Data Protection (DPDP) Act, focusing on broad guidelines rather than specific rules. That approach is designed to provide flexibility for companies while ensuring they adhere to the overarching principles of data protection. Initially, organisations will be required to use government-issued identity cards for age and consent verification. However, they will eventually have the option to develop and implement their own systems tailored to their needs.
Moreover, India is expected to offer certain exemptions to educational institutions, including schools, colleges, and universities, concerning the processing of children’s data and the obtaining of parental consent. That measure aims to alleviate the compliance burden on educational entities. In contrast, edtech companies will not benefit from these exemptions and must adhere to the full consent management rules outlined by the DPDP Act.
Furthermore, India is reinforcing its commitment to protecting children’s data by prohibiting behavioural tracking and targeted advertising for users under 18. This provision of the DPDP Act highlights the government’s focus on safeguarding young users from intrusive digital practices. It ensures that their online activities are not subject to targeted marketing strategies.
The UK’s National Cyber Security Centre (NCSC) has collaborated with cybersecurity agencies from the United States, Australia, Canada, and New Zealand to address the global botnet threat. That joint effort underscores the importance of international cooperation in tackling cyber threats that span multiple countries.
By combining their expertise and resources, these agencies have been able to produce a comprehensive advisory that provides detailed information on the botnet’s operation, its impact, and the types of devices it targets. Consequently, this collaboration ensures a robust and unified response to the threat, reflecting the global commitment to enhancing cybersecurity.
Moreover, the advisory issued by these agencies details how the botnet, managed by Integrity Technology Group and used by the cyber actor Flax Typhoon, exploits vulnerabilities in internet-connected devices. It includes technical information on the botnet’s activities, such as malware distribution and Distributed Denial of Service (DDoS) attacks, and offers practical mitigation strategies.
Therefore, it underscores the need for updating and securing devices to prevent them from becoming part of the botnet, providing crucial guidance to individuals and organisations seeking to protect their digital infrastructure. In addition, this international collaboration serves to promote proactive security measures and raise awareness about cybersecurity best practices. The joint advisory encourages users to act immediately to safeguard their devices and avoid contributing to malicious activity.
The National Security Agency (NSA), in conjunction with the Federal Bureau of Investigation (FBI), United States Cyber Command’s Cyber National Mission Force (CNMF), and international allies, has issued a critical cybersecurity advisory. Titled ‘People’s Republic of China-Linked Actors Compromise Routers and IoT Devices for Botnet Operations,’ the advisory reveals the extensive activities of cyber actors affiliated with the People’s Republic of China (PRC).
These actors have breached internet-connected devices worldwide, establishing a massive botnet. To address this threat, the NSA has outlined several key mitigations aimed at helping device vendors, owners, and operators secure their devices and networks. These recommendations include regularly applying patches and updates, turning off unused services and ports, replacing default passwords with strong alternatives, and implementing network segmentation to reduce IoT device risks.
Furthermore, the advisory suggests monitoring network traffic for signs of DDoS attacks, planning device reboots to eliminate non-persistent malware, and upgrading outdated equipment with supported models. Moreover, NSA Cybersecurity Director Dave Luber has emphasised the importance of the advisory, noting that it provides crucial and timely insights into the botnet’s infrastructure, the geographical distribution of the compromised devices, and effective mitigation strategies.
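The device-level mitigations listed above (replacing default passwords, applying patches, disabling unused services and ports) lend themselves to simple automated checks. The sketch below is a hypothetical inventory audit, not part of the advisory itself: the device records, field names, and default-credential list are illustrative assumptions.

```python
# Hypothetical fleet audit inspired by the advisory's mitigations.
# All device records and field names here are illustrative assumptions.
DEFAULT_CREDENTIALS = {"admin", "password", "1234"}  # example weak defaults

def audit_device(device: dict) -> list[str]:
    """Return a list of findings for one device record."""
    findings = []
    # Mitigation: replace default passwords with strong alternatives.
    if device.get("password") in DEFAULT_CREDENTIALS:
        findings.append("default password in use")
    # Mitigation: regularly apply patches and updates.
    if not device.get("patched", False):
        findings.append("firmware patches missing")
    # Mitigation: turn off unused services and ports.
    for port in device.get("open_ports", []):
        if port not in device.get("required_ports", []):
            findings.append(f"unused service listening on port {port}")
    return findings

fleet = [
    {"name": "router-01", "password": "admin", "patched": False,
     "open_ports": [22, 23, 80], "required_ports": [22, 80]},
    {"name": "camera-02", "password": "S3cure!longpass", "patched": True,
     "open_ports": [443], "required_ports": [443]},
]

for device in fleet:
    for finding in audit_device(device):
        print(f"{device['name']}: {finding}")
```

A real deployment would pull the inventory from a configuration-management database and feed findings into a ticketing system rather than printing them.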
According to the advisory, the botnet encompasses thousands of devices across various sectors, with over 260,000 devices compromised in North America, Europe, Africa, and Southeast Asia as of June 2024. Consequently, this extensive network of affected devices highlights the urgent need for enhanced security measures to protect against such pervasive cyber threats.
Microsoft researchers have uncovered a Russian disinformation operation that falsely accused United States Democratic presidential candidate Kamala Harris of leaving a 13-year-old girl paralysed in a hit-and-run incident in 2011. The operation, led by a Kremlin-linked group called Storm-1516, used actors and fabricated news outlets, including a fake site called ‘KBSF-TV’, to spread the baseless claim. The hoax was widely shared on social media, gaining millions of views.
The disinformation effort is part of a broader Russian campaign to interfere with the upcoming US presidential election. After initial difficulties shifting focus following President Biden’s withdrawal from the 2024 race, Russian actors have targeted Harris and her running mate, Tim Walz, with fabricated conspiracy theories. The false claim against Harris was amplified on social media by pro-Russian figures, including Aussie Cossack, who encouraged MAGA supporters to spread the misinformation.
Microsoft’s investigation highlights how Storm-1516 produces misleading videos featuring actors impersonating journalists or whistleblowers. The hit-and-run story gained traction online, particularly on X.com, where it was shared by key figures within the pro-Russian ecosystem. The US Justice Department has also recently charged two Russian state media employees with money laundering, linked to efforts to influence the election.
US officials believe Russia’s goal is to deepen political divisions within the country and undermine public support for military aid to Ukraine. Kamala Harris has stated her intention to continue supporting Ukraine’s defence against Russia’s invasion if elected.