Government entities in Australia to assess foreign control risks in tech

Australia has instructed all government entities to review their technology assets for risks of foreign control or influence. The directive aims to address increasing cyber threats from hostile states and financially motivated attacks. The Australian Signals Directorate (ASD) recently warned of state-sponsored Chinese hacking targeting Australian networks.

The Department of Home Affairs has issued three legally-binding instructions requiring over 1,300 government entities to identify Foreign Ownership, Control or Influence (FOCI) risks in their technology, including hardware, software, and information systems. The organisations in question must report their findings by June 2025.

Additionally, government entities are mandated to audit all internet-facing systems and services, developing specific security risk management plans. They must also engage with the ASD for threat intelligence sharing by the end of the month, ensuring better visibility and enhanced cybersecurity.

The new cybersecurity measures are part of the Protective Security Policy Framework, following Australia’s ban on TikTok from government devices in April 2023 due to security risks. The head of the Australian Security Intelligence Organisation (ASIO) has highlighted the growing espionage and cyber sabotage threats, emphasising the interconnected vulnerabilities in critical infrastructure.

National blockchain ‘Nigerium’ aims to boost Nigeria’s tech security

The Nigerian Government has announced the development of a locally-made blockchain called ‘Nigerium’, designed to secure national data and enhance cybersecurity. The National Information Technology Development Agency (NITDA) is leading this initiative to address concerns about reliance on foreign blockchain technologies, such as Ethereum, which may not align with Nigeria’s interests.

NITDA Director General Kashifu Abdullahi introduced the ‘Nigerium’ project during a visit from the University of Hertfordshire Law School delegation in Abuja. He highlighted the need for a blockchain under Nigeria’s control to maintain data sovereignty and position the country as a leader in the competitive global tech landscape. The project, proposed by the University of Hertfordshire, aims to create a blockchain tailored to Nigeria’s unique requirements and regulatory framework.

The indigenous blockchain offers several advantages, including enhanced security, data control, and economic growth. By managing its own blockchain, Nigeria can safeguard sensitive information, improve cyber defence capabilities, and promote trusted transactions within its digital economy. The collaboration between the private and public sectors is crucial for the success of ‘Nigerium’, marking a significant step towards technological autonomy.

If successful, ‘Nigerium’ could place Nigeria at the forefront of blockchain technology in Africa, ensuring a secure and prosperous digital future. This initiative represents a strategic move towards maintaining data sovereignty and fostering innovation, positioning Nigeria to better control its technological destiny.

FTC bans NGL app from minors, issues $5 million fine for cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. However, the case against NGL is a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory push like this one aims to address the growing concerns about the impact of social media on children’s mental health.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

The abovementioned proceedings highlight the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. This move underscores the EU’s commitment to protecting critical infrastructure and sensitive data and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

Microsoft details threat from new AI jailbreaking method

Microsoft has warned about a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmful information by bypassing their behavioural guidelines. Detailed in a report published on 26 June, Microsoft explained that Skeleton Key forces AI models to respond to illicit requests by modifying their behavioural guidelines to provide a warning rather than refusing the request outright. A technique like this, called Explicit: forced instruction-following, can lead models to produce harmful content.

The report highlighted an example where a model was manipulated to provide instructions for making a Molotov cocktail under the guise of an educational context. The prompt allowed the model to deliver the information with only a prefixed warning by instructing the model to update its behaviour. Microsoft tested the Skeleton Key technique between April and May 2024 on various AI models, including Meta LLama3-70b, Google Gemini Pro, and GPT 3.5 and 4.0, finding it effective but noting that attackers need legitimate access to the models.

Microsoft has addressed the issue in its Azure AI-managed models using prompt shields and has shared its findings with other AI providers. The company has also updated its AI offerings, including its Copilot AI assistants, to prevent guardrail bypassing. Furthermore, the latest disclosure underscores the growing problem of generative AI models being exploited for malicious purposes, following similar warnings from other researchers about vulnerabilities in AI models.

Why does it matter?

In April 2024, Anthropic researchers discovered a technique that could force AI models to provide instructions for constructing explosives. Earlier this year, researchers at Brown University found that translating malicious queries into low-resource languages could induce prohibited behaviour in OpenAI’s GPT-4. These findings highlight the ongoing challenges in ensuring the safe and responsible use of advanced AI models.

EU Commission opens €210m fund for cybersecurity and digital skills initiatives

The European Commission has opened the application process to fund cybersecurity and digital skills initiatives, exceeding a €210m ($227.3m) investment under the Digital Europe Programme (DEP). Established in 2021, the DEP aims to contribute to the digital transformation of the EU’s society and economy, with a planned total budget of €7.5bn over seven years. It funds critical strategic areas such as supercomputing, AI, cybersecurity, and advanced digital skills to advance this vision.

In the latest funding cycle, the European Commission will allocate €35m ($37.8m) towards projects safeguarding large industrial installations and critical infrastructures. An additional €35m will be designated for implementing cutting-edge cybersecurity technologies and tools.

Furthermore, €12.8m ($13.8m) will be invested in establishing, reinforcing, and expanding national and cross-border security operation centres (SOCs). The initiative aligns with the proposed EU Cyber Solidarity Act, which aims to establish a European Cybersecurity Alert System to enhance the detection, analysis, and response to cyber threats. The envisioned system will consist of cross-border SOCs using advanced technologies like AI to share threat intelligence with authorities across the EU swiftly.

Moreover, the DEP will allocate €20m to assist member states in complying with the EU cybersecurity laws and national cybersecurity strategies. That includes the updated NIS2 Directive, which mandates strengthening cybersecurity measures in critical sectors and requires it to be transposed into national legislation by October 2024.

Finally, the latest DEP funding round will also allocate €55m ($59.5m) towards advanced digital skills, supporting the design and delivery of higher education programs in key digital technology domains. Additionally, €8m ($8.6m) will be directed towards European Digital Media Observatories (EDMOs) to finance independent regional hubs focused on analysing and combating disinformation in digital media.

IBM and Microsoft expand cybersecurity partnership for enhanced cloud protection

IBM Consulting and Microsoft have expanded their long-standing partnership to help clients modernise their cybersecurity operations and manage hybrid cloud identities. As businesses increasingly adopt hybrid cloud and AI technologies, protecting valuable data has become critical.

IBM Consulting integrates its cybersecurity services with Microsoft’s security technology portfolio to modernise end-to-end security operations. The collaboration aims to provide tools and expertise to protect data through cloud solutions, ultimately driving business growth. Mark Hughes, Global Managing Partner of Cybersecurity Services at IBM Consulting, emphasises that ‘security must be a foundational part of every organisation’s core operations.’

IBM’s Threat Detection and Response (TDR) Cloud Native service combines Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Defender for Cloud with AI-powered security technologies to accelerate threat detection and response. IBM’s global team of security analysts provides 24/7 monitoring and investigation of security alerts across clients’ hybrid cloud environments, maximising the value of Microsoft’s end-to-end security solutions.

AI Innovation Challenge launched to combat cybercrime in the UK

The City of London Corporation, London and Partners and Microsoft have launched an AI Innovation Challenge, where participants will vie to spot and stop cybercriminals using fake identities and audio and visual deepfakes to commit fraud. With the increase of such events and the ubiquity of GenAI models, Nvidia, the multinational AI chip-maker, is increasingly becoming the modern-day Standard Oil. Nvidia’s chips can be found in just about all areas of economic activity, from education to medicine and in nearly all financial and professional services.

With its growing usage, its potential for fighting cybercrime increases, given its ability to analyse vast amounts of data rapidly, decipher patterns, and ultimately lead to higher fraud detection rates and greater trust in and securitise customer services. Banks in the United Kingdom lead the way in AI adoption, particularly as some 90 percent of them have already onboarded generative AI models to their asset portfolios.

Participants of the AI Innovation Challenge have until 26 July 2024 to register for the competition, which is scheduled for six weeks between September and November. The final event promises to be a display of fraud detection and other cybersecurity innovations developed during the course of the competition.

AI-generated Elon Musk hijacks Channel Seven’s YouTube

Channel Seven is currently investigating a significant breach on its YouTube channel, where unauthorised content featuring an AI-generated deepfake version of Elon Musk was streamed repeatedly. The incident on Thursday involved the channel being altered to mimic Tesla’s official presence. Viewers were exposed to a fabricated live stream where the AI-generated Musk promoted cryptocurrency investments via a QR code, claiming a potential doubling of assets.

During the stream, the fake Musk engaged with an audience, urging them to take advantage of the purported investment opportunity. The footage also featured a chat box from the fake Tesla page, displaying comments and links that further promoted the fraudulent scheme. The incident affected several other channels under Channel Seven’s umbrella, including 7 News and Spotlight, with all content subsequently deleted from these platforms.

A spokesperson from Channel Seven acknowledged the issue, confirming they are investigating alongside YouTube to resolve the situation swiftly. The network’s main YouTube page appeared inaccessible following the breach, prompting the investigation into how the security lapse occurred. The incident comes amidst broader challenges for Seven West Media, which recently announced significant job cuts as part of a cost-saving initiative led by its new CEO.

Why does it matter?

The breach underscores growing concerns over cybersecurity on social media platforms, particularly as unauthorised access to high-profile channels can disseminate misleading or harmful information. Channel Seven’s efforts to address the issue highlight the importance of robust digital security measures in safeguarding against such incidents in the future.