FTC bans NGL app from minors, issues $5 million fine for cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. However, the case against NGL is a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory push like this one aims to address the growing concerns about the impact of social media on children’s mental health.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

The abovementioned proceedings highlight the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. This move underscores the EU’s commitment to protecting critical infrastructure and sensitive data and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

Microsoft details threat from new AI jailbreaking method

Microsoft has warned about a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmful information by bypassing their behavioural guidelines. Detailed in a report published on 26 June, Microsoft explained that Skeleton Key forces AI models to respond to illicit requests by modifying their behavioural guidelines to provide a warning rather than refusing the request outright. A technique like this, called Explicit: forced instruction-following, can lead models to produce harmful content.

The report highlighted an example where a model was manipulated to provide instructions for making a Molotov cocktail under the guise of an educational context. The prompt allowed the model to deliver the information with only a prefixed warning by instructing the model to update its behaviour. Microsoft tested the Skeleton Key technique between April and May 2024 on various AI models, including Meta LLama3-70b, Google Gemini Pro, and GPT 3.5 and 4.0, finding it effective but noting that attackers need legitimate access to the models.

Microsoft has addressed the issue in its Azure AI-managed models using prompt shields and has shared its findings with other AI providers. The company has also updated its AI offerings, including its Copilot AI assistants, to prevent guardrail bypassing. Furthermore, the latest disclosure underscores the growing problem of generative AI models being exploited for malicious purposes, following similar warnings from other researchers about vulnerabilities in AI models.

Why does it matter?

In April 2024, Anthropic researchers discovered a technique that could force AI models to provide instructions for constructing explosives. Earlier this year, researchers at Brown University found that translating malicious queries into low-resource languages could induce prohibited behaviour in OpenAI’s GPT-4. These findings highlight the ongoing challenges in ensuring the safe and responsible use of advanced AI models.

EU Commission opens €210m fund for cybersecurity and digital skills initiatives

The European Commission has opened the application process to fund cybersecurity and digital skills initiatives, exceeding a €210m ($227.3m) investment under the Digital Europe Programme (DEP). Established in 2021, the DEP aims to contribute to the digital transformation of the EU’s society and economy, with a planned total budget of €7.5bn over seven years. It funds critical strategic areas such as supercomputing, AI, cybersecurity, and advanced digital skills to advance this vision.

In the latest funding cycle, the European Commission will allocate €35m ($37.8m) towards projects safeguarding large industrial installations and critical infrastructures. An additional €35m will be designated for implementing cutting-edge cybersecurity technologies and tools.

Furthermore, €12.8m ($13.8m) will be invested in establishing, reinforcing, and expanding national and cross-border security operation centres (SOCs). The initiative aligns with the proposed EU Cyber Solidarity Act, which aims to establish a European Cybersecurity Alert System to enhance the detection, analysis, and response to cyber threats. The envisioned system will consist of cross-border SOCs using advanced technologies like AI to share threat intelligence with authorities across the EU swiftly.

Moreover, the DEP will allocate €20m to assist member states in complying with the EU cybersecurity laws and national cybersecurity strategies. That includes the updated NIS2 Directive, which mandates strengthening cybersecurity measures in critical sectors and requires it to be transposed into national legislation by October 2024.

Finally, the latest DEP funding round will also allocate €55m ($59.5m) towards advanced digital skills, supporting the design and delivery of higher education programs in key digital technology domains. Additionally, €8m ($8.6m) will be directed towards European Digital Media Observatories (EDMOs) to finance independent regional hubs focused on analysing and combating disinformation in digital media.

IBM and Microsoft expand cybersecurity partnership for enhanced cloud protection

IBM Consulting and Microsoft have expanded their long-standing partnership to help clients modernise their cybersecurity operations and manage hybrid cloud identities. As businesses increasingly adopt hybrid cloud and AI technologies, protecting valuable data has become critical.

IBM Consulting integrates its cybersecurity services with Microsoft’s security technology portfolio to modernise end-to-end security operations. The collaboration aims to provide tools and expertise to protect data through cloud solutions, ultimately driving business growth. Mark Hughes, Global Managing Partner of Cybersecurity Services at IBM Consulting, emphasises that ‘security must be a foundational part of every organisation’s core operations.’

IBM’s Threat Detection and Response (TDR) Cloud Native service combines Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Defender for Cloud with AI-powered security technologies to accelerate threat detection and response. IBM’s global team of security analysts provides 24/7 monitoring and investigation of security alerts across clients’ hybrid cloud environments, maximising the value of Microsoft’s end-to-end security solutions.

AI Innovation Challenge launched to combat cybercrime in the UK

The City of London Corporation, London and Partners and Microsoft have launched an AI Innovation Challenge, where participants will vie to spot and stop cybercriminals using fake identities and audio and visual deepfakes to commit fraud. With the increase of such events and the ubiquity of GenAI models, Nvidia, the multinational AI chip-maker, is increasingly becoming the modern-day Standard Oil. Nvidia’s chips can be found in just about all areas of economic activity, from education to medicine and in nearly all financial and professional services.

With its growing usage, its potential for fighting cybercrime increases, given its ability to analyse vast amounts of data rapidly, decipher patterns, and ultimately lead to higher fraud detection rates and greater trust in and securitise customer services. Banks in the United Kingdom lead the way in AI adoption, particularly as some 90 percent of them have already onboarded generative AI models to their asset portfolios.

Participants of the AI Innovation Challenge have until 26 July 2024 to register for the competition, which is scheduled for six weeks between September and November. The final event promises to be a display of fraud detection and other cybersecurity innovations developed during the course of the competition.

AI-generated Elon Musk hijacks Channel Seven’s YouTube

Channel Seven is currently investigating a significant breach on its YouTube channel, where unauthorised content featuring an AI-generated deepfake version of Elon Musk was streamed repeatedly. The incident on Thursday involved the channel being altered to mimic Tesla’s official presence. Viewers were exposed to a fabricated live stream where the AI-generated Musk promoted cryptocurrency investments via a QR code, claiming a potential doubling of assets.

During the stream, the fake Musk engaged with an audience, urging them to take advantage of the purported investment opportunity. The footage also featured a chat box from the fake Tesla page, displaying comments and links that further promoted the fraudulent scheme. The incident affected several other channels under Channel Seven’s umbrella, including 7 News and Spotlight, with all content subsequently deleted from these platforms.

A spokesperson from Channel Seven acknowledged the issue, confirming they are investigating alongside YouTube to resolve the situation swiftly. The network’s main YouTube page appeared inaccessible following the breach, prompting the investigation into how the security lapse occurred. The incident comes amidst broader challenges for Seven West Media, which recently announced significant job cuts as part of a cost-saving initiative led by its new CEO.

Why does it matter?

The breach underscores growing concerns over cybersecurity on social media platforms, particularly as unauthorised access to high-profile channels can disseminate misleading or harmful information. Channel Seven’s efforts to address the issue highlight the importance of robust digital security measures in safeguarding against such incidents in the future.

BlackBerry surpasses revenue expectations, driven by cybersecurity demand

BlackBerry surpassed expectations for Q1 revenue by reporting $144 million, exceeding the estimated $134.1 million by analysts. The Canadian firm credits this achievement to a strong demand for cybersecurity services in response to rising online threats.

Looking ahead to Q2, BlackBerry forecasts revenue between $136 million and $144 million, with its cybersecurity division expected to contribute $82 million to $86 million. Furthermore, BlackBerry’s collaboration with AMD to develop robotic systems for industrial and healthcare applications indicates its diversification beyond cybersecurity.

Why does it matter?

Recent significant data breaches in sectors like automotive and healthcare have intensified the need for enhanced cybersecurity measures, benefiting companies like BlackBerry. Despite a general slowdown in tech spending, these security concerns are prompting organisations and governments to strengthen their defences, thereby boosting BlackBerry’s performance.

Central banks urged to embrace AI

The Bank for International Settlements (BIS) has advised central banks to harness the benefits of AI while cautioning against its use in replacing human decision-makers. In its first comprehensive report on AI, the BIS highlighted the technology’s potential to enhance real-time data monitoring and improve inflation predictions – capabilities that have become critical following the unforeseen inflation surges during the COVID-19 pandemic and the Ukraine crisis. While AI models could mitigate future risks, their unproven and sometimes inaccurate nature makes them unsuitable as autonomous rate setters, emphasised Cecilia Skingsley of the BIS. Human accountability remains crucial for decisions on borrowing costs, she noted.

The BIS, often termed the central bank for central banks, is already engaged in eight AI-focused projects to explore the technology’s potential. Hyun Song Shin, the BIS’s head of research, stressed that AI should not be seen as a ‘magical’ solution but acknowledged its value in detecting financial system vulnerabilities. However, he also warned of the risks associated with AI, such as new cyber threats and the possibility of exacerbating financial crises if mismanaged.

The widespread adoption of AI could significantly impact labour markets, productivity, and economic growth, with firms potentially adjusting prices more swiftly in response to economic changes, thereby influencing inflation. The BIS has called for the creation of a collaborative community of central banks to share experiences, best practices, and data to navigate the complexities and opportunities presented by AI. That collaboration aims to ensure AI’s integration into financial systems is both effective and secure, promoting resilient and responsive economic governance.

In conclusion, the BIS’s advisory underscores the importance of balancing AI’s promising capabilities with the necessity for human intervention in central banking operations. By fostering an environment for shared knowledge and collaboration among central banks, the BIS seeks to maximise AI benefits while mitigating inherent risks, thereby supporting more robust economic management in the face of technological advancements.

EU cybersecurity exercise organised to test energy sector’s cyber resilience

The 7th edition of Cyber Europe, organised by the European Union Agency for Cybersecurity (ENISA), tested the resilience of the EU energy sector, highlighting cybersecurity as an increasing threat to critical infrastructure. In 2023, over 200 cyber incidents targeted the energy sector, with more than half aimed specifically at Europe, underscoring the sector’s vulnerability due to its crucial role in the European economy.

Juhan Lepassaar, Executive Director of ENISA, highlighted the exercise’s role in enhancing preparedness and response capacities to protect critical infrastructure, essential for the single market’s stability.

According to ENISA’s Network and Information Security (NIS) Investments report, 32% of energy sector operators lack Security Operations Center (SOC) monitoring for critical Operation Technology (OT) processes, while 52% integrate OT and Information Technology (IT) under a single SOC.

This year’s Cyber Europe exercise focused on a scenario involving cyber threats to EU energy infrastructure amidst geopolitical tensions. Over two days, stakeholders from 30 national cybersecurity agencies and numerous EU bodies collaborated, developing crisis management skills and coordinating responses to simulated cyber incidents. The exercise, one of Europe’s largest, involved over thousand experts across various domains, facilitated by ENISA, which celebrates its 20th anniversary in 2024.