EU asks Amazon for DSA compliance details

The European Commission has requested that Amazon provide detailed information regarding its measures to comply with the Digital Services Act (DSA) obligations. Specifically, the Commission is interested in the transparency of Amazon’s recommender systems. Amazon has been given a deadline of 26 July to respond.

The DSA mandates that major tech companies, like Amazon, take more responsibility in addressing illegal and harmful content on their platforms. The regulatory push aims to create a safer and more predictable online environment for users. Amazon stated that it is currently reviewing the EU’s request and plans to work closely with the European Commission.

A spokesperson for Amazon expressed support for the Commission’s objectives, emphasising the company’s commitment to a safe and trustworthy shopping experience. Amazon highlighted its significant investments in protecting its platform from bad actors and illegal content and noted that these efforts align with DSA compliance.

How AI is reshaping US intelligence operations

The US intelligence community is fully embracing generative AI, marking a significant shift towards transparency in its adoption of cutting-edge technology. Leaders within agencies like the CIA are openly discussing how generative AI enhances intelligence operations, from aiding in content triage and search capabilities to supporting analysts in generating counterarguments and ideation.

Lakshmi Raman, the CIA’s director of Artificial Intelligence Innovation, highlighted the transformative impact of generative AI during a recent address at the Amazon Web Services Summit in Washington, D.C. She noted its critical role in processing vast amounts of data to extract actionable insights, crucial for keeping pace with global developments and informing policymakers amidst a constant influx of news.

Despite its potential benefits, the deployment of generative AI within the intelligence community is not without its challenges and risks. Concerns over accuracy and security persist, as erroneous outputs—termed ‘hallucinations’—could have severe consequences in national security contexts. Adele Merritt, Intelligence Community Chief Information Officer, stressed the need for cautious adoption, ensuring that AI technologies adhere to strict privacy and security standards.

In response to these challenges, major tech companies like Microsoft and AWS are adapting their cloud services to cater to classified government needs, offering secure environments for deploying generative AI tools. AWS, for instance, launched a significant initiative to support government agencies with training and technical support for generative AI, underscoring its commitment to enhancing national security capabilities through innovative technology solutions.

Ultimately, this concerted effort by both intelligence agencies and tech providers aims to harness the full potential of generative AI while mitigating associated risks, shaping the future of intelligence operations in an increasingly data-driven world.

Decade-old CocoaPods vulnerabilities patched, addressing supply chain risks to Apple devices

Researchers at cybersecurity firm EVA Information Security have uncovered three major vulnerabilities in CocoaPods, a widely used dependency manager for iOS and macOS app development. These vulnerabilities, which went unnoticed for nearly a decade, posed significant risks: because CocoaPods is commonly used to integrate pre-written code packages into iOS and macOS apps, attackers could have injected malicious code into any app relying on a compromised package.

The vulnerabilities stem from a migration process in May 2014, which left thousands of CocoaPods packages ‘orphaned’ and potentially vulnerable. According to EVA researchers, CocoaPods is extensively used by iOS developers, including major companies like Google, GitHub, Amazon, Dropbox, and others, making the impact widespread across various projects and dependencies.

One of the most critical vulnerabilities, identified as CVE-2024-38368, could have been exploited by malicious actors to inject malware into apps using compromised packages, effectively bypassing security measures and compromising user data.

EVA responsibly disclosed the vulnerabilities to CocoaPods, which patched them in October 2023, before the researchers publicly disclosed their findings. As of now, there are no known instances of these vulnerabilities being exploited by malicious actors. The prompt response from CocoaPods mitigated potential risks to app developers and users relying on the platform for their software development needs.
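
Supply-chain attacks of this kind succeed when a package's contents change without the consumer noticing. One generic mitigation, sketched below in Python purely for illustration (this is not CocoaPods' actual mechanism), is to pin a checksum for each vetted dependency and verify it on every fetch:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    # Compare the artifact's digest against a pinned, known-good value
    # recorded when the dependency was first vetted. Any modification
    # of the package contents changes the digest and fails the check.
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical package payload and its pinned digest (illustration only).
payload = b"example package contents"
pinned = hashlib.sha256(payload).hexdigest()

print(verify_artifact(payload, pinned))         # True: untampered
print(verify_artifact(payload + b"!", pinned))  # False: modified in transit
```

Lockfiles in modern package managers apply the same idea automatically, so a hijacked or 'orphaned' package cannot silently replace the code a project was built against.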

RockYou2024 password leak exposes nearly 10 billion unique passwords

RockYou2024, the largest known password compilation, containing nearly ten billion unique passwords, was leaked on a popular hacking forum, posing significant risks for users prone to reusing passwords. Discovered by Cybernews researchers, the file contains 9,948,575,739 plaintext passwords and was posted by a user named ObamaCare. The leak is believed to combine data from various old and new breaches, dramatically increasing the threat of credential-stuffing attacks.

Credential stuffing attacks exploit leaked passwords to gain unauthorised access to accounts, affecting users and businesses. The RockYou2024 leak significantly heightens this risk, as previous attacks on companies like Santander and Ticketmaster demonstrated. Cybernews highlighted the need for robust security measures, such as resetting compromised passwords, using strong, unique passwords, and enabling multi-factor authentication (MFA).

The RockYou2024 leak follows the 2021 release of a similar but smaller compilation, RockYou2021, which contained 8.4 billion passwords. The new dataset has grown by 15 percent, incorporating an additional 1.5 billion passwords. The compilation is believed to include information from over 4,000 databases collected over more than two decades, making it a potent tool for cybercriminals.

To protect against potential breaches, Cybernews advises users to reset exposed passwords, use MFA, and utilise password managers. The company will also integrate RockYou2024 data into its Leaked Password Checker, allowing individuals to verify if their credentials have been compromised. The leak follows another significant breach, the Mother of All Breaches (MOAB), which involved 12 terabytes of data and 26 billion records earlier this year.
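
Leaked-password checkers typically compare a hash of the candidate password against a pre-hashed breach corpus, so the password itself is never transmitted or stored in the clear. A minimal local sketch in Python (the corpus and passwords here are hypothetical; this is not Cybernews' actual implementation):

```python
import hashlib

def hash_password(password: str) -> str:
    # Checkers compare hashes rather than raw strings, so the
    # password never leaves the local machine in plaintext.
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

def is_compromised(password: str, leaked_hashes: set) -> bool:
    # 'leaked_hashes' stands in for a breach corpus such as RockYou2024,
    # hashed once up front and loaded into a set for O(1) lookups.
    return hash_password(password) in leaked_hashes

# Hypothetical mini-corpus for illustration only.
corpus = {hash_password(p) for p in ["123456", "password", "qwerty"]}

print(is_compromised("123456", corpus))                       # True
print(is_compromised("correct horse battery staple x9!", corpus))  # False
```

Real services add a further safeguard: the client sends only a short hash prefix and scans the matching candidates itself, so even the full hash is never revealed to the server.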

OpenAI encrypts ChatGPT macOS chats after security flaw

OpenAI’s ChatGPT macOS app was found to be storing user chats in plain text until recently, raising security concerns. The Verge reported that the AI firm has now released an update to encrypt conversations on macOS. The discovery was made by software developer Pedro Vieito, who noted that OpenAI was distributing the app exclusively through its website and bypassing Apple’s sandbox protections.

Sandboxing, which isolates an app and its data from the rest of the system, is optional on macOS, but is commonly used by chat applications to protect sensitive information. By not adhering to this security measure, the ChatGPT app exposed user chats to potential threats. Vieito highlighted the vulnerability on social media, showing how easily another app could access the unprotected data.

OpenAI acknowledged the issue and emphasised that users could opt out of having their chats used to train the AI models. The ChatGPT app, which was made available to macOS users on June 25, now includes encryption to enhance user privacy and security.

Yamaman launches facial recognition for light rail and buses

Japanese light rail and bus operator Yamaman Co has introduced facial recognition technology to its Jorudan Style Point&Pass ticketing system on the Yukarigaoka Line and local bus services. Passengers can now use the Eucalyptus Pass system by registering online with a photo and credit card details. At the stations, facial recognition cameras identify users, open barriers, and automatically charge their credit cards for the flat fare of ¥200 or a day ticket for ¥500.

Previously, passengers used magnetic tickets, but these machines are being updated to issue paper tickets with QR codes for occasional and non-registered travellers. The new technology builds on a successful 2021 pilot scheme on bus services, and suppliers J MaaS and Panasonic Connect aim to expand the system across Japan.

The implementation, costing around ¥60 million, was partially funded by a government subsidy and is expected to reduce ticketing costs by 30%. The koala theme of the transport services reflects the local presence of eucalyptus trees.

Phishing attack compromises Formula 1 governing body email accounts

The Fédération Internationale de l’Automobile (FIA), the governing body of auto racing since the 1950s, revealed that attackers managed to access personal data by compromising several FIA email accounts through a phishing attack. Established in 1904 as the Association Internationale des Automobile Clubs Reconnus (AIACR), the FIA is a non-profit international association that oversees various auto racing championships, including Formula 1 and the World Rally Championship (WRC). With 242 member organisations spanning 147 countries across five continents, the FIA also governs the FIA Foundation, which supports and finances road safety research.

In response to the breach, the organisation took swift corrective action, blocking the unauthorised access as soon as the incidents were discovered. The FIA informed the Swiss data protection regulator (Préposé Fédéral à la Protection des Données et à la Transparence) and the French data protection regulator (Commission Nationale de l’Informatique et des Libertés) about the security breach.

To prevent similar incidents in the future, the FIA implemented enhanced security measures and expressed regret for any concerns raised among the affected individuals. Emphasising its commitment to data protection and information security, the FIA continuously evaluates and strengthens its systems to combat evolving cyber threats. However, details such as the breach detection timeline, the extent of personal information accessed, and the nature of the exposed or stolen sensitive data remain undisclosed by the organisation.

Australia moves top secret data to Amazon cloud

Australia is set to transfer its top-secret intelligence data to the cloud under a $2 billion agreement with Amazon Web Services to enhance defence interoperability with the United States. Defence Minister Richard Marles emphasised that the move to distributed, purpose-built facilities would bolster the resilience of data crucial for the defence force, ensuring continued operation even if individual servers fail.

The Director General of the Australian Signals Directorate, Rachel Noble, highlighted that the shift will also incorporate increased use of AI to analyse data. Noble stressed the importance of using AI ethically and with careful governance to understand its impact on data and its applications within the intelligence community.

Marles noted the significance of maintaining a common computing environment with US defence forces, especially as modern warfare increasingly relies on top-secret data, such as that used by F-35A joint strike fighter aircraft. He explained that data from sensors feeding into these platforms is vital for targeting, defence, and protection of other assets.

Australian Prime Minister Anthony Albanese announced that the partnership with Amazon Web Services would enhance national security capabilities and create 2,000 local jobs. Director-General of National Intelligence Andrew Shearer reiterated that interoperability with security partners like the United States remains a top priority.

Google warns of generative AI dangers

A recent research paper from Google reveals that generative AI already distorts socio-political reality and scientific consensus. The paper, titled ‘Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,’ was co-authored by researchers from Google DeepMind, Jigsaw, and Google.org.

It categorises various ways generative AI tools are misused, analysing around 200 incidents reported in the media and academic papers between January 2023 and March 2024. Unlike warnings about hypothetical future risks, this research focuses on the real harm generative AI is currently causing, such as flooding the internet with generated text, audio, images, and videos.

The researchers found that most AI misuse involves exploiting system capabilities rather than attacking the models themselves. However, this misuse blurs the lines between authentic and deceptive content, undermining public trust. AI-generated content is being used for impersonation, creating non-consensual intimate images, and amplifying harmful content. These activities often do not even violate the terms of service of AI tools, highlighting a significant challenge in regulating AI misuse.

Google’s research also emphasises the environmental impact of generative AI. The increasing integration of AI into various products drives energy consumption, making it difficult to reduce emissions. Despite efforts to improve data centre efficiency, the overall rise in AI use has outpaced these gains. The paper calls for a multi-faceted approach to mitigate AI misuse, involving collaboration between policymakers, researchers, industry leaders, and civil society.

Brazil halts Meta’s new privacy policy for AI training, citing serious privacy risks

Brazil’s National Data Protection Authority (ANPD) has taken immediate action to halt the implementation of Meta’s new privacy policy concerning the use of personal data to train generative AI systems within the country.

The ANPD’s precautionary measure, announced in Brazil’s official gazette, suspends the processing of personal data across all Meta products, extending to individuals who are not users of the tech company’s platforms. The regulatory body, operating under Brazil’s Justice Ministry, has imposed a daily fine of 50,000 reais ($8,836.58) for any directive violations.

The decision by the ANPD was motivated by the perceived ‘imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of affected individuals.’ As a result, Meta is mandated to revise its privacy policy to eliminate the segment related to the processing of personal data for generative AI training. Additionally, Meta must issue an official statement confirming the suspension of personal data processing for this purpose.

In response to the ANPD’s ruling, Meta expressed disappointment, characterising the move as a setback for innovation and predicting a delay in delivering AI benefits to the Brazilian population. Meta defended its practices by arguing that it is more transparent than other industry players that have used public content to train their models and products. The company asserted that its approach aligns with Brazil’s privacy laws and regulations.