Levi Strauss & Co reports data breach affecting 72,000 customers

Levi Strauss & Co, the renowned manufacturer of Levi’s denim jeans, recently disclosed a data breach incident in a notification submitted to the Office of the Maine Attorney General. The company revealed that on June 13, it detected an unusual surge in activity on its website, prompting an immediate investigation to understand the nature and extent of the breach.

Following the investigation, Levi’s determined that the incident was a ‘credential stuffing’ attack, a tactic whereby malicious actors leverage compromised account credentials obtained from external breaches to launch automated bot attacks on another platform – in this case, www.levis.com. Importantly, Levi’s clarified that the compromised login credentials did not originate from their systems.
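
Credential stuffing has a recognisable signature: a single source cycling through many distinct usernames paired with passwords leaked elsewhere. The sketch below is a minimal, hypothetical detection heuristic (not a description of Levi’s actual defences): it flags a client that attempts logins against an unusually large number of distinct accounts within a short window. Real defences typically layer this with bot detection, breached-password checks, and multi-factor authentication.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300          # look-back window for login attempts
DISTINCT_ACCOUNT_LIMIT = 20   # distinct usernames per client before flagging

# client_ip -> recent (timestamp, username) login attempts
attempts = defaultdict(deque)

def record_login_attempt(client_ip, username, now=None):
    """Record a login attempt and return True if the client looks like a
    credential-stuffing bot (many distinct accounts tried in a short window)."""
    now = time.time() if now is None else now
    window = attempts[client_ip]
    window.append((now, username))

    # Discard attempts that have aged out of the look-back window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()

    distinct_accounts = {user for _, user in window}
    return len(distinct_accounts) >= DISTINCT_ACCOUNT_LIMIT

# A bot cycling through leaked credentials from one address trips the check,
# while a single user retrying their own password does not.
flagged = False
for i in range(25):
    flagged = record_login_attempt("203.0.113.7", f"user{i}@example.com", now=1000.0 + i)
print("flagged:", flagged)  # flagged: True
```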

The attackers successfully executed the credential stuffing attack, gaining unauthorised access to customer accounts and extracting sensitive personal data. The compromised information included customers’ names, email addresses, saved addresses, order histories, payment details, and partial credit card information encompassing the last four digits of card numbers, card types, and expiration dates.

In the report submitted to the Maine state regulator, Levi’s disclosed that approximately 72,231 individuals were impacted by this security breach. Despite the breach, Levi’s said there was no evidence of fraudulent transactions conducted using the compromised data, as its systems require additional authentication before saved payment methods can be used for purchases.

In response to the breach, Levi Strauss & Co took swift action by deactivating account credentials for all affected user accounts during the relevant timeframe. Additionally, the company enforced a mandatory password reset after detecting suspicious activities on its website, thereby prioritising the security and protection of its customers’ data.

Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI’s ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The focus is on creating a dynamic computing interface that continuously learns from user interactions to provide proactive assistance, a strategy these companies view as essential for the future.

While the potential benefits of AI integration are substantial, there are inherent security risks in the increased reliance on cloud computing and data processing. As AI algorithms demand more computational power, sensitive personal data may need to be transmitted to external servers for analysis. This transfer of data to the cloud introduces vulnerabilities, potentially exposing user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial. Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with a unique data privacy and security approach. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising enhanced functionalities like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to enhance its conversational abilities and provide it with access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure data transmitted to servers. Apple’s commitment to protecting user data differentiates it from other companies that retain data on their servers. However, concerns have been raised about the lack of transparency regarding Siri requests sent to Apple’s servers. Security researcher Matthew Green argued that there are inherent security risks to any data leaving a user’s device for processing in the cloud.

Microsoft has introduced AI-powered features in its new line of Windows computers, Copilot+ PCs, saying that a new chip and other technologies keep the data private and secure. The Recall system enables users to quickly retrieve documents and files by typing casual phrases, with the computer taking screenshots every five seconds and analysing them directly on the PC. While Recall offers enhanced functionality, security researchers caution about potential risks if the data is hacked.

Google has also unveiled a suite of AI services, including a scam detector for phone calls and an ‘Ask Photos’ feature. The scam detector operates on the phone itself, without Google listening to calls, enhancing user security. However, concerns have been raised about the transparency of Google’s approach to AI privacy, particularly regarding the storage and potential use of personal data for improving its services.
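
To make the concern about Recall concrete: the pattern it follows (periodic captures analysed on the device and indexed locally, then retrieved with casual phrases) amounts to a searchable log of on-screen activity stored on the machine. The sketch below is a simplified, hypothetical illustration of that general pattern using a local full-text index; it is not Microsoft’s implementation. It also illustrates the researchers’ worry: anyone who obtains the local database file can run the same queries.

```python
import sqlite3

# Hypothetical local "recall" index: text extracted from periodic screen
# captures goes into an on-device full-text search table (SQLite FTS5),
# so vague phrases can retrieve it later. Everything stays on the machine.
conn = sqlite3.connect("recall_demo.db")
conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(captured_at, content)")

def store_snapshot(captured_at, content):
    """Store text extracted from one periodic screen capture."""
    conn.execute("INSERT INTO snapshots VALUES (?, ?)", (captured_at, content))
    conn.commit()

def search(phrase):
    """Retrieve captures matching a casual phrase, best matches first."""
    return conn.execute(
        "SELECT captured_at, content FROM snapshots WHERE snapshots MATCH ? ORDER BY rank",
        (phrase,),
    ).fetchall()

store_snapshot("2024-06-24 09:15:05", "Quarterly budget spreadsheet open in Excel")
store_snapshot("2024-06-24 09:15:10", "Email draft to supplier about invoice 4821")
print(search("budget spreadsheet"))
```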

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models using vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model depicting World War II soldiers inaccurately. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing in AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.

Experts join Regulating AI’s new advisory board

Regulating AI, a non-profit organisation dedicated to promoting AI governance, has announced the formation of its advisory board. Board members include notable figures such as former US Senator Cory Gardner, former Bolivian President Jorge Quiroga, and former Finnish Prime Minister Esko Aho. The board aims to foster a sustainable AI ecosystem that benefits humanity while addressing potential risks and ethical concerns.

The founder of Regulating AI, Sanjay Puri, expressed his excitement about the diverse expertise and perspectives the new board members bring. He emphasised the importance of their wisdom in navigating the complexities of the rapidly evolving AI landscape and shaping policies that balance innovation with ethical considerations and societal well-being.

One of the organisation’s key initiatives is developing a comprehensive AI governance framework. That includes promoting international cooperation, advocating for diverse voices, and exploring sector-specific AI implications. Former President of Bolivia Jorge Quiroga highlighted the transformational power of AI and the need for effective regulation that considers the unique challenges of developing nations.

Regulating AI aims to build public trust, align international standards, and empower various stakeholders through its board. Former US Senator Gardner underscored the necessity of robust regulatory frameworks to ensure AI is developed and deployed responsibly, protecting consumer privacy, preventing algorithmic bias, and upholding democratic values. The organisation also seeks to educate and raise awareness about AI regulations, fostering discussions among experts and policymakers to advance understanding and implementation.

Privacy concerns behind Apple abandoning Meta partnership, report says

In recent days, the landscape of AI integration on Apple’s devices has become a topic of discussion. Initial reports suggested that a potential partnership could see Apple adopt Meta’s AI services. However, people with knowledge of the matter told Bloomberg this is not the case, explaining that Apple had explored a potential partnership in March of this year before settling on OpenAI for part of the recently announced Apple Intelligence services. Reportedly, the partnership with Meta was abandoned due to Apple’s privacy concerns. Apple has repeatedly criticised Meta’s privacy practices, so a collaboration between the two tech giants could have damaged Apple’s image as a privacy-focused company.

The timing of these discussions coincides with Meta facing privacy concerns over its new AI tools in the European Union. Despite this, Meta recently rolled out these same tools in India.

Earlier this month, Apple unveiled its own suite of AI features under the Apple Intelligence brand, including integration with Siri. Apple partnered with OpenAI to allow iPhone users to utilise ChatGPT for specific queries. The company says Siri will always ask for permission before connecting to ChatGPT and will give users the choice to provide it with data, such as a photo, if needed for a query. “From a privacy point of view, you’re always in control and have total transparency,” said Apple senior vice president Craig Federighi. That stance underpins Apple’s strategy as it sets itself apart in the world of AI integration, balancing innovation with its core principle of user privacy.

Apple is not relying exclusively on a single AI provider, though. At the Worldwide Developers Conference (WWDC), it announced its willingness to work with Google to integrate the Gemini AI model into its ecosystem. The two companies have already partnered on training Apple’s AI. The extent of this integration remains to be seen, but it points to Apple’s strategy of diversifying its AI partnerships.

Oracle warns of significant financial impact from potential US TikTok ban

Oracle has cautioned investors that a potential US ban on TikTok could negatively impact its financial results. A new law signed by President Biden in April could make it illegal for Oracle to provide internet hosting services to TikTok unless its China-based owners meet certain conditions. Oracle warned that losing TikTok as a client could harm its revenue and profits, as TikTok relies on Oracle’s cloud infrastructure for storing and processing US user data.

Analysts consider TikTok one of Oracle’s major clients, contributing significantly to its cloud business revenue. Estimates suggest Oracle earns between $480 million and $800 million annually from TikTok, while its cloud unit generated $6.9 billion in sales last year. The cloud business’s growth, driven by demand for AI work, has boosted Oracle’s shares by 34% this year.
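
As a rough back-of-envelope illustration based on those reported figures (analyst estimates, not Oracle disclosures), TikTok would represent somewhere between roughly 7% and 12% of Oracle’s annual cloud sales:

```python
# Back-of-envelope share of Oracle's cloud revenue attributed to TikTok,
# using the reported estimates ($480M-$800M against $6.9B in cloud sales).
tiktok_low, tiktok_high = 480e6, 800e6
cloud_revenue = 6.9e9

print(f"low estimate:  {tiktok_low / cloud_revenue:.1%}")   # ~7.0%
print(f"high estimate: {tiktok_high / cloud_revenue:.1%}")  # ~11.6%
```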

Why does it matter?

The new law requires TikTok to find a US buyer within 270 days or face a ban, with a possible extension. TikTok, which disputes the security concerns, has sued to overturn the law. The company highlights its collaboration with Oracle, termed ‘Project Texas’, which is aimed at safeguarding US user data from its Chinese parent company, ByteDance. Despite this, Oracle has remained discreet about its relationship with TikTok, not listing it among its key cloud customers and avoiding public discussion of the partnership.

Millions of Americans impacted by debt collector data breach

A massive data breach has hit Financial Business and Consumer Solutions (FBCS), a debt collection agency, affecting millions of Americans. Discovered in February 2024, the breach was initially estimated to have exposed the personal information of around 1.9 million individuals in the US, a figure that rose to roughly 3 million in June. Compromised data includes full names, Social Security numbers, dates of birth, and driver’s license or ID card numbers. FBCS has notified the affected individuals and relevant authorities.

The breach occurred on 14 February but was discovered by FBCS on 26 February. The company notified the public in late April, explaining that the delay was due to their internal investigation rather than any law enforcement directives. The leaked information could include various personal details such as names, addresses, Social Security numbers, and medical records, though not all affected individuals had all types of data exposed.

FBCS has strengthened its security measures in response to the breach and built a new, more secure environment. Additionally, the company is offering those impacted 24 months of free credit monitoring and identity restoration services. FBCS advises everyone affected to be cautious about sharing personal information and to monitor their bank accounts for suspicious activity to protect against potential phishing and identity theft.

Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns have been raised about the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, GeoGPT’s terms of use prohibit generating content that undermines national security or incites subversion, in line with Chinese law, which may influence the chatbot’s outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.

Cybersecurity measures ramp up for 2024 Olympics

Next month, athletes worldwide will converge on Paris for the eagerly awaited 2024 Summer Olympics. While competitors prepare for their chance to win coveted medals, organisers are focused on defending against cybersecurity threats. Over the past decade, cyberattacks have become more sophisticated due to the misuse of AI. However, the responsible application of AI offers a promising countermeasure.

Sports organisations are increasingly partnering with AI-driven companies like Visual Edge IT, which specialises in risk reduction. Although Visual Edge IT does not work directly with the Olympics, cybersecurity expert Peter Avery shared insights on how Olympic organisers can mitigate risks. Avery emphasised the importance of robust technical, physical, and administrative controls to protect against cyber threats. He highlighted the need for a comprehensive incident response plan and the necessity of preparing for potential disruptions, such as internet overload and infrastructure attacks.

The advent of AI has revolutionised both productivity and cybercrime. Avery noted that AI allows cybercriminals to automate attacks, making them more efficient and widespread. He stressed that a solid incident response plan and regular simulation exercises are crucial for managing cyber threats. As Avery pointed out, the question is not if a cyberattack will happen but when.

The International Olympic Committee (IOC) is also embracing AI responsibly within sport. IOC President Thomas Bach announced a plan to use AI to identify talent, personalise training, and improve judging fairness. The Summer Olympics in Paris, which run from 26 July to 11 August, will be a significant test of these cybersecurity and AI initiatives.