US Department of Justice charges Russian hacker in cyberattack plot against Ukraine

The US Department of Justice has charged a Russian national with conspiring to sabotage Ukrainian government computer systems as part of a broader hacking campaign orchestrated by Russia in anticipation of its unlawful invasion of Ukraine.

US prosecutors in Maryland said that Amin Stigal, 22, is accused of helping to set up servers that Russian state-backed hackers used to launch destructive cyberattacks on Ukrainian government ministries in January 2022, a month before the Kremlin’s invasion of Ukraine.

The cyber campaign, dubbed ‘WhisperGate,’ used wiper malware disguised as ransomware to deliberately and irreversibly corrupt data on infected devices. Prosecutors said the attacks were designed to sow fear across Ukrainian civil society about the security of government systems.

The indictment notes that the Russian hackers stole large volumes of data during the intrusions, including citizens’ health records, criminal records, and motor insurance details held in Ukrainian government databases. The hackers then allegedly advertised the stolen data for sale on prominent cybercrime forums.

Stigal is also charged with assisting hackers affiliated with Russia’s military intelligence agency, the GRU, in targeting Ukraine’s allies, including the United States. US prosecutors noted that the Russian hackers repeatedly targeted an unnamed US government agency based in Maryland between 2021 and 2022, before the invasion, giving prosecutors in the district jurisdiction to bring charges against Stigal.

In October 2022, the same servers set up by Stigal were reportedly used by the Russian hackers to target the transportation sector of an unnamed central European country that had provided civilian and military aid to Ukraine after the invasion. The incident coincides with a cyberattack in Denmark during the same period that caused widespread disruption and delays across the country’s railway network.

The US government has announced a $10 million reward for information leading to Stigal’s apprehension; he remains at large and is believed to be in Russia. If convicted, Stigal faces a maximum sentence of five years in prison.

AI protections included in new Hollywood workers’ contracts

The International Alliance of Theatrical Stage Employees (IATSE) has reached a tentative three-year agreement with major Hollywood studios, including Disney and Netflix. The deal promises significant pay hikes and protections against the misuse of AI, addressing key concerns of the workforce.

Under the terms of the agreement, IATSE members, such as lighting technicians and costume designers, will receive pay raises of 7%, 4%, and 3.5% over the three-year period. These increases mark a substantial improvement in compensation for the crew members who are vital to film and television production.

A crucial element of the deal is language stipulating that employees cannot be required to provide AI prompts if doing so could result in job displacement. The provision aims to safeguard jobs against the potential threats AI technologies pose to the industry.

The new agreement comes on the heels of a similar labor deal reached in late 2023 between the SAG-AFTRA actors’ union and the studios. That contract, which ended a nearly six-month production halt, provided substantial pay raises, streaming bonuses, and AI protections, amounting to over $1 billion in benefits over three years.

Why does it matter?

The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.

Levi Strauss & Co reports data breach affecting 72,000 customers

Levi Strauss & Co, the maker of Levi’s denim jeans, recently disclosed a data breach in a notification submitted to the Office of the Maine Attorney General. The company said that on 13 June it detected an unusual surge in activity on its website, prompting an immediate investigation into the nature and extent of the breach.

Following the investigation, Levi’s determined that the incident was a ‘credential stuffing’ attack, in which malicious actors take account credentials compromised in breaches of other services and use automated bots to try them against a different platform – in this case, www.levis.com. Importantly, Levi’s clarified that the compromised login credentials did not originate from its own systems.
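
Credential stuffing leaves a distinctive footprint in login telemetry: a small number of source addresses cycling through many distinct usernames with a near-total failure rate. The Python sketch below is a hypothetical illustration of how such traffic might be flagged – it is not Levi’s actual detection tooling, and the thresholds are arbitrary examples.

from collections import defaultdict

def flag_stuffing_sources(events, min_attempts=20, min_distinct_users=10, min_failure_rate=0.9):
    # events: iterable of (source_ip, username, login_succeeded) tuples
    attempts = defaultdict(int)
    failures = defaultdict(int)
    usernames = defaultdict(set)
    for ip, user, ok in events:
        attempts[ip] += 1
        usernames[ip].add(user)
        if not ok:
            failures[ip] += 1
    # Stuffing looks like one source trying many distinct accounts and almost
    # always failing, unlike a legitimate user mistyping a single password.
    return [ip for ip, total in attempts.items()
            if total >= min_attempts
            and len(usernames[ip]) >= min_distinct_users
            and failures[ip] / total >= min_failure_rate]

# Example: a single bot address replaying leaked credentials against 25 accounts
events = [('203.0.113.7', 'user%d' % i, False) for i in range(25)]
events.append(('198.51.100.2', 'alice', True))
print(flag_stuffing_sources(events))  # prints ['203.0.113.7']

In practice, a signal like this would be combined with rate limiting, device fingerprinting, and checks against known-breached credential lists rather than used on its own.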

The attackers gained unauthorised access to customer accounts and extracted personal data, including customers’ names, email addresses, saved addresses, order histories, and partial payment details: the last four digits of card numbers, card types, and expiration dates.

In the report submitted to the Maine state regulator, Levi’s disclosed that approximately 72,231 individuals were affected by the breach. Levi’s added that there was no evidence of fraudulent transactions made with the compromised data, as its systems require additional authentication before saved payment methods can be used for purchases.

In response to the breach, Levi Strauss & Co acted swiftly, deactivating the credentials of all user accounts affected during the relevant timeframe and enforcing a mandatory password reset after detecting the suspicious activity on its website.

Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI’s ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The focus is on creating a dynamic computing interface that continuously learns from user interactions to provide proactive assistance.

While the potential benefits of AI integration are substantial, the increased reliance on cloud computing and data processing carries inherent security risks. As AI algorithms demand more computational power, sensitive personal data may need to be transmitted to external servers for analysis, and that transfer introduces vulnerabilities, potentially exposing user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial.

Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with a distinct approach to data privacy and security. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising features like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to improve its conversational abilities and give it access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure any data transmitted to its servers. Apple’s commitment to protecting user data differentiates it from companies that retain data on their servers. However, concerns have been raised about the lack of transparency around Siri requests sent to Apple’s servers; security researcher Matthew Green argued that any data leaving a user’s device for processing in the cloud carries inherent security risks.

Microsoft has introduced AI-powered features in its new line of Windows computers, Copilot+ PCs, promising data privacy and security through a new chip and other technologies. The Recall system lets users quickly retrieve documents and files by typing casual phrases, with the computer taking screenshots every five seconds and analysing them directly on the PC. While Recall offers powerful functionality, security researchers caution about the risks if that data were hacked.

Google has also unveiled a suite of AI services, including a scam detector for phone calls and an ‘Ask Photos’ feature. The scam detector runs on the phone itself, without Google listening to calls. However, concerns have been raised about the transparency of Google’s approach to AI privacy, particularly regarding the storage and potential use of personal data to improve its services.

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model depicting World War II soldiers inaccurately. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing in AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.
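
To make the proposal concrete, a single entry in a pilot incident database might look something like the sketch below. The field names are illustrative assumptions, not a schema put forward by CLTR, the OECD, or DSIT; they simply mirror the harm categories and reporting bodies the report mentions.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncidentReport:
    incident_id: str
    reported_on: date
    reporting_body: str       # e.g. a regulator such as the Information Commissioner's Office
    system_description: str   # the AI system involved
    harm_type: str            # physical, economic, reputational, or psychological
    summary: str
    follow_up_actions: list = field(default_factory=list)

# Hypothetical example entry
report = AIIncidentReport(
    incident_id='2024-0001',
    reported_on=date(2024, 6, 27),
    reporting_body="Information Commissioner's Office",
    system_description='Generative image model',
    harm_type='reputational',
    summary='Model produced historically inaccurate depictions of people.',
)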

Ransomware actors encrypt Indonesia’s national data centre

Hackers have encrypted systems at Indonesia’s national data centre with ransomware, causing disruptions in immigration checks at airports and various public services, according to the country’s communications ministry. The ministry reported that the Temporary National Data Centre (PDNS) systems were infected with Brain Cipher, a new variant of the LockBit 3.0 ransomware.

Communications Minister Budi Arie Setiadi said the hackers demanded $8 million for decryption but emphasised that the government would not pay. The attack targeted the Surabaya branch of the national data centre, not the Jakarta location.

The breach risks exposing data from state institutions and local governments. The cyberattack, which began last Thursday, disrupted services such as visa and residence permit processing, passport services, and immigration document management, according to Hinsa Siburian, head of the national cyber agency. The ransomware also impacted online enrollment for schools and universities, prompting an extension of the registration period, as local media reported. Overall, at least 210 local services were disrupted.

Although LockBit ransomware was used, it may have been deployed by a different group, as many actors use the leaked LockBit 3.0 builder, noted SANS Institute instructor Will Thomas. LockBit was a prolific ransomware operation until its extortion site was taken down in February; it resurfaced three months later. Cybersecurity analyst Dominic Alvieri also pointed out that the Indonesian government has not yet been listed on LockBit’s leak site, likely due to the delays typical during negotiations. Indonesia’s data centre has been targeted before: in 2023, ThreatSec claimed to have breached its systems and stolen sensitive data, including criminal records.

EU sanctions six Russian-linked hackers

Six individuals have been added to the EU’s sanctions list, all of them involved in cyberattacks targeting critical infrastructure, state functions, classified information, and emergency response systems in EU member states, according to the official press release. The sanctions mark the first time the EU has imposed measures on cybercriminals who deployed ransomware against essential services such as health and banking.

Among those sanctioned are Ruslan Peretyatko and Andrey Korinets of the ‘Callisto group,’ known for cyber operations against the EU and third countries that used phishing campaigns to steal sensitive data in defence and external relations.

Also targeted are Oleksandr Sklianko and Mykola Chernykh of the ‘Armageddon hacker group,’ allegedly supported by Russia’s Federal Security Service (FSB) and responsible for damaging cyberattacks on EU governments and Ukraine using phishing and malware.

Additionally, Mikhail Tsarev and Maksim Galochkin, involved in deploying the ‘Conti’ and ‘Trickbot’ malware as part of the ‘Wizard Spider’ group, face sanctions. These ransomware campaigns have caused significant economic damage in the EU across sectors including health and banking.

The EU’s horizontal cyber sanctions regime now covers 14 individuals and four entities. It imposes asset freezes and travel bans and prohibits EU persons and entities from making funds available to those listed.

With these new measures, the EU and its member states emphasize their commitment to combating persistent malicious cyber activities. Last June, the European Council agreed that new measures were needed to strengthen its Cyber Diplomacy Toolbox.

Central banks urged to embrace AI

The Bank for International Settlements (BIS) has advised central banks to harness the benefits of AI while cautioning against using it to replace human decision-makers. In its first comprehensive report on AI, the BIS highlighted the technology’s potential to enhance real-time data monitoring and improve inflation predictions – capabilities that have become critical following the unforeseen inflation surges during the COVID-19 pandemic and the Ukraine crisis. While AI models could help mitigate future risks, their unproven and sometimes inaccurate nature makes them unsuitable as autonomous rate setters, emphasised Cecilia Skingsley of the BIS. Human accountability remains crucial for decisions on borrowing costs, she noted.

The BIS, often termed the central bank for central banks, is already engaged in eight AI-focused projects to explore the technology’s potential. Hyun Song Shin, the BIS’s head of research, stressed that AI should not be seen as a ‘magical’ solution but acknowledged its value in detecting financial system vulnerabilities. However, he also warned of the risks associated with AI, such as new cyber threats and the possibility of exacerbating financial crises if mismanaged.

The widespread adoption of AI could significantly impact labour markets, productivity, and economic growth, with firms potentially adjusting prices more swiftly in response to economic changes, thereby influencing inflation. The BIS has called for the creation of a collaborative community of central banks to share experiences, best practices, and data to navigate the complexities and opportunities presented by AI. That collaboration aims to ensure AI’s integration into financial systems is both effective and secure, promoting resilient and responsive economic governance.

In conclusion, the BIS’s advisory underscores the importance of balancing AI’s promising capabilities with the necessity for human intervention in central banking operations. By fostering an environment for shared knowledge and collaboration among central banks, the BIS seeks to maximise AI benefits while mitigating inherent risks, thereby supporting more robust economic management in the face of technological advancements.

Oracle warns of significant financial impact from potential US TikTok ban

Oracle has cautioned investors that a potential US ban on TikTok could negatively impact its financial results. A new law signed by President Biden in April could make it illegal for Oracle to provide internet hosting services to TikTok unless its China-based owners meet certain conditions. Oracle warned that losing TikTok as a client could harm its revenue and profits, as TikTok relies on Oracle’s cloud infrastructure for storing and processing US user data.

Analysts consider TikTok one of Oracle’s major clients, contributing significantly to its cloud business revenue. Estimates suggest Oracle earns between $480 million and $800 million annually from TikTok, while its cloud unit generated $6.9 billion in sales last year. The cloud business’s growth, driven by demand for AI workloads, has boosted Oracle’s shares by 34% this year.

Why does it matter?

The new law requires TikTok to find a US buyer within 270 days or face a ban, with a possible extension. TikTok, which disputes the security concerns, has sued to overturn the law. The company highlights its collaboration with Oracle, termed ‘Project Texas,’ aimed at safeguarding US user data from its Chinese parent company, ByteDance. Despite this, Oracle has remained discreet about the relationship, neither listing TikTok among its key cloud customers nor discussing it publicly.

Millions of Americans impacted by debt collector data breach

A massive data breach has hit Financial Business and Consumer Solutions (FBCS), a debt collection agency, affecting millions of Americans. Initially reported in February 2024, the breach was first found to have exposed the personal information of around 1.9 million individuals in the US, a figure that rose to 3 million by June. Compromised data includes full names, Social Security numbers, dates of birth, and driver’s license or ID card numbers. FBCS has notified the affected individuals and the relevant authorities.

The breach occurred on 14 February but was not discovered by FBCS until 26 February. The company notified the public in late April, explaining that the delay stemmed from its internal investigation rather than any law enforcement directive. The exposed information varied by individual and could include personal details such as names, addresses, Social Security numbers, and medical records; not every affected individual had all types of data exposed.

FBCS has strengthened its security measures in response to the breach and built a new, secured environment. It is also offering those affected 24 months of free credit monitoring and identity restoration services, and advises them to be cautious about sharing personal information and to monitor their bank accounts for suspicious activity to guard against phishing and identity theft.