Medibank faces legal action in Australia over massive data breach

Following Medibank’s 2022 cyber incident, the Office of the Australian Information Commissioner has initiated legal proceedings against the company, alleging that the breach affected 5.1 million Medibank customers, 2.8 million ahm customers, and 1.8 million international customers, a total of 9.7 million individuals.

While Medibank initially blamed a third-party contractor and a ‘misconfigured firewall’ for the incident, a federal court case in Australia has revealed that the breach originated with an IT service desk operator at Medibank who stored multiple account credentials on his work computer, providing a gateway for a hacker to illicitly access Medibank’s systems. The hacker exploited this access for nearly two months, extracting an estimated 520GB of personal data.

The breach was aggravated by the absence of multi-factor authentication on Medibank’s GlobalProtect VPN, a security gap that had been flagged in reports by KPMG and Datacom in 2020 and 2021. The Office of the Australian Information Commissioner has criticised Medibank for failing to promptly address these known vulnerabilities. Moreover, the government has identified the alleged perpetrator as a Russian citizen named Aleksandr Gennadievich Ermakov and will impose sanctions against him under the new autonomous sanctions law. The incident underscores the critical importance of proactive risk mitigation strategies to safeguard sensitive customer information from malicious cyber threats.

Report uncovers hackers now use emojis to command malware

Researchers from the cybersecurity firm Volexity have uncovered a sophisticated cyber threat that uses the popular Discord messaging service for command and control (C2) purposes. The threat was discovered during a targeted cyberattack on the Indian government this year, in which malware named Disgomoji was deployed. The attack was attributed to a suspected Pakistan-based threat actor tracked as UTA0137. The group issues C2 commands via emojis on the Discord platform, a novel covert approach to conducting espionage campaigns against Indian government entities.

The Disgomoji malware, tailored to target Linux systems, specifically the custom BOSS distribution used by the Indian government, is highly sophisticated in its design and execution. Initial access to the targeted systems is believed to have been gained through phishing attacks using decoy documents as bait. Once inside, the malware established dedicated channels within Discord servers, with each channel representing an individual victim. This setup allowed the threat actor to interact with each victim separately, enhancing the precision and effectiveness of the attack.

Upon activation, Disgomoji initiated a check-in process, transmitting crucial system information such as the IP address, username, hostname, operating system details, and current working directory to the attacker. The malware exhibited persistence mechanisms that ensured its survival through system reboots, allowing it to maintain a covert presence on compromised systems. Communication between the attacker and the malware was conducted through an emoji-based protocol, with commands issued and acknowledged via emojis. For instance, while executing a command, Disgomoji replies with a ‘⏰’ emoji, and posts a ‘✅’ once the command has completed.
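
For illustration only, the emoji acknowledgement convention described above can be sketched in a few lines of Python. Only the two emoji meanings (⏰ while a command runs, ✅ on completion) come from the report; the function and variable names here are hypothetical, and this is not Volexity’s analysis code or the malware itself:

```python
# Toy sketch of the emoji acknowledgement convention described above.
# Only the two emoji meanings come from the report; all names are hypothetical.
from typing import Optional

STATUS_EMOJI = {
    "running": "⏰",  # posted while a command is still executing
    "done": "✅",     # posted once the command has completed
}

# Reverse lookup: emoji reply -> the status it encodes.
EMOJI_STATUS = {emoji: status for status, emoji in STATUS_EMOJI.items()}

def decode_ack(message: str) -> Optional[str]:
    """Return the status an emoji reply encodes, or None if it is not an ack."""
    return EMOJI_STATUS.get(message.strip())
```

In the real campaign these replies were posted as messages in per-victim Discord channels; the sketch captures only the mapping, not any Discord interaction.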

Why does it matter?

The malware’s capabilities extended beyond basic communication to advanced functionalities such as network scanning with tools like Nmap, network tunnelling through Chisel and Ligolo, and data exfiltration via file-sharing services. Disgomoji also employed deceptive tactics, masquerading as a Firefox update to trick victims into sharing sensitive information such as passwords.

Volexity’s attribution to a Pakistan-based threat actor was supported by various indicators, including Pakistani time zones in the malware samples, infrastructure links to known threat actors in Pakistan, the use of the Punjabi language, and a selection of targets aligned with Pakistan’s strategic interests. The detailed analysis underscores the evolving sophistication of cyber threats and the critical importance of robust cybersecurity measures to safeguard against such malicious activities.

Headless flamingo photo that won AI award found to be real

A seemingly AI-generated photo of a headless flamingo has ignited a heated debate over the ethical implications of AI in art and technology. The image, which was honoured in the AI category of the 1839 Awards’ Color Photography Contest, has drawn criticism and concern from artists, technologists, and ethicists alike.

The photo, titled ‘F L A M I N G O N E,’ depicts a flamingo that appears to have no head. Contrary to initial impressions, it was not produced by an image-generation model at all: it is a real (and not at all beheaded) flamingo that photographer Miles Astray captured on the beaches of Aruba two years ago and entered into the AI category. After the photo won both third place in the category and the People’s Vote award, Astray revealed the truth, leading to his disqualification.

Proponents of AI-generated art assert that such creations push the boundaries of artistic expression, offering new and innovative ways to explore and challenge traditional concepts of art. They argue that the AI’s ability to produce unconventional and provocative images can be seen as a form of artistic evolution, allowing for greater diversity and creativity in the art world. However, detractors highlight the potential risks and ethical dilemmas posed by such technology. The headless flamingo photo, in particular, has been described as unsettling and inappropriate, sparking a broader conversation about the limits of AI-generated content. Concerns have been raised about the potential for AI to produce harmful or distressing images, and the need for guidelines and oversight to ensure responsible use.

The release of the headless flamingo photo has prompted a range of responses from the art and tech communities. Some artists view the image as a provocative statement on the nature of AI and its role in society, while others see it as a troubling example of the technology’s potential to create disturbing content. Tech experts emphasise the importance of developing ethical frameworks and guidelines for AI-generated art. They argue that while AI has the potential to revolutionise creative fields, it is crucial to establish clear boundaries and standards to prevent misuse and ensure that the technology is used responsibly.

‘‘F L A M I N G O N E’ accomplished its mission by sending a poignant message to a world grappling with ever-advancing, powerful technology and the profusion of fake images it brings. My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that. But, on the other hand, AI imagery has advanced to a point where it’s indistinguishable from real photography. So where does that leave us? What are the implications and the pitfalls of that? I think that is a very important conversation that we need to be having right now,’ Miles Astray told The Washington Post.

Why does it matter?

The controversy surrounding the AI-generated headless flamingo photo highlights the broader ethical challenges posed by artificial intelligence in creative fields. As AI technology continues to advance, it is increasingly capable of producing highly realistic and complex images. That raises important questions about the role of AI in art, the responsibilities of creators and developers, and the need for ethical guidelines to navigate these new frontiers.

FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has identified Royal Tiger as the first official AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. Royal Tiger has used advanced techniques like AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Tips include treating unsolicited calls with scepticism, using call-blocking services, and verifying caller identities by contacting official numbers directly. Avoiding sharing personal information over the phone without confirming a caller’s legitimacy is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.

X bans over 230,000 accounts in India for violations

Between April 26 and May 25, Elon Musk’s X Corp banned 229,925 accounts in India, primarily for promoting child sexual exploitation and non-consensual nudity. Additionally, 967 accounts were removed for promoting terrorism, bringing the total to 230,892 banned accounts during this period. In compliance with the new IT Rules, 2021, X Corp’s monthly report noted receiving 17,580 user complaints in India. The company processed 76 grievances appealing account suspensions but upheld all suspensions after review.

The report also mentioned 31 general account-related inquiries. Most user complaints involved ban evasion (6,881), hateful conduct (3,763), sensitive adult content (3,205), and abuse/harassment (2,815). Previously, between March 26 and April 25, X banned 184,241 accounts in India and removed 1,303 for promoting terrorism.

Why does it matter?

India, with nearly 700 million internet users, has introduced new regulations for social media, streaming services, and digital news outlets. These rules mandate firms to enable traceability of encrypted messages, establish local offices with senior officials, comply with takedown requests within 24 hours, resolve grievances within 15 days, and publish a monthly compliance report detailing received requests and actions taken.

Meta halts AI launch in Europe after EU regulator ruling

Meta’s lead EU regulator, the Irish Data Protection Commission (DPC), requested that the company delay training its large language models (LLMs) on content published publicly by adults on its platforms. In response, Meta announced it would not launch its AI in Europe for the time being.

The main reason behind the request is Meta’s plan to use this data to train its AI models without explicitly seeking consent. The company claims it must do so, or else its AI ‘won’t accurately understand important regional languages, cultures or trending topics on social media’; it is already developing continent-specific AI technology. Another cause for concern is Meta’s use of information belonging to people who do not use its services: in a message to its Facebook users, Meta said it may process information about non-users if they appear in an image or are mentioned on its platforms.

The DPC welcomed Meta’s decision to delay implementation. The commission is leading the regulation of Meta’s AI tools on behalf of EU data protection authorities (DPAs), 11 of which received complaints from the advocacy group NOYB (None Of Your Business). NOYB argues that the GDPR is flexible enough to accommodate such AI, as long as Meta asks for users’ consent. The delay comes just before Meta’s new privacy policy comes into force on 26 June.

Beyond the EU, the executive director of the UK’s Information Commissioner’s Office was pleased with the delay, and added that ‘in order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.’

G7 summit underscores ethical AI, digital inclusion, and global solidarity

The G7 leaders met with counterparts from several countries, including Algeria, Argentina, Brazil, and India, along with heads of major international organisations such as the African Development Bank and the UN, to address global challenges impacting the Global South. They emphasised the need for a unified and equitable international response to these issues, underscoring solidarity and shared responsibility to ensure inclusive solutions.

Pope Francis made an unprecedented appearance at the summit, contributing valuable insights on AI. The leaders discussed AI’s potential to enhance industrial productivity while cautioning against its possible negative impacts on the labour market and society. They stressed the importance of developing AI that is ethical, transparent, and respects human rights, advocating for AI to improve services while protecting workers.

The leaders highlighted the necessity of bridging digital divides and promoting digital inclusion, supporting Italy’s proposal for an AI Hub for Sustainable Development. The hub aims to strengthen local AI ecosystems and advance AI’s role in sustainable development.

They also emphasised the importance of education, lifelong learning, and international mobility to equip workers with the necessary skills to work with AI. Finally, the leaders committed to fostering cooperation with developing and emerging economies to close digital gaps, including the gender digital divide, and achieve broader digital inclusion.

Austrian advocacy group NOYB accuses Google of tracking users 

Alphabet’s Google has been hit with a complaint by the Austrian advocacy group NOYB (None Of Your Business) over alleged browser tracking. The complaint, filed with the Austrian data protection authority, claims that Google’s ‘Privacy Sandbox’ feature, which is designed to protect user privacy by blocking covert tracking techniques and limiting data sharing with third parties, actually allows Google to track users within the browser without their informed consent.

NOYB argues that the feature, which is advertised as an improvement over third-party tracking cookies, is misleading and does not meet the requirements for free consent under the EU’s General Data Protection Regulation (GDPR). The group claims that users are tricked into accepting Google’s first-party ad tracking by being presented with a pop-up that says ‘turn on ad privacy feature,’ which they believe would protect their personal data. However, this feature actually enables Google to track users’ online behaviour and generate a list of advertising topics based on their browsing history.

NOYB has asked the Austrian data protection authority to ensure Google’s GDPR compliance, halt data processing based on invalid consent, and inform data recipients to stop using this data. They also seek a substantial fine to deter future violations, emphasising the importance of GDPR adherence. Google defends its Privacy Sandbox APIs, highlighting significant privacy enhancements over third-party cookies. The company states it is working closely with global regulators on a balanced solution beneficial to users and the ecosystem.

IOC implements AI for athlete safety at Paris Olympics

The International Olympic Committee (IOC) will deploy AI to combat social media abuse directed at 15,000 athletes and officials during the Paris Olympics next month, IOC President Thomas Bach announced on Friday. With the Games set to begin on 26 July, more than 10,500 athletes will compete across 32 sports, and the event is expected to generate over half a billion social media engagements.

The AI system aims to safeguard athletes by monitoring social media and automatically erasing abusive posts, providing extensive protection against cyber abuse. The initiative comes amid ongoing global conflicts, including the wars in Ukraine and Gaza, which have already led to cases of social media abuse.

Russian and Belarusian athletes, who will compete as neutral athletes without their national flags, are included in the protective measures. The IOC did not specify the level of access athletes would need to grant for the AI monitoring.

Despite recent political developments in France, including a snap parliamentary election called by President Emmanuel Macron, Bach assured that preparations for the Olympics remain on track. He emphasised that both the government and opposition are determined to ensure that France presents itself well during the Games.

Clearview AI reaches unusual settlement in privacy lawsuit

Facial recognition company Clearview AI has reached a groundbreaking class action settlement to address allegations of violating the privacy rights of millions of Americans. Filed in Chicago federal court on Wednesday, the agreement is notably unconventional as it does not specify a monetary payout upfront. Instead, it ties compensation to Clearview AI’s future financial outcomes, such as its potential IPO or merger valuation.

The lawsuit, rooted in Clearview AI’s alleged scraping of billions of facial images from the internet without consent, invoked Illinois’ biometric privacy law. Although Clearview denies any wrongdoing, the proposed settlement now awaits approval from US District Judge Sharon Johnson Coleman.

In a related development earlier this year, Clearview AI agreed with the ACLU to restrict access to its facial recognition database for private entities and government agencies in Illinois for five years. The plaintiffs’ attorneys acknowledged that this prior agreement influenced their approach to the class action settlement, adopting a structure that allows class members to share in potential future profits of Clearview AI.

The novel settlement approach, spearheaded by Loevy & Loevy, aims to provide meaningful relief to affected individuals while navigating Clearview AI’s financial constraints. Attorney Jon Loevy highlighted that this solution allows class members to reclaim some ownership over their biometric data, reflecting a unique attempt to compensate for privacy violations in the digital age.