Oracle agrees to pay $115 million to settle privacy invasion lawsuit

Oracle has agreed to pay $115 million to settle a lawsuit alleging it invaded consumers’ privacy by collecting and selling their personal information. The preliminary settlement, filed in a San Francisco federal court, requires a judge’s approval. Despite agreeing to the settlement, Oracle denied any wrongdoing.

The plaintiffs accused Oracle of violating federal and state privacy laws by creating unauthorised ‘digital dossiers’ on millions of people. These dossiers reportedly included data on browsing behaviour, banking activities, and shopping preferences. The plaintiffs claimed Oracle sold this information to marketers, sometimes using products like ID Graph for targeted advertising.

The settlement applies to individuals whose data was collected or sold since August 2018. As part of the agreement, Oracle will cease gathering information from URLs of previously visited websites or from online forms not on Oracle’s own sites. The company, based in Austin, Texas, did not respond to requests for comment.

Privacy rights activist Michael Katz-Lacabe and social media and privacy professor Jennifer Golbeck are among the named plaintiffs. Their legal team, Lieff Cabraser Heimann & Bernstein, may seek up to $28.75 million from the settlement for legal fees.

MUFG penalises CEO and executives with pay cuts

Japan’s largest banking group, Mitsubishi UFJ Financial Group (MUFG), will cut the pay of its CEO and five other executives over breaches of ‘firewall’ regulations at its banking and securities arms. The Financial Services Agency (FSA) ordered MUFG to submit business improvement plans after discovering the breaches, marking a significant regulatory action.

Group CEO Hironori Kamezawa, who earned 339 million yen ($2.16 million) in the last fiscal year, and five other executives will have their monthly salaries reduced by 30% for two to five months. Additionally, three former directors at MUFG’s banking unit and one at its securities arm are required to return 10% to 30% of three months’ salary.

The FSA found at least 26 cases where confidential client information was shared between MUFG Bank and its two securities partnerships with Morgan Stanley from 2020 to 2023. MUFG Bank also offered preferential lending rates to clients doing business with these brokerages, violating regulations that prohibit sharing customer data without consent.

In response, MUFG has submitted a business improvement plan to the FSA. The company is taking measures to address the regulatory breaches and ensure compliance with financial regulations in the future.

Meta suspends AI use in Brazil amid privacy concerns

Meta has suspended the use of its generative AI (GenAI) tools in Brazil after the country’s data protection authority issued a preliminary ban on its new privacy policy. The suspension follows a decision by Brazil’s National Data Protection Authority (ANPD) to halt Meta’s policy, citing risks to users’ fundamental data rights.

ANPD’s decision arose from concerns over Meta’s use of personal data to train its AI systems without users’ explicit consent. The agency warned of ‘serious and irreparable damage’ to the rights of data subjects and imposed a daily fine of 50,000 reais for non-compliance. Meta expressed disappointment, stating that the decision is a setback for innovation and AI development in Brazil.

The controversy in Brazil reflects broader global challenges for tech companies navigating stringent data privacy laws. In regions like the European Union, similar regulatory hurdles have forced Meta and other tech giants to pause their AI tool rollouts. Human Rights Watch highlighted risks associated with personal data in AI training, noting how personal photos, including those of Brazilian children, have been misused in image datasets, raising significant privacy and ethical concerns.

Meta’s response aligns with its recent actions in Europe, where it withheld its AI models due to regulatory uncertainties. This situation underscores the tension between advancing AI technologies and adhering to evolving data protection regulations.

OpenAI whistleblowers call for SEC investigation

Whistleblowers have filed a complaint with the US Securities and Exchange Commission (SEC) against OpenAI, calling for an investigation into the company’s allegedly restrictive non-disclosure agreements (NDAs). The complaint alleges that OpenAI’s NDAs required employees to waive their federal rights to whistleblower compensation, creating a chilling effect on their right to speak up.

Senator Chuck Grassley’s office provided the letter to Reuters, stating that OpenAI’s policies appear to prevent whistleblowers from receiving due compensation for their protected disclosures. The whistleblowers have requested that the SEC fine OpenAI for each improper agreement and review all contracts containing NDAs, including employment, severance, and investor agreements. OpenAI did not immediately respond to requests for comment.

This complaint follows other legal and regulatory challenges faced by OpenAI. The company has been sued for allegedly stealing people’s data, and US authorities have called for companies to ensure their AI products do not violate civil rights. OpenAI recently formed a Safety and Security Committee to address safety concerns as it begins training its next AI model.

Gemini AI caught accessing private Google Drive documents

Google’s Gemini AI has been discovered scanning PDF files on Google Drive without user consent, sparking concerns over AI safety and privacy. Kevin Bankston, a senior advisor on AI governance at the Center for Democracy & Technology, revealed that the AI generated a summary of his private tax return without permission.

Bankston shared his struggles to disable the feature, which continued to operate despite attempts to find the correct controls. The difficulty in managing Gemini’s integration in Google Drive has led to questions about Google’s handling of user data and privacy settings.

Google previously assured users that Workspace data would not be used to train AI or target ads. However, this incident has raised doubts about data hygiene and privacy.

Bankston’s experience suggests that prior participation in Google Workspace Labs might have influenced Gemini’s behaviour, highlighting the need for better user control and consent as AI technology advances.

OpenAI’s project Strawberry: Transformative AI sparks ethical debate

According to a Reuters report, OpenAI’s relatively new project, Strawberry, is set to make waves in the research industry. The project, which some claim could be a renamed version of the company’s Q* project from last year, is said to be capable of navigating the internet autonomously to conduct deep research.

A company representative confirmed to the news agency that the reasoning ability of its models will improve with time. Just last Tuesday, OpenAI employees were shown a demo of a model with human-like reasoning capabilities. The meeting came on the heels of criticism the company has faced for allegedly gagging employees from publicly raising the dangers its innovations could pose to humanity.

Earlier in July, employees sent a seven-page letter to the chair of the US Securities and Exchange Commission (SEC), Gary Gensler, detailing what they see as the risks OpenAI’s projects could pose to humans. The letter was tinged with urgency, advising the agency to take swift and aggressive action against the company for violating existing regulations.

US senators introduce COPIED Act to combat intellectual property theft in creative industry

The Content Origin Protection and Integrity from Edited and Deepfaked Media Bill, also known as the COPIED Act, was introduced on 11 July 2024 by US Senators Marsha Blackburn, Maria Cantwell, and Martin Heinrich. The bill aims to safeguard the intellectual property of creatives, particularly journalists, publishers, broadcasters, and artists.

In recent times, the work and images of creatives have been used or modified without consent, at times to generate income. The push for legislation in this area intensified in January after explicit AI-generated images of the US musician Taylor Swift surfaced on X.

According to the bill, images, videos, audio clips, and text are considered deepfakes if they contain ‘synthetic or synthetically modified content that appears authentic to a reasonable person and creates a false understanding or impression’. If passed into law, the bill would apply to online platforms frequented by US-based customers that generate annual revenue of at least $50 million or register at least 25 million active users for three consecutive months.

Under the bill, companies that deploy or develop AI models must provide a feature allowing users to tag such content with contextual or content-provenance information, such as its source and history, in a machine-readable format. It would then be illegal to remove such tags for any reason other than research, or to use tagged works to train subsequent AI models or generate content. Victims would have the right to sue offenders.
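The bill does not prescribe a specific technical format for this provenance information, but the minimal sketch below illustrates what machine-readable metadata recording a work’s source and history might look like; all field names and values are assumptions made purely for the example.

```python
# Purely illustrative sketch: the COPIED Act does not specify a concrete format.
# Hypothetical machine-readable provenance metadata for a single image,
# recording its source and editing history.
import json
from datetime import datetime, timezone

provenance = {
    "asset_id": "example-image-001",          # hypothetical identifier
    "source": "Example News Photo Desk",      # original creator or publisher
    "created_at": datetime(2024, 7, 11, tzinfo=timezone.utc).isoformat(),
    "history": [
        {"action": "captured", "tool": "camera"},
        {"action": "cropped", "tool": "photo editor"},
    ],
    "synthetic": False,                       # whether the content is AI-generated
}

# Serialised alongside the asset so platforms and AI developers can read it.
print(json.dumps(provenance, indent=2))
```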

The COPIED Act is backed by several artist-affiliated groups, including SAG-AFTRA, the National Music Publishers’ Association, the Songwriters Guild of America (SGA), and the National Association of Broadcasters, as well as the US National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO), and the US Copyright Office. The bill has also received bipartisan support.

PayPal hit with $27.3 million fine in Poland

Poland’s antitrust and consumer protection watchdog, UOKiK, has fined PayPal Europe 106.6 million zlotys ($27.3 million) for failing to clearly outline, in its contractual clauses, the activities that could incur penalties. UOKiK said PayPal’s descriptions of prohibited activities were imprecise, making it difficult for users to understand which actions were not allowed and what the consequences could be.

UOKiK’s head, Tomasz Chrostny, criticised PayPal’s clauses as general, ambiguous, and incomprehensible, giving the company excessive discretion to decide whether a user has committed a prohibited act and what penalty to impose, which could include actions such as blocking funds in users’ accounts.

PayPal responded by emphasising its commitment to fair treatment and transparent communication with customers. The company stated that it has been cooperating with UOKiK during the investigation and is reviewing the decision. PayPal also noted that the decision is not final and that it has the opportunity to appeal in court.

AI advancements and digital divide discussed at Samsung event in Paris

Samsung unveiled its latest range of foldable devices, earbuds, and wearables at the Louvre in Paris, followed by a panel discussion with executives from Samsung, Google, Qualcomm, and more. The panel explored various AI-related topics, including Samsung’s two-way Interpreter translations and the company’s collaboration with Google on Circle To Search.

Dr Chris Brauer from Goldsmiths University of London presented findings from Samsung’s Mobile AI Report, highlighting a potential AI divide. He pointed out that while many people are embracing AI for its quality-of-life benefits, a minority remain reluctant, correlating with lower self-reported life satisfaction. This emerging divide could impact individuals’ ability to achieve their goals and navigate life successfully.

The report surveyed 5,000 adults across France, Germany, South Korea, the UK, and the US, focusing on creativity, productivity, social relationships, and physical health. The digital divide remains a significant issue, with 30% of the world still under- or unconnected, limiting access to the latest technology. Qualcomm’s Don McGuire emphasised the importance of addressing this divide to ensure broader accessibility to AI tools for healthcare, education, and socioeconomic advancement.

Why does this matter?

AI has been a part of our digital lives for years, but recent advancements have brought it to the forefront, thanks to tools like ChatGPT and DALL-E. As the world moves towards an AI-driven future, addressing the digital divide is crucial to ensure that everyone benefits from these technologies.

Microsoft reveals VALL-E 2 AI, achieving human-like speech

Microsoft has made a significant leap forward in AI speech generation with its VALL-E 2 text-to-speech (TTS) system. VALL-E 2 achieves human parity, meaning it can produce voices indistinguishable from real people. The system only needs a few seconds of audio to learn and mimic a speaker’s voice.

Tests on speech datasets like LibriSpeech and VCTK showed that VALL-E 2’s voice quality matches or even surpasses human quality. Features like ‘Repetition Aware Sampling’ and ‘Grouped Code Modeling’ allow the system to handle complex sentences and repetitive phrases naturally, ensuring smooth and realistic speech output.
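As a rough illustration of the repetition-handling idea, the toy sketch below falls back from nucleus (top-p) sampling to sampling the full token distribution when recent output becomes too repetitive; the window size, threshold, and switching rule are illustrative assumptions rather than Microsoft’s published implementation.

```python
# Toy sketch of repetition-aware decoding: prefer nucleus (top-p) sampling,
# but fall back to sampling the full distribution when the candidate token
# already dominates the recent decoding history. Thresholds and the switching
# rule are illustrative assumptions, not Microsoft's implementation.
import random

def nucleus_sample(probs, p=0.9):
    """Sample from the smallest set of highest-probability tokens whose mass >= p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights)[0]

def repetition_aware_sample(probs, history, window=10, max_ratio=0.5):
    """Pick the next token, breaking out of repetitive loops in recent history."""
    candidate = nucleus_sample(probs)
    recent = history[-window:]
    if recent and recent.count(candidate) / len(recent) > max_ratio:
        # Candidate repeats too often: sample from the full distribution instead.
        tokens, weights = zip(*probs.items())
        candidate = random.choices(tokens, weights=weights)[0]
    return candidate

# Example: a distribution that keeps favouring the already-repeated token "la".
history = ["la"] * 8 + ["dee", "da"]
probs = {"la": 0.7, "dee": 0.2, "da": 0.1}
print(repetition_aware_sample(probs, history))
```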

Despite releasing audio samples, Microsoft considers VALL-E 2 too risky for public release, citing potential misuse such as voice spoofing. This cautious approach aligns with wider industry concerns, as seen with OpenAI’s restrictions on its voice technology.

While VALL-E 2 represents a significant breakthrough, it remains a research project for now. The development of AI continues apace, with companies striving to balance innovation with ethical considerations.