DuckDuckGo calls for new EU action against Google

Privacy-focused search engine DuckDuckGo has urged the European Commission to launch three new investigations into Google’s compliance with the EU’s Digital Markets Act (DMA). DuckDuckGo argues that the rules, designed to curb Big Tech dominance, have not yet delivered meaningful change in the search market.

The Digital Markets Act, adopted in 2022, requires major tech firms to ensure users can switch services easily and prohibits practices that favour their own products. DuckDuckGo’s senior vice-president, Kamyl Bazbaz, claimed in a blog post that Google’s measures fall short of the law’s requirements, calling for formal probes to drive compliance.

Google is already under two DMA-related investigations concerning its app store rules and alleged discrimination against third-party services. A spokesperson for the company stated that Google is cooperating with the Commission and has made significant adjustments to its services. They emphasised consumer choice and data protection as key priorities while rejecting claims of non-compliance.

DuckDuckGo also accused Google of proposing a data-sharing arrangement that would give competitors anonymised search data but exclude the vast majority of search queries, rendering it ineffective. The company further alleges that Google has failed to make switching search engines straightforward. Companies breaching the DMA can face fines of up to 10% of their global annual revenue.

AI chatbots in healthcare: Balancing potential and privacy concerns amidst regulatory gaps

Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.

Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.

X owner Elon Musk recently encouraged users to upload medical imagery to Grok, his platform’s AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.

Lyft enhances driver safety measures

Lyft is introducing new safety features, including rider verification badges, to enhance security on its platform. This update provides drivers with more passenger information, such as names, ratings, and verification badges, before accepting rides. The company will also implement safety alerts in certain areas, such as school zones and traffic enforcement locations, to further safeguard both riders and drivers.

The changes come alongside an easier dashcam registration process, with passengers now notified when recordings may occur during their ride. Another innovation allows drivers to report traffic conditions and hazards, contributing to real-time map updates. In addition, a new restroom finder tool will let drivers locate and rate facilities, improving convenience during long shifts.

Lyft’s competitor, Uber, launched similar safety updates earlier, including driver options to record trips via smartphone. Lyft’s initiatives signal its commitment to staying competitive while prioritising the safety and experience of its users.

OpenAI and Common Sense Media launch AI training for teachers

OpenAI, in partnership with Common Sense Media, has introduced a free training course aimed at helping teachers understand AI and prompt engineering. The course is designed to equip educators with the skills to use ChatGPT effectively in classrooms, including creating lesson content and streamlining administrative tasks.

The launch comes as OpenAI increases its efforts to promote the positive educational uses of ChatGPT, which became widely popular after its release in November 2022. While the tool’s potential for aiding students has been recognised, its use also sparked concerns about cheating and plagiarism.

Leah Belsky, formerly of Coursera and now leading OpenAI’s education efforts, emphasised the importance of teaching both students and teachers to use AI responsibly. Belsky noted that student adoption of ChatGPT is high, with many parents viewing AI literacy as crucial for future careers. The training is available on Common Sense Media’s website, marking the first of many initiatives in this partnership.

TikTok faces divestment deadline in the US

Senator Richard Blumenthal has reaffirmed that ByteDance must divest TikTok’s US operations by January 19 or risk a ban. The measure, driven by security concerns over potential Chinese surveillance, was signed into law in April. A one-time extension of 90 days is available if significant progress is made, but Blumenthal emphasised that laws cannot be disregarded.

Blumenthal also raised alarms over China’s influence on US technology companies. Tesla’s production in China and the US military’s reliance on SpaceX were flagged as security risks. He pointed to Elon Musk’s economic ties with China as a potential vulnerability, warning that such dependencies could compromise national interests.

Apple faced criticism for complying with Chinese censorship and surveillance demands while generating significant revenue from the country. Concerns were voiced that major tech companies might prioritise profits over US security. Neither Apple nor Tesla has commented on these claims.

TikTok and ByteDance are challenging the divestment law in court. A decision is expected soon, but restrictions will tighten for app stores and hosting services if compliance is not achieved. The Biden administration has clarified that it supports ending Chinese ownership of TikTok rather than an outright ban.

California passes new law regulating AI in healthcare

California Governor Gavin Newsom has signed Assembly Bill 3030 (AB 3030) into law, regulating the use of generative AI (GenAI) in healthcare. Effective 1 January 2025, the law mandates that any AI-generated communication related to patient care must include a clear disclaimer informing patients of its AI origin and must direct them to a human healthcare provider for further clarification.

The bill is part of a larger effort to ensure patient transparency and mitigate risks linked to AI in healthcare, especially as AI tools become increasingly integrated into clinical environments. However, AI-generated communications that have been reviewed by licensed healthcare professionals are exempt from these disclosure requirements. The law focuses on clinical communications and does not apply to non-clinical matters like appointment scheduling or billing.

AB 3030 also introduces accountability for healthcare providers who fail to comply, with physicians facing oversight from the Medical Board of California. The law aims to balance AI’s potential benefits, such as reducing administrative burdens, with the risks of inaccuracies or biases in AI-generated content. California’s move is part of broader efforts to regulate AI in healthcare, aligning with initiatives like the federal AI Bill of Rights.

As the law takes effect, healthcare providers in California will need to adapt to these new rules, ensuring that AI-generated content is flagged appropriately while maintaining the quality of patient care.

OpenAI faces lawsuit from Indian news agency

Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.

The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.

OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.
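The opt-out OpenAI refers to is typically exercised through a website’s robots.txt file. As an illustrative sketch (crawler names and their behaviour can change, so publishers should check OpenAI’s current documentation), a site wishing to block OpenAI’s documented GPTBot crawler could serve rules like these:

```
# robots.txt — ask OpenAI's GPTBot crawler not to index any part of the site
User-agent: GPTBot
Disallow: /
```

Note that robots.txt is a voluntary convention: it signals a publisher’s wishes to well-behaved crawlers rather than technically enforcing the block.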

German court rules Facebook users can seek compensation for data breach

Germany’s Federal Court of Justice (BGH) has ruled that Facebook users affected by data breaches in 2018 and 2019 are entitled to compensation, even without proving financial losses. The court determined that the loss of control over personal data is sufficient grounds for damages, marking a significant step in data protection law.

The case stems from a breach of Facebook’s friend search feature, disclosed in 2021, in which third parties accessed user data by guessing phone numbers. Lower courts in Cologne previously dismissed compensation claims, but the BGH ordered a re-examination, suggesting around €100 in damages could be awarded per user without proof of financial harm.

Meta, Facebook’s parent company, has resisted compensation, arguing that users did not suffer concrete damages. A spokesperson for Meta described the ruling as inconsistent with recent European Court of Justice decisions and noted that similar claims have been dismissed by German courts in thousands of cases. The breach reportedly impacted around six million users in Germany.

The court also instructed a review of Facebook’s terms of use, questioning whether they were transparent and whether user consent for data handling was voluntary. The decision adds pressure on companies to strengthen data protection measures and could set a precedent for future claims across Europe.

Tighter messaging controls for under-13 players on Roblox

Roblox has announced new measures to protect users under 13, permanently removing their ability to send messages outside of games. In-game messaging will remain available, but only with parental consent. Parents can now remotely manage accounts, oversee friend lists, set spending controls, and enforce screen time limits.

The gaming platform, which boasts 89 million users, has faced scrutiny over claims of child abuse on its service. In August, Turkish authorities blocked Roblox, citing concerns over user-generated content. A lawsuit filed in 2022 accused the company of facilitating exploitation, including sexual and financial abuse of a young girl in California.

New rules also limit communication for younger players, allowing under-13 users to receive public broadcast messages only within specific games. Roblox will implement updated content descriptors such as ‘Minimal’ and ‘Restricted’ to classify games, restricting access for users under nine to appropriate experiences.

Access to restricted content will now require users to be at least 17 years old and verify their age. These changes aim to enhance child safety amid growing concerns and highlight Roblox’s efforts to address ongoing challenges in its community.

Meta responds to antitrust fine over WhatsApp data

Meta Platforms is challenging a decision by India’s Competition Commission (CCI) over WhatsApp’s data-sharing practices. The regulator imposed a $25.4 million fine and restricted data-sharing between WhatsApp and other Meta-owned applications for five years, citing antitrust violations linked to the 2021 privacy policy.

The investigation began in March 2021 after WhatsApp introduced a controversial privacy policy enabling data transfers within Meta’s ecosystem. The CCI ruled that WhatsApp must not condition access to its services on user agreement to share personal data for advertising purposes.

Meta maintains the privacy policy does not affect the confidentiality of personal messages. A spokesperson emphasised no user accounts were deleted or had functionality reduced due to the update, underscoring its commitment to user privacy.

The company plans to legally challenge the CCI’s decision, reiterating its stance that the policy complies with privacy standards. The dispute highlights the growing scrutiny of global tech companies’ practices in India, one of the largest digital markets.