New classroom features announced for ChromeOS

Google is rolling out a new accessibility feature for Chromebooks that allows users to control their devices using head and facial movements. Initially introduced in December, the tool is designed for people with motor impairments and uses AI to turn facial gestures into cursor control. The feature is available on Chromebooks with 8GB of RAM or more and builds on Google’s prior efforts, such as its Project Gameface accessibility tool for Windows and Android.

In addition to accessibility, Google is unveiling over 20 new Chromebook models this year, including the Lenovo Chromebook Plus 2-in-1, to complement its existing lines. The devices target educators, students, and general users seeking enhanced performance and versatility.

Google has also introduced ‘Class Tools’ for ChromeOS, which offer teachers real-time screen-sharing capabilities. These tools allow educators to share content directly with students, monitor their progress, and activate live captions or translations during lessons. Integration with Figma’s FigJam now brings interactive whiteboard assignments to Google Classroom, promoting collaboration and creative group work. Together, these updates aim to enhance accessibility and productivity in education.

Confusion as Meta users face automatic follow issue with Trump profiles

Meta users in the US are reporting an unusual glitch: after intentionally unfollowing the accounts of President Donald Trump, Vice President JD Vance, and first lady Melania Trump following the administration’s transition, they find themselves automatically re-following those accounts. Feedback from users, including actress Demi Lovato and comedian Sarah Colonna, highlighted frustration over the inability to maintain their choice to unfollow prominent political figures.

Upon the change of administration, official White House social media accounts are supposed to transition smoothly to the new leaders. While Meta’s communications director Andy Stone acknowledged that followers from the Biden administration were carried over to Trump’s accounts, he confirmed that users were not being forced to re-follow these profiles. Stone suggested that delays in processing follow and unfollow requests might contribute to the confusion experienced by users.

Many individuals reported that the accounts reappeared in their feeds despite unfollowing them multiple times, raising questions about the underlying technical cause. Users are expressing concerns over privacy and choice on social media platforms, as the ability to curate their feeds appears compromised. Beyond individual frustration, the automatic re-following carries broader implications for user control in digital spaces.

As Meta has yet to release a detailed response to the reported glitch, users continue to voice their concerns across multiple platforms. The situation underscores an ongoing need for clarity and assurance regarding user preferences in social media interactions, especially during a politically sensitive time.

India probes Uber and Ola over iPhone pricing

The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen over reports and user complaints suggesting that iPhone users are being charged significantly higher fares than Android users for the same rides. The investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are occurring and, if so, whether they constitute unfair trade practices.

The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.

Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.

Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.

Private messages shared by LinkedIn spark class-action lawsuit

LinkedIn, owned by Microsoft, faces a class-action lawsuit from Premium customers who allege that the platform improperly shared their private messages with third parties to train AI models. According to the complaint, LinkedIn introduced a new privacy setting last August that allowed users to control the sharing of their data, yet failed to adequately inform them that their messages were being used for AI training.

Customers claim that a stealthy update to LinkedIn’s privacy policy on 18 September outlined this data usage, while also stating that opting out of data sharing would not prevent past training from being utilised.

The plaintiffs, representing millions of Premium users, seek damages for breaches of contract and violations of California’s unfair competition laws. In addition, they demand compensation of $1,000 for each individual affected by alleged violations of the federal Stored Communications Act. The lawsuit highlights concerns over the potential misuse of customer data, asserting that LinkedIn deliberately obscured its practices to evade scrutiny regarding user privacy.

LinkedIn has denied the allegations, stating that the claims lack merit. The legal action came just hours after President Donald Trump announced a significant AI investment initiative backed by Microsoft and other major companies. The case, De La Torre v. LinkedIn Corp, was filed in federal district court in San Jose, California.

With privacy becoming an increasingly crucial issue, the implications of this lawsuit could resonate throughout the tech industry. Customers are scrutinising platforms’ commitments to safeguarding personal information, especially in the context of rapidly evolving AI technologies.

US FTC leader Lina Khan announces resignation

Lina Khan, a prominent advocate of strong antitrust enforcement, has announced her resignation as chair of the US Federal Trade Commission (FTC) in a memo to staff. Her departure, set to occur in the coming weeks, marks the end of a tenure that challenged numerous corporate mergers and pushed for greater accountability among powerful companies.

During her leadership, Khan spearheaded high-profile lawsuits against Amazon, launched investigations into Microsoft, and blocked major deals, including Kroger’s planned $25 billion acquisition of Albertsons. Her efforts often focused on protecting consumers and workers from potential harms posed by dominant corporations.

Khan, the youngest person to lead the FTC, first gained recognition in 2017 for her work criticising Amazon’s market practices. She argued that tech giants exploited outdated antitrust laws, allowing them to sidestep scrutiny. Her aggressive approach divided opinion, with courts striking down some of her policies, including a proposed ban on noncompete clauses.

Following Khan’s exit, the FTC faces a temporary deadlock with two Republican and two Democratic commissioners. Republican Andrew Ferguson has assumed the role of chair, and a Republican majority is expected once the Senate approves Mark Meador, a pro-enforcement nominee, to complete the five-member commission.

Meta, X, Google join EU code to combat hate speech

Major tech companies, including Meta’s Facebook, Elon Musk’s X, YouTube, and TikTok, have committed to tackling online hate speech through a revised code of conduct now linked to the European Union’s Digital Services Act (DSA). Announced Monday by the European Commission, the updated agreement also includes platforms like LinkedIn, Instagram, Snapchat, and Twitch, expanding the coalition originally formed in 2016. The move reinforces the EU’s stance against illegal hate speech, both online and offline, according to EU tech commissioner Henna Virkkunen.

Under the revised code, platforms must allow not-for-profit organisations or public entities to monitor how they handle hate speech reports and ensure at least 66% of flagged cases are reviewed within 24 hours. Companies have also pledged to use automated tools to detect and reduce hateful content while disclosing how recommendation algorithms influence the spread of such material.

Additionally, participating platforms will provide detailed, country-specific data on hate speech incidents categorised by factors like race, religion, gender identity, and sexual orientation. Compliance with these measures will play a critical role in regulators’ enforcement of the DSA, a cornerstone of the EU’s strategy to combat illegal and harmful content online.

US regulator escalates complaint against Snap

The United States Federal Trade Commission (FTC) has referred a complaint about Snap Inc’s AI-powered chatbot, My AI, to the Department of Justice (DOJ) for further investigation. The FTC alleges the chatbot caused harm to young users, though specific details about the alleged harm remain undisclosed.

Snap Inc defended its chatbot, asserting that My AI operates under rigorous safety and privacy measures, and criticised the FTC for lacking concrete evidence to support its claims. Despite the company’s reassurances, the FTC stated it had uncovered indications of potential legal violations.

The announcement impacted Snap’s stock performance, with shares dropping by 5.2% to close at $11.22 on Thursday. The US FTC noted that publicising the complaint’s transfer to the DOJ was in the public interest, underscoring the gravity of the allegations.

AI-generated news alerts paused by Apple amid accuracy concerns

Apple has halted AI-powered notification summaries for news and entertainment apps after backlash over misleading news alerts. A BBC complaint followed a summary that misrepresented an article about a murder case involving UnitedHealthcare’s CEO.

The latest developer previews for iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3 disable notification summaries for such apps, with Apple planning to reintroduce them after improvements. Notification summaries will now appear in italics to help users distinguish them from standard alerts.

Users will also gain the ability to turn off notification summaries for individual apps directly from the Lock Screen. Apple will notify users in the Settings app that the feature remains in beta and may contain errors.

A public beta is expected next week, but the general release date for iOS 18.3 remains unclear. Apple had already announced plans to clarify that summary texts are generated by Apple Intelligence.

US Supreme Court to hear challenge to Texas pornography age verification law

The US Supreme Court will hear a challenge on Wednesday regarding a Texas law that mandates adult websites verify the age of users before granting access to potentially harmful material. The law, which is part of a broader trend across Republican-led states, requires users to submit personal information proving they are at least 18 years old to access pornographic content. The case raises significant First Amendment concerns, as adult entertainment industry groups argue that the law unlawfully restricts free speech and exposes users to risks such as identity theft and data breaches.

The challengers, including the American Civil Liberties Union and the Free Speech Coalition, contend that alternative methods like content-filtering software could better protect minors without infringing on adults’ rights to access non-obscene material. Texas, however, defends the law, citing concerns over the ease with which minors can access explicit content online.

This case is significant because it will test the balance between state efforts to protect minors from explicit content and the constitutional rights of adults to access protected expression. If the Supreme Court upholds the law, it could set a precedent for similar age-verification measures across the US.

Indonesia targets age limits for social media access

Indonesia plans to implement interim guidelines to protect children on social media as it works toward creating a law to establish a minimum age for users, a senior communications ministry official announced on Wednesday. The move follows discussions between Communications Minister Meutya Hafid and President Prabowo Subianto, aiming to address concerns about online safety for children.

The proposed law will mirror recent regulations in Australia, which banned children under 16 from accessing social media platforms like Instagram, Facebook, and TikTok, penalising tech companies that fail to comply. In the meantime, Indonesia will issue regulations requiring platforms to follow child protection guidelines, focusing on shielding children from harmful content while still allowing access to some degree.

Public opinion on the initiative is divided. While parents like Nurmayanti support stricter controls to reduce exposure to harmful material, human rights advocates, including Anis Hidayah, urge caution to ensure children’s access to information is not unduly restricted. A recent survey revealed nearly half of Indonesian children under 12 use the internet, with many accessing social media platforms such as Facebook, Instagram, and TikTok.

This regulatory push reflects Indonesia’s broader efforts to balance digital innovation with safeguarding younger users in its rapidly growing online landscape.