Apple denies misusing Siri data following $95 million settlement

Apple has clarified that it has never sold data collected by its Siri voice assistant or used it to build marketing profiles. The statement, issued on Wednesday, follows a $95 million settlement reached last week to resolve a class action lawsuit alleging that Siri had inadvertently recorded private conversations and shared them with third parties, including advertisers. Apple denied the claims and admitted no wrongdoing as part of the settlement, which could result in payouts of up to $20 per Siri-enabled device for millions of affected customers.

The controversy stemmed from claims that Siri sometimes activated unintentionally, recording sensitive interactions. Apple emphasised in its statement that Siri data is used minimally, with real-time server processing only when necessary, and that audio recordings are not retained unless users explicitly opt in. Even then, the recordings are used solely to improve Siri’s functionality. Apple reaffirmed its commitment to privacy, stating, ‘Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone.’

This case has drawn attention alongside a similar lawsuit targeting Google’s voice assistant, currently pending in federal court in San Jose, California. Both lawsuits are spearheaded by the same legal teams, highlighting growing scrutiny of how tech companies handle voice assistant data.

Legal world embraces AI for access to justice

AI is revolutionising the legal field, offering solutions to improve fairness and reduce costs in the justice system. Tools powered by AI are being used to streamline tasks like analysing evidence, drafting contracts, and preparing cases. Organisations like the Westway Trust in London are adopting AI to assist clients with complex disputes, such as benefits appeals and housing issues. These tools save hours of work, enabling paralegals to focus on providing better support.

The technology has sparked excitement and debate among legal professionals. AI models are being developed to help barristers identify inconsistencies in real-time court transcripts and assist judges with evidence analysis. Advocates argue that AI could make justice more accessible, while reducing the burden on legal practitioners and cutting costs for clients. However, concerns about accuracy and bias persist, with experts emphasising the importance of human oversight.

Sir Geoffrey Vos, the Master of the Rolls, has underscored the need for AI to complement, not replace, human judges. Guidelines stress transparency in AI use and the responsibility of lawyers to verify outputs. While tools like ChatGPT can provide general advice, professionals caution against relying on non-specialised AI for legal matters. Experts believe that AI will play a crucial role in addressing the fairness gap in the justice system without compromising the rule of law.

Google counters US push to sell Chrome

Google has proposed an alternative to the United States Department of Justice’s recommendation that it divest its Chrome browser. Rather than a forced sale, the company suggests it be barred from using app licensing agreements to secure default positions for its software.

The proposal follows a landmark ruling that found Google illegally maintains a monopoly in online search. The government seeks stronger measures, including a ban on the exclusive deals that entrench Google’s dominance on smartphones and other devices.

Judge Amit Mehta’s decision on antitrust remedies is expected to influence the tech industry. Google plans to appeal any adverse ruling.

EU moves to resolve dispute with India over ICT tariffs

The European Union is moving to resolve a trade dispute with India over tariffs on ICT goods. India has effectively blocked the World Trade Organization (WTO) process by appealing a panel report favourable to the EU to the non-functional WTO Appellate Body, stalling resolution. India has also rejected alternative dispute resolution methods, such as ad hoc appeal arbitration or a mutually agreed solution.

In response, the EU is turning to its Enforcement Regulation, which allows it to enforce international trade obligations when dispute settlement is blocked, ensuring that WTO rules are respected. The EU has launched a consultation of concerned stakeholders, with responses due by 10 February 2025, to guide decisions on potential commercial policy measures should a mutually satisfactory solution not be reached.

At the same time, the EU continues to seek a resolution through alternative means, inviting India to join the Multi-Party Interim Appeal Arrangement (MPIA) or agree to ad hoc appeal arbitration. The dispute began in 2014, when India imposed customs duties of up to 20% on various ICT products, which the EU argues violate India’s WTO commitment to apply a zero-duty rate.

In 2019, the EU initiated WTO dispute settlement proceedings, and in April 2023 a WTO panel ruled in favour of the EU, confirming that India’s tariffs were inconsistent with WTO rules. India appealed the decision in December 2023, prolonging the dispute.

Global stakeholders chart the course for digital governance at the IGF in Riyadh

At the Internet Governance Forum (IGF) in Riyadh, Saudi Arabia, a key discussion moderated by Timea Suto gathered experts to tackle challenges in AI, data management, and internet governance. Speakers emphasised balancing innovation with regulatory consistency while highlighting the need for inclusive frameworks that address societal biases and underrepresented voices.

Thomas Schneider of Switzerland’s Federal Office of Communications (OFCOM) underscored the Council of Europe’s AI convention as a promising standard for global interoperability. Meta’s Flavia Alves advocated for open-source AI to drive global collaboration and safer products. Meanwhile, Yoichi Iida from Japan’s Ministry of Communications outlined the G7 Hiroshima AI Process code of conduct as an international step forward, while concerns about dataset biases were raised from the audience.

Data governance discussions focused on privacy and trust in cross-border flows. Maarit Palovirta of Connect Europe called for harmonised regulations to protect privacy while fostering innovation. Yoichi Iida highlighted OECD initiatives on trusted data sharing, with Amr Hashem of the GSMA stressing the need to develop infrastructure alongside governance, particularly in underserved regions.

The future of internet governance also featured prominently, with Irina Soeffky from Germany’s Digital Ministry reinforcing the multi-stakeholder model amid calls to update WSIS structures. Audience member Bertrand de La Chapelle proposed reforming the Internet Governance Forum to reflect current challenges. Jacques Beglinger of EuroDIG stressed the importance of grassroots inclusion, while Desiree Milosevic-Evans highlighted gender representation gaps in governance.

Canada’s Larisa Galadza framed the coming year as critical for advancing the Global Digital Compact, with priorities on AI governance under Canada’s G7 presidency. Maria Fernanda Garza of the International Chamber of Commerce (ICC) called for alignment in governance while maintaining flexibility for local needs amid ongoing multilateral challenges.

Speakers concluded that collaboration, inclusivity, and clear mandates are key to shaping effective digital governance. As technological change accelerates, the dialogue reinforced the need for adaptable, action-oriented strategies to ensure equity and innovation globally.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Social media fine plan dropped in Australia

Australia’s government has abandoned a proposal to fine social media platforms up to 5% of their global revenue for failing to curb online misinformation. The decision follows resistance from various political parties, making the legislation unlikely to pass the Senate.

Communications Minister Michelle Rowland stated the proposal aimed to enhance transparency and hold tech companies accountable for limiting harmful misinformation online. Despite broad public support for tackling misinformation, opposition from conservative and crossbench politicians stalled the plan.

The centre-left Labor government, currently lagging in polls, faces criticism for its approach. Greens senator Sarah Hanson-Young described the proposed law as a ‘half-baked option,’ adding to calls for more robust measures against misinformation.

Industry group DIGI, whose members include Meta, argued the proposal merely reinforced an existing code. Australia’s push to regulate tech platforms reflects broader concerns about foreign platforms undermining national sovereignty.

Social media blamed for fuelling UK unrest, Ofcom finds

Ofcom has linked the violent unrest in England and Northern Ireland during the summer to the rapid spread of harmful content on social media platforms. The media regulator found that disinformation and illegal posts circulated widely online following the Southport stabbings in July, which sparked the disorder.

While some platforms acted swiftly to remove inflammatory content, others were criticised for uneven responses. Experts highlighted the significant influence of social media in driving divisive narratives during the crisis, with some calling for platforms to be held accountable for unchecked dangerous content.

Ofcom, which has faced criticism for its handling of the situation, argued that its enhanced powers under the incoming Online Safety Act had not yet come into force at the time. The new legislation will introduce stricter responsibilities for tech firms in tackling harmful content and disinformation.

The unrest, the worst seen in the United Kingdom in a decade, resulted in arrests and public scrutiny of tech platforms. A high-profile row erupted between the Prime Minister and Elon Musk, after the billionaire suggested that civil war was inevitable following the disorder, a claim strongly rebuked by Sir Keir Starmer.

Google DeepMind’s AI may ease culture war tensions, say researchers

A new AI tool created by Google DeepMind, called the ‘Habermas Machine,’ could help reduce culture war divides by mediating between different viewpoints. The system takes individual opinions and generates group statements that reflect both majority and minority perspectives, aiming to foster greater agreement.

Developed by researchers including Professor Chris Summerfield from the University of Oxford, the system has been tested in the United Kingdom with more than 5,000 participants. The statements generated by the AI were often rated higher in clarity and quality than those written by human mediators, and they increased group consensus by eight percentage points on average.

The Habermas Machine was also used in a virtual citizens’ assembly on topics such as Brexit and universal childcare. It was able to produce group statements that acknowledged minority views without marginalising them, but the AI approach does have its critics.

Some researchers argue that AI-mediated discussions don’t always promote empathy or give smaller minorities enough influence in shaping the final statements. Despite these concerns, the potential for AI to assist in resolving social disagreements remains a promising development.

Massachusetts parents sue school over AI use dispute

The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a ‘D’ grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.

The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause ‘irreparable harm’ to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is widespread information supporting their view that using AI is not plagiarism.

The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.

Independent body in Ireland empowers EU social media users to challenge content moderation decisions

A new independent body in Ireland will allow social media users in the European Union to challenge content moderation decisions made by platforms like Facebook, TikTok, and YouTube. Established under the EU Digital Services Act (DSA), this Appeals Centre aims to provide users with an alternative to the courts when disputing content decisions. Supported by Meta’s Oversight Board Trust and certified by Ireland’s media regulator, the centre is expected to begin operations by the end of the year. It will expand to include more platforms over time.

Thomas Hughes, CEO of the Appeals Centre, emphasised the body’s independence from governments and companies, ensuring that social media content policies are applied fairly. The centre’s team of experts will review cases within 90 days to determine if the platforms’ actions align with their stated policies. The European Commission has expressed support for the initiative, with spokesperson Thomas Regnier highlighting the importance of uniform development across the EU to strengthen online user rights.

Located in Dublin, the Appeals Centre will operate on a funding model that charges social media companies a fee for each case, while users pay a nominal fee that is refunded if their appeal is successful. However, platforms are not obligated to participate, as the centre lacks the power to enforce binding settlements. The centre will be governed by a board of seven non-executive directors.