Apple denies misusing Siri data following $95 million settlement

Apple has clarified that it has never sold data collected by its Siri voice assistant or used it to create marketing profiles. The statement, issued Wednesday, follows a $95 million settlement last week to resolve a class action lawsuit alleging that Siri had inadvertently recorded private conversations and shared them with third parties, including advertisers. Apple denied the claims and admitted no wrongdoing as part of the settlement, which could result in payouts of up to $20 per Siri-enabled device for millions of affected customers.

The controversy stemmed from claims that Siri sometimes activated unintentionally, recording sensitive interactions. Apple emphasised in its statement that Siri data is used minimally and only for real-time server input when necessary, with no retention of audio recordings unless users explicitly opt in. Even in such cases, the recordings are used solely to improve Siri’s functionality. Apple reaffirmed its commitment to privacy, stating, ‘Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone.’

This case has drawn attention alongside a similar lawsuit targeting Google’s Voice Assistant, currently pending in federal court in San Jose, California. Both lawsuits are spearheaded by the same legal teams, highlighting growing scrutiny over how tech companies handle voice assistant data.

Brazil warns tech firms to follow laws or face expulsion

Brazilian Supreme Court Judge Alexandre de Moraes reiterated on Wednesday that technology companies must comply with national laws to continue operating in the country. His statement followed Meta’s recent announcement that it would scale back its US fact-checking program, a move that has raised concerns about its potential impact on Brazil.

Speaking at an event marking the anniversary of anti-institution riots, Moraes emphasised that the court would not tolerate the use of hate speech for profit. Last year, he ordered the suspension of social media platform X for over a month due to its failure to moderate hate speech, a decision later upheld by the court. X owner Elon Musk criticised the move as censorship but ultimately complied with court demands to restore the platform’s services in Brazil.

Brazilian prosecutors have also asked Meta to clarify whether its US fact-checking changes will apply in Brazil, citing an ongoing investigation into social media platforms’ efforts to combat misinformation and violence. Meta has been given 30 days to respond but declined to comment through its local office.

Google TV introduces AI-powered news summaries with Gemini

Google has announced a major update to its TV operating system at CES 2025, integrating its Gemini AI assistant to deliver personalised news summaries. The new ‘News Brief’ feature will scrape news articles and YouTube headlines from trusted sources to generate a concise recap of daily events. Google plans to roll out the feature to both new and existing Google TV devices by late 2025.

The move marks Google’s deeper foray into AI-generated news, a space that has faced legal challenges from media companies over copyright concerns. While rival firms like OpenAI and Microsoft have been sued over unlicensed content use, Google’s News Brief does not currently display its sources, apart from related YouTube videos. AI-generated news has also faced accuracy issues, with previous AI models producing misleading or entirely false headlines.

Beyond news summaries, Google aims to make TVs more interactive, with Gemini allowing users to search for films, shows, and YouTube videos using natural language. Future Google TVs will include sensors to detect when users enter the room, enabling a more personalised experience. As the company continues expanding AI features in consumer technology, the success of News Brief may depend on how well it addresses content accuracy and transparency concerns.

Spain urges neutrality from social media platforms

The Spanish government stressed that social media platforms must remain neutral and refrain from interfering in political matters. The statement came after X’s owner, Elon Musk, commented on crime data involving foreigners in Catalonia.

Government spokesperson Pilar Alegria emphasised the need for absolute impartiality from such platforms when responding to questions about Musk’s remarks and his ongoing disagreements with European leaders like Keir Starmer and Emmanuel Macron.

Musk had reposted crime statistics from a Spanish newspaper, leading to criticism from Catalan officials. Catalonia’s Socialist leader Salvador Illa warned against using the region’s name to promote hate speech, while Spanish Prime Minister Pedro Sanchez rejected any link between immigration and crime rates.

The Spanish Interior Ministry previously reported stable or declining crime rates, affirming that immigration has no significant impact on criminal activity.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

US Supreme Court to decide TikTok’s fate amid ban fears

The future of TikTok in the United States hangs in the balance as the Supreme Court prepares to hear arguments on 10 January over a law that could force the app to sever ties with its Chinese parent company, ByteDance, or face a ban. The case centres on whether the law violates the First Amendment, with TikTok and its creators arguing that it does, while the US government maintains that national security concerns justify the measure. If the government wins, TikTok has stated it would shut down its US operations by 19 January.

Creators who rely on TikTok for income are bracing for uncertainty. Many have taken to the platform to express their frustrations, fearing disruption to their businesses and online communities. Some are already diversifying their presence on other platforms like Instagram and YouTube, though they acknowledge TikTok’s unique algorithm has provided visibility and opportunities not found elsewhere. Industry experts believe many creators are adopting a wait-and-see approach, avoiding drastic moves until the Supreme Court reaches a decision.

The Biden administration has pushed for a resolution without success, while President-elect Donald Trump has asked the court to delay the ban so he can weigh in once in office. If the ban proceeds, app stores and internet providers will be required to stop supporting TikTok, ultimately rendering it unusable. TikTok has warned that even a temporary shutdown could lead to a sharp decline in users, potentially causing lasting damage to the platform. A ruling from the Supreme Court is expected in the coming weeks.

TikTok faces new allegations of child exploitation

TikTok is under heightened scrutiny following newly unsealed allegations from a Utah lawsuit claiming the platform knowingly allowed harmful activities, including child exploitation and sexual misconduct, to persist on its livestreaming feature, TikTok Live. According to the lawsuit, TikTok disregarded the issue because it ‘profited significantly’ from these livestreams. The revelations come as the app faces a potential nationwide ban in the US unless its parent company, ByteDance, divests ownership.

The complaint, filed by Utah’s Division of Consumer Protection in June, accuses TikTok Live of functioning as a ‘virtual strip club,’ connecting minors with adult predators in real time. Internal documents and investigations, including the Project Meramec and Project Jupiter probes, reveal that TikTok was aware of the dangers. The findings indicate that hundreds of thousands of minors bypassed age restrictions and were allegedly groomed by adults to perform explicit acts in exchange for virtual gifts. The probes also uncovered criminal activities such as money laundering and drug sales facilitated through TikTok Live.

TikTok has defended itself, claiming it prioritises user safety and accusing the lawsuit of distorting facts by selectively quoting outdated internal documents. A spokesperson emphasised the platform’s ‘proactive measures’ to support community safety and dismissed the allegations as misleading. However, the unsealed material from the case, released by Utah Judge Coral Sanchez, paints a stark picture of TikTok Live’s risks to minors.

This lawsuit is not an isolated case. In October, 13 US states and Washington, D.C., filed a bipartisan lawsuit accusing TikTok of exploiting children and fostering addiction to the app. Utah Attorney General Sean Reyes called social media a pervasive tool for exploiting America’s youth and welcomed the disclosure of TikTok’s internal communications as critical evidence for demonstrating the platform’s culpability.

Why does it matter?

The controversy unfolds amid ongoing national security concerns about TikTok’s ties to China. President Joe Biden signed legislation authorising a TikTok ban last April, citing risks that the app could share sensitive data with the Chinese government. The US Supreme Court is set to hear arguments over the ban on 10 January, with a decision expected shortly thereafter. The case underscores the intensifying debate over social media’s role in safeguarding users while balancing innovation and accountability.

Meta appoints Joel Kaplan as chief global affairs officer in strategic leadership shift

Meta Platforms has announced Joel Kaplan as its new chief global affairs officer, succeeding Nick Clegg in a significant leadership transition. Kaplan, a prominent Republican and former deputy chief of staff for policy under George W. Bush, has been with Meta since 2011 and previously reported to Clegg.

The reshuffle comes as the company navigates a delicate political landscape ahead of US President-elect Donald Trump’s inauguration, addressing past tensions with the administration over its content policies. Nick Clegg, who joined Meta in 2018 after serving as the UK’s deputy prime minister, announced his decision to step down, describing the timing as ‘right’ for the transition.

He has been instrumental in shaping Meta’s policies on contentious issues like election integrity and content moderation, including creating its independent oversight board. Clegg praised Kaplan as the ideal choice to guide Meta through evolving societal and political expectations for technology.

Kaplan’s tenure at Meta has not been without controversy. He has faced accusations of prioritising conservative agendas under the guise of neutrality, which Meta denied. Notably, Kaplan attended a 2018 Senate hearing on sexual assault allegations against then-Supreme Court nominee Brett Kavanaugh, sparking internal dissent at the company. Despite these challenges, Kaplan’s appointment underscores Meta’s intent to strengthen ties with Republican leadership.

The leadership change aligns with Meta’s broader efforts to mend its relationship with Trump and his administration. The company’s $1 million donation to Trump’s inaugural fund and CEO Mark Zuckerberg’s gestures to appease conservative concerns reflect this shift. The appointment marks a significant chapter in Meta’s ongoing balancing act between political pressures and its role as a global tech powerhouse.

OpenAI delays Media Manager amid creator backlash

In May, OpenAI announced plans for ‘Media Manager,’ a tool to allow creators to control how their content is used in AI training, aiming to address intellectual property (IP) concerns. The project remains unfinished seven months later, with critics claiming it was never prioritised internally. The tool was intended to identify copyrighted text, images, audio, and video, allowing creators to include or exclude their work from OpenAI’s training datasets. However, its future remains uncertain, with no updates since August and missed deadlines.

The delay comes amidst growing backlash from creators and a wave of lawsuits against OpenAI. Plaintiffs, including prominent authors and artists, allege that the company trained its AI models on their works without authorisation. While OpenAI provides ad hoc opt-out mechanisms, critics argue these measures are cumbersome and inadequate.

Media Manager was seen as a potential solution, but experts doubt its effectiveness in addressing complex legal and ethical challenges, including global variations in copyright law and the burden placed on creators to protect their works. OpenAI continues to assert that its AI models transform, rather than replicate, copyrighted material, defending itself under ‘fair use’ protections.

While the company has implemented filters to minimise IP conflicts, the absence of a comprehensive tool like Media Manager leaves unresolved questions about compliance and compensation. As OpenAI battles legal challenges, the effectiveness and impact of Media Manager—if it ever launches—remain uncertain in the face of an evolving IP landscape.

Elon Musk’s regulatory challenges and potential influence under Trump’s presidency

In the final days of Joe Biden’s US presidency, the SEC pressured Elon Musk to settle allegations of securities violations related to his 2022 Twitter takeover or face civil charges. Musk’s response, shared via social media, included a legal letter accusing the SEC of an ‘improperly motivated’ ultimatum and demanding to know if the White House had influenced the action. Both the SEC and White House declined to comment.

As Donald Trump prepares to take office, questions arise about how Musk’s ties to the incoming administration could impact ongoing federal investigations into his business ventures, including Tesla, SpaceX, and Neuralink. Sources reveal at least 20 active probes into issues ranging from Tesla’s driver-assistance systems to SpaceX’s environmental practices.

Critics warn that Trump’s administration might scale back regulatory scrutiny, while legal experts argue that evidence-based cases could still proceed regardless of Musk’s political connections. Musk’s proximity to Trump has intensified since the election, with Musk participating in high-profile meetings and being appointed to co-lead a government efficiency initiative.

Musk has openly discussed using his position to push policies that could benefit his businesses, such as easing driverless-vehicle regulations. Meanwhile, ongoing investigations, including those by the DOJ and the National Highway Traffic Safety Administration, face uncertainties over enforcement under the new administration.

Musk’s business dealings, including contacts with foreign leaders and regulatory disputes, continue to draw attention. Despite allegations of political interference, agencies like the EPA and NASA have emphasised their commitment to legal responsibilities. However, critics fear that Musk’s influence could undermine the integrity of federal oversight during Trump’s second term.