FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.

Bluesky gains millions as users leave X

Social media platform Bluesky is experiencing rapid growth as users abandon Elon Musk’s X following Donald Trump’s presidential election victory and concerns over upcoming changes to the platform’s terms of service. Bluesky reported gaining 2.5 million new users in a week, pushing its total to over 16 million. Activity on Bluesky has surged, with record engagement levels, as organisations like the Guardian and prominent figures such as former CNN anchor Don Lemon leave X.

The election of Trump brought both heightened activity and backlash for X. On November 6, the platform saw 46.5 million visits in the US, a year-high figure, but also recorded more than 115,000 account deactivations, the most since Musk’s acquisition. Bluesky and Meta’s Threads also saw increased traffic, signalling growing competition. Analysts attribute Bluesky’s growth partly to dissatisfaction with X’s handling of misinformation and controversial content during the election.

Adding to the exodus is X’s imminent policy change requiring all legal disputes to be settled in Texas courts, a move critics claim favours Musk. The Center for Countering Digital Hate argued this could shield the platform from accountability, while Musk and X remained silent on the controversy. Despite Bluesky’s growth, it trails competitors like Threads and X in total user base, with analysts suggesting X remains strong due to its association with President-elect Trump and the network effects inherent to microblogging.

German football club exits social media platform X over concerns of hate speech and disinformation

German football club St Pauli has announced its withdrawal from the social media platform X, formerly Twitter, citing concerns over hate speech and disinformation. The club accused X’s owner, Elon Musk, of turning the platform into a space for unchecked racism and conspiracy theories, particularly amid Germany’s heated political climate ahead of snap elections in February.

St Pauli, known for its progressive values and activism, said X has become a “hate machine” where threats and insults go unpunished under the guise of free speech. The decision mirrors similar moves by media outlets such as The Guardian and Spain’s La Vanguardia, which also left X over concerns about harmful content.

While the German club’s account will remain online as an archive, St Pauli will no longer post new content. The Hamburg-based team, celebrated for its left-wing fan base and social initiatives, stated the decision aligns with its commitment to promoting inclusivity and combating hate.

Elon Musk revives lawsuit against OpenAI

Tesla CEO Elon Musk has revived a legal action against OpenAI, alleging the organisation abandoned its original non-profit mission. Filed in a California federal court, the amended complaint names Microsoft, Reid Hoffman, and Dee Templeton as defendants. Additional plaintiffs, including Shivon Zilis, a Neuralink executive and ally of Musk, have also joined the case.

Musk, a co-founder of OpenAI, accuses the organisation of exploiting Microsoft’s infrastructure in what his lawyers describe as a “de facto merger”. He claims OpenAI has benefited from favourable treatment by Microsoft, disadvantaging competitors such as xAI, Musk’s AI venture. The lawsuit also raises concerns over alleged antitrust violations involving OpenAI board members and their connections to Microsoft.

The filing alleges Reid Hoffman and Dee Templeton facilitated agreements between OpenAI and Microsoft that violated antitrust laws. It further details how Hoffman’s dual roles at Microsoft and OpenAI may have allowed access to sensitive information. Zilis, a former OpenAI board member, expressed similar concerns internally but was reportedly ignored.

Musk’s lawyers argue that OpenAI’s transition to a profit-driven model undermines its foundational principles of transparency and safety. The complaint references incidents such as a 2018 cryptocurrency proposal that Musk vetoed, citing potential reputational harm. OpenAI has dismissed the lawsuit as baseless and characterised it as a publicity stunt.

EU hits Meta with $800M antitrust fine

Meta, the parent company of Facebook, has been fined nearly €800 million by the European Union for anti-competitive practices related to its Marketplace feature. The European Commission accused the tech giant of abusing its dominant position by tying Marketplace to Facebook’s social network, forcing users to encounter the service and disadvantaging competitors.

This marks the first time the EU has penalised Meta for breaching competition laws, though the company has faced previous fines for privacy violations. The investigation found that Meta unfairly used data from competitors advertising on Facebook and Instagram to benefit its own Marketplace, giving it an edge that rivals couldn’t match.

Meta rejected the claims, arguing that the decision lacks evidence of harm to competition or consumers. While the company pledged to comply with the EU’s order to cease the conduct, it plans to appeal the ruling. The case highlights ongoing EU scrutiny of Big Tech, with Meta facing additional investigations on issues like privacy, child safety, and election integrity.

Google launches AI scam detector for Pixel phones

Google has started rolling out its AI-powered Scam Detection feature for the Pixel Phone app, initially available only in the beta version for US users. First announced during Google I/O 2024, the feature uses onboard AI to help users identify potential scam calls. Currently, the update is accessible to Pixel 6 and newer models, with plans to expand to other Android devices in the future.

Scam Detection analyses the audio from incoming calls directly on the device, issuing alerts if suspicious activity is detected. For example, if a caller claims to be from a bank and pressures the recipient to transfer funds urgently, the app provides visual and audio warnings. The processing occurs locally on the phone, utilising Gemini Nano, Google’s on-device AI model, on the Pixel 9, or similar on-device machine learning models on earlier Pixel versions, ensuring no data is sent to the cloud.
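Google has not published the feature’s internals, so the following Kotlin sketch is purely illustrative of the flow described above: audio from the call is scored locally by an on-device model, nothing leaves the phone, and an alert fires once suspicion crosses a threshold. All names here (ScamClassifier, CallMonitor) are hypothetical stand-ins, not Google’s actual API.

```kotlin
// Hypothetical on-device model: returns a scam likelihood in [0.0, 1.0].
// A real system would score audio features; this stand-in scores transcribed text.
fun interface ScamClassifier {
    fun score(transcriptChunk: String): Double
}

// Feeds chunks of the live call to the local classifier; never uploads anything.
class CallMonitor(
    private val classifier: ScamClassifier,
    private val threshold: Double = 0.8,
) {
    // Returns true when the chunk should trigger the visual/audio warning.
    fun onChunk(chunk: String): Boolean = classifier.score(chunk) >= threshold
}

fun main() {
    // Toy "model": flags pressure phrases a trained classifier might learn
    // (urgent transfers, bank impersonation). Purely for demonstration.
    val keywordModel = ScamClassifier { chunk ->
        val redFlags = listOf("transfer funds", "act now", "verify your account")
        if (redFlags.any { it in chunk.lowercase() }) 0.95 else 0.1
    }

    val monitor = CallMonitor(keywordModel)
    val chunks = listOf(
        "Hello, this is your bank calling.",
        "You must transfer funds immediately or your account will be closed.",
    )
    for (chunk in chunks) {
        if (monitor.onChunk(chunk)) {
            println("Likely scam detected: \"$chunk\"")
        }
    }
}
```

The key design point the sketch captures is that classification happens entirely in-process on the handset, which is why no call audio needs to reach a server.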

This feature is part of Google’s ongoing efforts to tackle digital fraud, as the rise of generative AI has made scam calls more sophisticated. It joins the suite of security tools in the Pixel Phone app, including Call Screen, which uses a bot to screen calls before involving the user. Google’s on-device approach aims to keep users’ information secure while enhancing their safety.

Currently, Scam Detection requires manual activation through the app’s settings, as it isn’t enabled by default. Google is seeking feedback from early adopters to refine the feature further before a wider release to other Android devices.

Guardian pulls out of X amid content concerns

The Guardian has announced its departure from X, citing concerns over harmful content, such as racist and conspiracy-based posts. The decision marks a significant retreat for one of the UK’s prominent news outlets from the social media platform, which Elon Musk acquired in 2022. According to an editorial, the Guardian stated that the downsides of remaining on X now outweigh any potential benefits.

With over 10.7 million followers, the Guardian’s exit reflects rising concerns about X’s moderation policies. Critics argue that Musk’s relaxed approach has fostered an environment that tolerates misinformation and hate speech. Musk responded to the Guardian’s decision by dismissing the publication as “irrelevant” on X.

The Guardian’s move comes as other high-profile users, including former CNN anchor Don Lemon, also announce plans to leave X. Lemon expressed disappointment in the platform, saying it no longer supports meaningful debate. The UK has seen an increase in concerns about X’s impact, with British police, charities, and public health organisations also reconsidering their use of the platform.

The British government, however, still maintains a presence on X, though it refrains from paid promotions. Instead, it directs advertising efforts towards platforms like Instagram and Facebook. Observers note that the Guardian’s exit may prompt other media outlets to evaluate their stance on social media engagement.

AI threats to democracy spark concern in new report

A report by the Alan Turing Institute warns that AI has fuelled harmful narratives and spread disinformation during a major year for elections. Conducted by the Institute’s Centre for Emerging Technology and Security (CETaS), the study explores how generative AI tools, including deepfake technology and bot farms, have been used to amplify conspiracy theories and sway public opinion. While no concrete evidence links AI directly to changes in election outcomes, the study points to growing concerns over AI’s influence on voter trust.

Researchers observed AI-driven bot farms that mimicked genuine voters and used fake celebrity endorsements to spread conspiracies during key elections. These tactics, they argue, have eroded trust in democratic institutions and heightened public fear of AI’s potential misuse. Lead author Sam Stockwell noted that while evidence remains limited on AI changing electoral results, the urgent need for transparency and better access to social media data is clear.

The Institute has outlined steps to counteract AI’s potential threats to democracy, suggesting stricter deterrents against disinformation, enhanced detection of deepfake content, improved media guidance, and stronger societal defences against misinformation. These recommendations aim to create a safer information environment as AI technology continues to advance.

In response to AI’s growing presence, major AI companies, including those behind ChatGPT and Meta AI, have tightened security to prevent misuse. However, some startups, like Haiper, still lag behind, with fewer safeguards in place, leading to concerns over potentially harmful AI content reaching the public.

Meta to give European users more control over personalised ads

Meta Platforms announced it will soon give Instagram and Facebook users in Europe the option to receive less personalised ads. The decision comes in response to pressure from EU regulators and aims to address concerns about data privacy and targeted advertising. Instead of highly tailored ads, users will be shown adverts based on general factors like age, gender, and location, as well as the content they view in a given session.

The move aligns with the European Union’s push to regulate major tech companies, supported by legislation like the Digital Markets Act (DMA), which took effect earlier this year to promote fair competition and enhance user privacy. Additionally, Meta will offer a 40% price reduction on its ad-free subscriptions for European customers.

The changes follow a recent ruling by Europe’s highest court, which supported privacy activist Max Schrems and ruled that Meta must limit the use of personal data from Facebook for advertising purposes. Meanwhile, the European Union is set to fine Apple under these new antitrust rules, marking a significant step in the enforcement of stricter regulations for Big Tech.

EU Commissioner calls for tougher 5G security measures

The incoming European Commissioner for Tech Sovereignty, Security, and Democracy, Henna Virkkunen, has expressed dissatisfaction with the limited action taken by EU member states to exclude high-risk telecom suppliers, such as China’s Huawei and ZTE, from critical infrastructure. During her confirmation hearing in the European Parliament, Virkkunen noted that although the European Commission adopted 5G security measures in 2020, fewer than half of the EU member states have implemented restrictions on these suppliers. She said the issue will be addressed in next year’s planned revision of the Cybersecurity Act and stressed the need for more serious action from national governments.

Virkkunen also pointed out that while the EU had adopted the 5G Cybersecurity Toolbox to protect telecom networks, only 11 of the 27 member states have fully implemented measures, including bans and restrictions on high-risk vendors. In addition to her efforts to strengthen cybersecurity, Virkkunen plans to propose a Digital Networks Act in 2025 to overhaul telecom regulations and boost investment and connectivity. On the topic of US Big Tech compliance with EU rules, she reaffirmed the importance of cooperation but emphasised that all companies must adhere to EU regulations, including those set out in the Digital Services Act.