Lina Khan, a prominent advocate of strong antitrust enforcement, has announced her resignation as chair of the US Federal Trade Commission (FTC) in a memo to staff. Her departure, set to occur in the coming weeks, marks the end of a tenure that challenged numerous corporate mergers and pushed for greater accountability among powerful companies.
During her leadership, Khan spearheaded high-profile lawsuits against Amazon, launched investigations into Microsoft, and blocked major deals, including Kroger’s planned $25 billion acquisition of Albertsons. Her efforts often focused on protecting consumers and workers from potential harms posed by dominant corporations.
Khan, the youngest person to lead the FTC, first gained recognition in 2017 for her work criticising Amazon’s market practices. She argued that tech giants exploited outdated antitrust laws, allowing them to sidestep scrutiny. Her aggressive approach divided opinion, with courts striking down some of her policies, including a proposed ban on noncompete clauses.
Following Khan’s exit, the FTC faces a temporary deadlock with two Republican and two Democratic commissioners. Republican Andrew Ferguson has assumed the role of chair, and a Republican majority is expected once the Senate approves Mark Meador, a pro-enforcement nominee, to complete the five-member commission.
A new report from the European Court of Auditors (ECA) highlights progress in tackling unjustified geo-blocking in the EU but calls for stronger enforcement and expanded regulations. Geo-blocking, which restricts online access to goods and services based on nationality or location, was targeted by a 2018 regulation aimed at ensuring fairer treatment in the EU Single Market. However, the ECA found that inconsistent enforcement has left many consumers unprotected.
The report reveals significant disparities in penalties for non-compliance, ranging from fines as low as €26 in some countries to fines of up to €5 million, or even criminal liability, in others. These gaps, combined with limited awareness among consumers and traders about available support, have undermined the regulation’s effectiveness. Key exemptions for sectors like audiovisual services—such as streaming platforms and TV distribution—are also causing frustration, with calls to broaden the regulation’s scope during its 2025 review.
Ildikó Gáll-Pelcz, the ECA member responsible for the audit, warned that geo-blocking continues to restrict consumer choices and fuel dissatisfaction. In response, the European Commission has welcomed the findings, signalling potential reforms, including stricter enforcement mechanisms and measures to address challenges tied to copyright practices. The Commission has committed to factoring the report into its upcoming evaluation of the regulation.
The UK government is exploring new AI tools to streamline public services and assist ministers and civil servants. Among these is Parlex, a tool that predicts how MPs may react to proposed policies, offering insights into potential support or opposition based on MPs’ previous parliamentary contributions. Described as a ‘parliamentary vibe check,’ the tool helps policy teams craft strategies before formally proposing new measures.
Part of the AI suite Humphrey—named after the Yes Minister character—Parlex and other tools aim to modernise government operations. These include Minute, which transcribes ministerial meetings, and Lex, which analyses the impact of laws. Another tool, Redbox, automates submission processing, while Consult is projected to save £80 million annually by improving public consultation processes. The Department for Work and Pensions has also utilised AI to analyse handwritten correspondence, accelerating responses to vulnerable individuals.
The broader government strategy, unveiled by Prime Minister Keir Starmer, emphasises integrating AI into public services while balancing privacy concerns. Plans include sharing anonymised NHS data for AI research under stringent safeguards. Ministers believe these innovations could address economic challenges and boost the UK’s economy by up to £470 billion over the next decade. However, past missteps, such as erroneous fraud accusations stemming from flawed algorithms, highlight the need for careful implementation.
Major tech companies, including Meta’s Facebook, Elon Musk’s X, YouTube, and TikTok, have committed to tackling online hate speech through a revised code of conduct now linked to the European Union’s Digital Services Act (DSA). Announced Monday by the European Commission, the updated agreement also includes platforms like LinkedIn, Instagram, Snapchat, and Twitch, expanding the coalition originally formed in 2016. The move reinforces the EU’s stance against illegal hate speech, both online and offline, according to EU tech commissioner Henna Virkkunen.
Under the revised code, platforms must allow not-for-profit organisations or public entities to monitor how they handle hate speech reports and ensure at least 66% of flagged cases are reviewed within 24 hours. Companies have also pledged to use automated tools to detect and reduce hateful content while disclosing how recommendation algorithms influence the spread of such material.
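The 66%-within-24-hours requirement is, in effect, a simple compliance metric. As a purely illustrative sketch (the helper below is hypothetical, not part of the code of conduct), a platform could track it like this:

```python
from datetime import timedelta

# Hypothetical helper: share of flagged reports reviewed within 24 hours.
def within_24h_share(review_delays):
    limit = timedelta(hours=24)
    on_time = sum(1 for delay in review_delays if delay <= limit)
    return on_time / len(review_delays)

# Example: six flagged reports, four of them reviewed inside the window.
delays = [timedelta(hours=h) for h in (3, 12, 20, 30, 50, 6)]
print(f"{within_24h_share(delays):.0%}")  # prints "67%", above the 66% threshold
```

Regulators would of course look at real notice-handling logs; the point is only that the revised code turns a pledge into a number that can be measured and audited.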
Additionally, participating platforms will provide detailed, country-specific data on hate speech incidents categorised by factors like race, religion, gender identity, and sexual orientation. Compliance with these measures will play a critical role in regulators’ enforcement of the DSA, a cornerstone of the EU’s strategy to combat illegal and harmful content online.
According to a recent study, AI models have shown limitations in tackling high-level historical inquiries. Researchers tested three leading large language models (LLMs) — GPT-4, Llama, and Gemini — using a newly developed benchmark, Hist-LLM. The test, based on the Seshat Global History Databank, revealed disappointing results, with the best performer, GPT-4 Turbo, achieving only 46% accuracy, barely surpassing random guessing.
Researchers from Austria’s Complexity Science Hub presented the findings at the NeurIPS conference last month. Co-author Maria del Rio-Chanona highlighted that while LLMs excel at basic facts, they struggle with nuanced, PhD-level historical questions. Errors included incorrect claims about ancient Egypt’s military and armour development, often due to the models extrapolating from prominent but irrelevant data.
Biases in training data also emerged, with models underperforming on questions related to underrepresented regions like sub-Saharan Africa. Lead researcher Peter Turchin acknowledged these shortcomings but emphasised the potential of LLMs to support historians with future improvements.
Efforts are underway to refine the benchmark by incorporating more diverse data and crafting complex questions. Researchers remain optimistic about AI’s capacity to assist in historical research despite its current gaps.
Russian state-linked hackers, operating under the unit Star Blizzard, have launched a new phishing campaign targeting the WhatsApp accounts of government ministers and officials worldwide. According to Britain’s National Cyber Security Centre (NCSC), Star Blizzard, linked to Russia’s FSB spy agency, aims to undermine political trust in the UK and similar nations.
Victims receive an email impersonating a US government official, inviting them to join a WhatsApp group. The email contains a QR code that, when scanned, links the victim’s WhatsApp account to an attacker-controlled device or WhatsApp Web, granting the hacker access to sensitive messages. Microsoft confirmed that the tactic can be used to exfiltrate data but did not say whether any data was actually stolen.
The campaign has targeted individuals involved in diplomacy, defence, and Ukraine-related initiatives. This marks the latest attempt by Star Blizzard, which had previously targeted British MPs, universities, and journalists. Microsoft noted that while the campaign seemed to have wound down by November, the use of QR codes in phishing attacks, or ‘quishing,’ shows the hackers’ continued efforts to gain access to sensitive information.
WhatsApp, owned by Meta, emphasised that users should avoid scanning suspicious QR codes and should only link their accounts through official services. Experts also recommend verifying suspicious emails by contacting the sender directly through a known, trusted email address.
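The ‘link only through official services’ advice boils down to checking where a QR-decoded link actually points before acting on it. As a minimal, purely illustrative sketch (the allow-list and the example URLs below are assumptions, not official Meta guidance), such a check might look like:

```python
from urllib.parse import urlparse

# Illustrative allow-list of WhatsApp-operated hosts (assumed, not exhaustive).
TRUSTED_HOSTS = {"whatsapp.com", "www.whatsapp.com", "web.whatsapp.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only for HTTPS links pointing at a known WhatsApp host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(is_trusted_link("https://web.whatsapp.com"))           # True
print(is_trusted_link("https://wa-login.attacker.example"))  # False
```

The quishing campaign works precisely because a QR code hides its destination from the human eye; decoding the URL and inspecting the host is the manual equivalent of this check.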
The Pentagon is leveraging generative AI to accelerate critical defence operations, particularly the ‘kill chain’, the process of identifying, tracking, and neutralising threats. According to Dr Radha Plumb, the Pentagon’s Chief Digital and AI Officer, AI’s current role is limited to the planning and strategising phases, ensuring commanders can respond swiftly while maintaining human oversight over life-and-death decisions.
Major AI firms like OpenAI and Anthropic have softened their policies to collaborate with defence agencies, but only within strict ethical boundaries. These partnerships aim to balance innovation with responsibility, ensuring AI systems are not used to cause harm directly. Meta, Anthropic, and Cohere are among the tech firms working with defence contractors, providing tools that optimise operational planning without breaching ethical standards.
Dr Plumb emphasised that the Pentagon’s AI systems operate as part of human-machine collaboration, countering fears of fully autonomous weapons. Despite debates over AI’s role in defence, officials argue that working with the technology is vital to ensure its ethical application. Critics, however, continue to question the transparency and long-term implications of such alliances.
As AI becomes central to defence strategies, the Pentagon’s commitment to integrating ethical safeguards highlights the delicate balance between technological advancement and human control.
Bluesky has launched a vertical video feed, positioning itself as a competitor in the short-video space amidst uncertainty surrounding TikTok’s future in the US. This new feature is accessible via the Explore tab and allows users to scroll through trending videos by swiping up. For convenience, users can pin the feed to their home screen or add it to their list of custom feeds.
Acknowledging developers building TikTok alternatives, Bluesky highlighted emerging platforms such as ‘Tik.Blue’ and ‘Skylight.Social,’ which are currently in early development stages. These efforts align with Bluesky’s growth, as the platform has surpassed 28 million users.
Other platforms are also leveraging TikTok’s precarious situation. Elon Musk’s X recently introduced a vertical video feed, while Meta unveiled Edits, a video editing app to rival ByteDance’s CapCut. Bluesky’s latest move highlights a broader shift among social networks seeking to capture the short-video audience in the US and globally.
The Federal Trade Commission (FTC) has raised concerns about the competitive risks posed by collaborations between major technology companies and developers of generative AI tools. In a staff report issued Friday, the agency pointed to partnerships such as Microsoft’s investment in OpenAI and similar alliances involving Amazon, Google, and Anthropic as potentially harmful to market competition, according to TechCrunch.
FTC Chair Lina Khan warned that these collaborations could create barriers for smaller startups, limit access to crucial AI tools, and expose sensitive information. ‘These partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that undermines fair competition,’ Khan stated.
The report specifically highlights the role of cloud service providers like Microsoft, Amazon, and Google, which provide essential resources such as computing power and technical expertise to AI developers. These arrangements could restrict smaller firms’ access to those inputs, raise switching costs for businesses, and give cloud providers unique insights into sensitive data, potentially stifling competition.
Microsoft defended its partnership with OpenAI, emphasising its benefits to the industry. ‘This collaboration has enabled one of the most successful AI startups in the world and spurred unprecedented technology investment and innovation,’ said Rima Alaily, Microsoft’s deputy general counsel. The FTC report underscores the need to address the broader implications of big tech’s growing dominance in generative AI.
Mark Zuckerberg has defended Meta’s use of a dataset containing copyrighted e-books to train its Llama AI models. The statement emerged from a deposition linked to the ongoing Kadrey v. Meta Platforms lawsuit, which is one of many cases challenging the use of copyrighted content in AI training. Meta reportedly relied on the controversial dataset LibGen, despite internal concerns over potential legal risks.
LibGen, a platform known for providing unauthorised access to copyrighted works, has faced numerous lawsuits and shutdown orders. Newly unsealed court documents suggest that Zuckerberg approved using the dataset to develop Meta’s Llama models. Employees allegedly flagged the dataset as problematic, warning it might undermine the company’s standing with regulators. During questioning, Zuckerberg compared the situation to YouTube’s efforts to remove pirated content, arguing against blanket bans on datasets with copyrighted material.
Meta’s practices are under heightened scrutiny as legal battles pit AI companies against copyright holders. The deposition indicates that Meta considered balancing copyright concerns with practical AI development needs. However, the company faces mounting allegations that it disregarded ethical boundaries, sparking broader debates about fair use and intellectual property in AI training.