Trump discusses TikTok sale with China

President Donald Trump said on Wednesday that he was in active discussions with China over the future of TikTok, as the US seeks to broker a sale of the popular app. Speaking to reporters aboard Air Force One, Trump confirmed that talks were ongoing, underscoring the US government’s desire to address national security concerns tied to the app’s ownership by the Chinese company ByteDance. The move comes amid growing scrutiny of TikTok’s data security practices and its potential links to the Chinese government.

The Trump administration has expressed concerns that TikTok could be used to collect sensitive data on US users, raising fears about national security risks. As a result, the US has been pushing for ByteDance to sell TikTok’s US operations to an American company, part of an effort to reduce any potential influence from the Chinese government over the app’s data and operations. The process has proved complex, however, with discussions involving multiple stakeholders, including potential buyers.

While the negotiations continue, the future of TikTok remains uncertain. If a sale is not agreed upon, the US has indicated that it could pursue further actions, including a potential ban of the app. As these talks unfold, the outcome could have significant implications for TikTok’s millions of American users and its business operations in the US, with both sides working to find a solution that addresses the security concerns while allowing the app to continue its success.

For more information on these topics, visit diplomacy.edu.

Two charged after pensioner loses over £100,000 in cryptocurrency fraud

Two men have been charged in connection with a cryptocurrency fraud that saw a 75-year-old man from Aberdeenshire lose more than £100,000. The case, reported to police in July, led to an extensive investigation by officers from the North East Division CID.

Following inquiries, officers travelled to Coventry and Mexborough on Tuesday, working alongside colleagues from West Midlands Police and South Yorkshire Police.

The coordinated operation resulted in the arrests of two men, aged 36 and 54, who have now been charged in relation to the fraud allegations.

Police have not yet disclosed details of how the scam was carried out, but cryptocurrency frauds often involve fake investment schemes, phishing scams, or fraudulent trading platforms that lure victims into handing over money with promises of high returns.

Many scams also exploit a lack of regulation in the digital currency sector, making it difficult for victims to recover lost funds.

Authorities have urged the public to remain vigilant and report any suspicious financial activity, particularly scams involving cryptocurrencies.

Lawyers warned about AI misuse in court filings

Warnings about AI misuse have intensified after lawyers from Morgan & Morgan faced potential sanctions for using fake case citations in a lawsuit against Walmart.

The firm’s urgent email to over 1,000 attorneys highlighted the dangers of relying on AI tools, which can fabricate legal precedents and jeopardise professional credibility. A lawyer in the Walmart case admitted to unintentionally including AI-generated errors in court filings.

Courts have seen a rise in similar incidents, with at least seven cases involving disciplinary actions against lawyers using false AI-generated information in recent years. Prominent examples include fines and mandatory training for lawyers in Texas and New York who cited fictitious cases in legal disputes.

Legal experts warn that while AI tools can speed up legal work, they require rigorous oversight to avoid costly mistakes.

Ethics rules demand that lawyers verify all case filings, regardless of AI involvement. Generative AI tools such as ChatGPT create risks by confidently producing fabricated material, a phenomenon sometimes referred to as ‘hallucination’. Experts point to a lack of AI literacy in the legal profession as the root cause, not the technology itself.

Advances in AI continue to reshape the legal landscape, with many firms adopting the technology for research and drafting. However, mistakes caused by unchecked AI use underscore the importance of understanding its limitations.

Acknowledging this issue, law schools and organisations are urging lawyers to approach AI cautiously to maintain professional standards.

EU delays AI liability directive due to stalled negotiations

The European Commission has removed the AI Liability Directive from its 2025 work programme due to stalled negotiations, though lawmakers in the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) have voted to continue working on the proposal. A spokesperson confirmed that IMCO coordinators will push to keep the directive on the political agenda, despite the Commission’s plans to withdraw it. The Legal Affairs Committee has yet to make a decision on the matter.

The AI Liability Directive, proposed in 2022 alongside the EU’s AI Act, aimed to address the potential risks AI systems pose to society. While some lawmakers, such as German MEP Axel Voss, criticised the Commission’s move as a ‘strategic mistake,’ others, like Andreas Schwab, called for more time to assess the impact of the AI Act before introducing separate liability rules.

The proposal’s withdrawal has sparked mixed reactions within the European Parliament. Some lawmakers, like Marc Angel and Kim van Sparrentak, emphasised the need for harmonised liability rules to ensure fairness and accountability, while others expressed concern that such rules might not be needed until the AI Act is fully operational. Consumer groups welcomed the proposed legislation, while tech industry representatives argued that liability issues were already addressed under the revamped Product Liability Directive.

Judge allows Musk’s DOGE to keep accessing government data

A US federal judge has denied a request to temporarily block Elon Musk’s Department of Government Efficiency (DOGE) from accessing data from seven federal agencies or making further workforce cuts. The lawsuit, brought by 14 Democratic attorneys general, argued that DOGE was overstepping its authority by reshaping agencies and obtaining vast amounts of government information. However, Judge Tanya Chutkan ruled that the plaintiffs failed to prove immediate harm, allowing DOGE to continue operations.

Despite this decision, the judge acknowledged serious constitutional concerns regarding Musk’s authority. She noted that Musk had not been nominated by US President Trump or confirmed by the Senate, raising potential violations of the Appointments Clause. In her ruling, Chutkan also criticised the Trump administration’s legal arguments, suggesting inconsistencies in its justification for DOGE’s powers.

While the restraining order was denied, the states can still pursue their case, potentially seeking a preliminary injunction to halt DOGE’s access to federal data. New Mexico Attorney General Raúl Torrez vowed to continue the legal fight, accusing Musk of destabilising government functions and acting without proper oversight. The battle over DOGE’s legitimacy is expected to intensify in the coming months.

US court urged to reconsider net neutrality ruling after push from public interest groups

Public interest groups have urged a US court to revisit its decision blocking the reinstatement of net neutrality rules. The appeal was submitted to the 6th Circuit Court of Appeals after a three-judge panel ruled that the Federal Communications Commission (FCC) lacked authority to enforce the rules.

These rules, first implemented in 2015 and later repealed under a different administration, aim to ensure equal access to the internet for all users.

Advocates, including Free Press and Public Knowledge, argue that the court’s ruling conflicts with a previous decision by another court. They emphasised the importance of protecting users from potential abuses by broadband providers, who might prioritise their own interests over fair access.

A representative for FCC Commissioner Brendan Carr, an opponent of net neutrality, has not yet responded to the appeal.

Net neutrality rules prevent internet providers from blocking or slowing content, or from giving preferential treatment to certain traffic. While state-level rules remain in place in regions like California, the court’s decision could halt federal efforts to oversee broadband regulation.

Earlier this year, the FCC had sought to reinstate these protections, but industry groups successfully argued for a temporary block.

Supporters of the rules include major tech companies, while telecom industry representatives view them as unnecessary and counterproductive. The ongoing legal battles could determine whether federal regulators will regain the ability to enforce open internet policies.

Google settles tax dispute in Italy for 326 million euros

Milan prosecutors have announced plans to drop a case against Google’s European division after the company agreed to settle a tax dispute by paying 326 million euros (£277 million). The settlement covers the period from 2015 to 2019, including penalties, sanctions, and interest.

The tax dispute stemmed from allegations that Google had failed to file and pay taxes on revenue generated in Italy, based on the digital infrastructure it operates within the country. This comes after the company settled a previous tax case with Italian authorities in 2017 by paying 306 million euros, a settlement that acknowledged Google’s permanent establishment in Italy.

In 2023, Italy had requested that Google pay 1 billion euros in unpaid taxes and penalties. However, with this latest settlement, the case against the tech giant appears to be resolved for now.

Google faces backlash from privacy advocates over new tracking rules

Google has introduced changes to its online tracking policies, allowing fingerprinting, a technique that collects data such as IP addresses and device information to help advertisers identify users. The new rules mark a shift in Google’s approach to online tracking.

Google states that these data signals are already widely used across the industry and that its goal is to balance privacy with the needs of businesses and advertisers. The company previously restricted fingerprinting for ad targeting but now argues that evolving internet usage—such as browsing from smart TVs and gaming consoles—has made conventional tracking methods, like cookies, less effective. The company also emphasises that users continue to have choices regarding personalised ads and that it encourages responsible data use across the industry.

Critics argue that fingerprinting is harder for users to control compared to cookies, as it does not rely on locally stored files but rather collects real-time data about a user’s device and network. Some privacy advocates believe this change marks a shift toward tracking methods that provide users with fewer options to opt out.
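The mechanics can be sketched in a few lines. The following is a rough illustration only, not Google’s or any advertiser’s actual method: it shows how hashing a handful of device and network signals (all values here are made up) can yield a stable identifier that reappears on every visit without anything being stored on the device, which is why it is harder to clear than a cookie.

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Derive a stable identifier from device/network signals.

    Illustrative only: real fingerprinting draws on many more signals
    (canvas rendering, installed fonts, audio stack) and server-side
    matching, but the principle is the same.
    """
    # Canonicalise the signals so key order never changes the result.
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two separate sessions from the same device and network produce the
# same identifier, even though no cookie was ever stored.
visit_a = fingerprint({
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (SmartTV; Tizen 6.0)",
    "screen": "3840x2160",
    "timezone": "Europe/London",
})
visit_b = fingerprint({
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (SmartTV; Tizen 6.0)",
    "screen": "3840x2160",
    "timezone": "Europe/London",
})
assert visit_a == visit_b
```

Because the identifier is recomputed from the environment on each visit, deleting local data does nothing; only changing the underlying signals (a different network, browser, or device) breaks the link.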

Martin Thomson, an engineer at Mozilla, noted that by allowing fingerprinting, Google has given itself—and the advertising industry it dominates—permission to use a form of tracking that people can’t do much to stop. Lena Cohen, staff technologist at the Electronic Frontier Foundation, expressed similar concerns, stating that fingerprinting could make user data more accessible to advertisers, data brokers, and law enforcement.

The UK’s Information Commissioner’s Office (ICO) has raised concerns over fingerprinting, stating that it could reduce users’ ability to control how their information is collected. In a December blog post, Stephen Almond, the ICO’s Executive Director of Regulatory Risk, described the change as irresponsible and said that advertisers and businesses using the technology will need to demonstrate compliance with privacy and data laws.

Google responded that it welcomes further discussions with regulators and highlighted that IP addresses have long been used across the industry for fraud prevention and security.

Europol chief warns trust in law enforcement at risk

Law enforcement agencies must ensure public understanding of the need for expanded investigative powers to effectively combat the increasing scale and complexity of cybercrime, Europol’s chief Catherine De Bolle stated at the Munich Cyber Security Conference.

De Bolle emphasised that cybercriminal activity is not only growing in volume but also evolving in sophistication, leveraging both traditional telecom infrastructure and advanced digital tools, including dark web marketplaces. In response, she underscored the necessity for law enforcement agencies to strengthen their technical capabilities. However, she noted that implementing large-scale investigative measures must be balanced with maintaining public confidence in state institutions.

Her remarks followed those of Sir Jeremy Fleming, former director of the UK’s cyber intelligence agency GCHQ, who spoke about the importance of maintaining public trust in intelligence operations.

De Bolle further stressed the need for stronger collaboration between government agencies, private sector entities, and international organisations to address cyber threats effectively. As cybercrime and state-sponsored cyber activities increasingly overlap, she advocated for a shift away from fragmented approaches, calling for ‘multilateral responses’ to improve collective cybersecurity readiness.

AI copyright case could set legal precedent

A US federal judge has ruled that Ross Intelligence infringed on Thomson Reuters’ copyright by using its legal research content to train an AI platform. The decision marks a significant moment in the ongoing debate over AI and intellectual property, as over 39 similar lawsuits progress through US courts.

Ross had argued that its use of Reuters’ Westlaw headnotes (summaries of legal decisions) was transformative, meaning it repurposed the material for a different function. However, the judge rejected this defence, ruling that Ross merely repackaged the content without adding significant new value. The company’s commercial intent also played a role in the ruling, as its AI system directly competed with Reuters’ legal research services.

The ruling could impact future AI copyright cases, particularly those involving generative AI models trained on publicly available content. While some believe it strengthens the case for content creators, others argue its scope is limited. Legal experts caution that further court decisions will be needed to define how copyright law applies to AI training in the long term.
