EU plans new law to tackle online consumer manipulation

The European Commission is preparing to introduce the Digital Fairness Act, a new law that aims to strengthen consumer protection online without adding to the regulatory burden on businesses.

Justice Commissioner Michael McGrath described the upcoming legislation as both pro-consumer and pro-business during a speech at the European Retail Innovation Summit, seeking to calm industry concerns about further EU regulation following the Digital Services Act and the Digital Markets Act.

Designed to tackle deceptive practices in the digital space, the law will address issues such as manipulative design tricks known as ‘dark patterns’, influencer marketing, and personalised pricing based on user profiling.

It will also target concerns around addictive service design and virtual currencies in video games—areas where current EU consumer rules fall short. The legislation will be based on last year’s Digital Fairness Fitness Check, which highlighted regulatory gaps in the online marketplace.

McGrath acknowledged the cost of complying with EU-wide consumer protection measures, which can run into millions for businesses.

However, he stressed that the new act would provide legal clarity and ease administrative pressure, particularly for smaller companies, instead of complicating compliance requirements further.

A public consultation will begin in the coming weeks, ahead of a formal legislative proposal expected by mid-2026.

Maria-Myrto Kanellopoulou, head of the Commission’s consumer law unit, promised a thoughtful approach, saying the process would be both careful and thorough to ensure the right balance is struck.

For more information on these topics, visit diplomacy.edu.

EU refuses to soften tech laws for Trump trade deal

The European Union has firmly ruled out dismantling its strict digital regulations in a bid to secure a trade deal with Donald Trump. Henna Virkkunen, the EU’s top official for digital policy, said the bloc remained fully committed to its digital rulebook instead of relaxing its standards to satisfy American demands.

While she welcomed a temporary pause in US tariffs, she made clear that the EU’s regulations were designed to ensure fairness and safety for all companies, regardless of origin, and were not intended as a direct attack on US tech giants.

Tensions have mounted in recent weeks, with Trump officials accusing the EU of unfairly targeting American firms through regulatory means. Executives like Mark Zuckerberg have criticised the EU’s approach, calling it a form of censorship, while the US has continued imposing tariffs on European goods.

Virkkunen defended the tougher obligations placed on large firms like Meta, Apple and Alphabet, explaining that greater influence came with greater responsibility.

She also noted that enforcement actions under the Digital Markets Act and Digital Services Act aim to ensure compliance instead of simply imposing large fines.

Although France has pushed for stronger retaliation, the European Commission has held back from launching direct countermeasures against US tech firms, instead preparing a range of options in case talks fail.

Virkkunen avoided speculation on such moves, saying the EU preferred cooperation to conflict. At the same time, she is advancing a broader tech strategy, including plans for five AI gigafactories, while also considering adjustments to the EU’s AI Act to better support small businesses and innovation.

Acknowledging creative industries’ concerns over generative AI, Virkkunen said new measures were needed to ensure fair compensation for copyrighted material used in AI training instead of leaving European creators unprotected.

The Commission is now exploring licensing models that could strike a balance between enabling innovation and safeguarding rights, reflecting the bloc’s intent to lead in tech policy without sacrificing democratic values or artistic contributions.

TikTok affair: China pushes back against Trump over deal after 54% tariff rise

The fate of TikTok hangs in the balance as China and the US trade moves over a potential deal to keep the app alive for its 170 million American users. 

On 9 April 2025, China’s commerce ministry declared that any sale of TikTok must pass its government’s strict review, throwing a wrench into negotiations just as President Donald Trump hinted that a deal remains within reach.

China’s stance is clear: no deal gets the green light without approval. 

The ministry stressed that TikTok’s sales must comply with Chinese laws, particularly those governing technology exports, a nod to a 2020 regulation that gives Beijing veto power over the app’s algorithm, the secret ingredient behind its viral success. 

The disagreement comes after Trump’s recent tariff hikes, which slapped a 54% duty on Chinese goods, prompting Beijing to push back hard. 

China had already signalled it wouldn’t budge on the deal following Trump’s tariff announcement, a stance suggesting Beijing attaches little weight to TikTok in the broader trade war.

Meanwhile, Trump, speaking on 9 April 2025, kept hope alive, insisting that a TikTok deal is ‘still on the table.’ He extended the deadline for ByteDance, TikTok’s Chinese parent, to find a non-Chinese buyer by 75 days, pushing the cutoff to mid-June after a near-miss on 5 April.

The deal, which would spin off TikTok’s US operations into a new entity majority-owned by American investors, had reportedly been close to finalisation before China’s objections stalled it.

Investors, too, are on edge, with the US entity’s future clouded by geopolitical sparring. 

Trump’s optimism, paired with his earlier willingness to ease tariffs, shows he’s playing a long game, balancing national security fears with a desire to keep the app functional for its massive US audience.

Washington has long worried that TikTok’s Chinese ownership makes it a conduit for Beijing to spy on Americans or sway public opinion, a concern that led to a 2024 law demanding ByteDance divest the app or face a ban.

That law briefly shuttered TikTok in January 2025, only for Trump to step in with a reprieve. Now, with ByteDance poised to hold a minority stake in a US-based TikTok, the deal’s success hinges on China’s nod, a nod that looks increasingly elusive as trade tensions simmer. 

If China blocks the deal, it could set a precedent for other nations to tighten their grip on digital exports, reshaping how governments approach cyberspace and posing a final question: will the internet as we know it remain a globally unified space, or will it fragment into national spheres with new monopolies?

AI feud intensifies as OpenAI sues Elon Musk

OpenAI has filed a countersuit against Elon Musk, accusing the billionaire entrepreneur of a sustained campaign of harassment intended to damage the company and regain control over its AI developments.

The legal filing comes in response to Musk’s lawsuit earlier this year, in which he claimed OpenAI had strayed from its founding mission of developing AI for the benefit of humanity.

In its countersuit, OpenAI urged a federal court to block Musk from taking further ‘unlawful and unfair actions’ and hold him accountable for the alleged damage already inflicted.

The company cited press attacks, legal pressure, and social media posts to Musk’s 200 million followers as tactics aimed at undermining its operations and reputation.

It also described Musk’s demands for corporate records and attempted acquisition efforts as part of a broader scheme to derail OpenAI’s progress.

The legal conflict highlights the growing rivalry between OpenAI and xAI, the AI firm Musk launched in 2023.

OpenAI maintains that Musk’s actions are motivated by self-interest and a desire to slow down a competing organisation. A jury trial has been scheduled for spring 2026 to resolve the escalating dispute.

Former Facebook executive says Meta misled over China

Former Facebook executive Sarah Wynn-Williams has accused Meta of compromising US national security to grow its business in China.

Testifying before the Senate Judiciary Committee, Wynn-Williams alleged that company executives misled employees, lawmakers, and the public about their dealings with the Chinese Communist Party.

Wynn-Williams claimed Meta aimed to gain favour in Beijing while secretly pursuing an $18 billion venture there.

In her remarks, Wynn-Williams said Meta removed the Facebook account of Chinese dissident Guo Wengui under pressure from Beijing. While the company maintains the removal was due to violations of its policies, she framed it as part of a broader pattern of submission to Chinese demands.

She also accused Meta of ignoring security warnings linked to the proposed Pacific Light Cable Network, a project that could have allowed China access to United States user data. According to her, the plans were only halted after lawmakers intervened.

Meta has denied the claims, calling her testimony false and out of touch with reality. A spokesperson noted that the company does not operate in China and that Mark Zuckerberg’s interest in the market had long been public.

The allegations arrive days before Meta’s major antitrust trial, which could result in the breakup of its ownership of Instagram and WhatsApp.

Apple challenges UK government over encrypted iCloud access order

A British court has confirmed that Apple is engaged in legal proceedings against the UK government concerning a statutory notice linked to iCloud account encryption. The Investigatory Powers Tribunal (IPT), which handles cases involving national security and surveillance, disclosed limited information about the case, lifting previous restrictions on its existence.

The dispute centres on a government-issued Technical Capability Notice (TCN), which, according to reports, required Apple to provide access to encrypted iCloud data for users in the UK. Apple subsequently removed the option for end-to-end encryption on iCloud accounts in the region earlier this year. While the company has not officially confirmed the connection, it has consistently stated it does not create backdoors or master keys for its products.

The government’s position has been to neither confirm nor deny the existence of individual notices. However, in a rare public statement, a government spokesperson clarified that TCNs do not grant direct access to data and must be used in conjunction with appropriate warrants and authorisations. The spokesperson also stated that the notices are designed to support existing investigatory powers, not expand them.

The IPT allowed the basic facts of the case to be released following submissions from media outlets, civil society organisations, and members of the United States Congress. These parties argued that public interest considerations justified disclosure of the case’s existence. The tribunal concluded that confirming the identities of the parties and the general subject matter would not compromise national security or the public interest.

Previous public statements by US officials, including the former President and the current Director of National Intelligence, have acknowledged concerns surrounding the TCN process and its implications for international technology companies. In particular, questions have been raised regarding transparency and oversight of such powers.

Legal academics and members of the intelligence community have also commented on the broader implications of government access to encrypted platforms, with some suggesting that increased openness may be necessary to maintain public trust.

The case remains ongoing. Future proceedings will be determined once both parties have reviewed a private judgment issued by the court. The IPT is expected to issue a procedural timetable following input from both Apple and the UK Home Secretary.

Man uses AI avatar in New York court

A 74-year-old man representing himself in a New York State appeal has apologised after using an AI-generated avatar during court proceedings.

Jerome Dewald submitted a video featuring a youthful digital figure to deliver part of his legal argument, prompting confusion and criticism from the judges. One justice described the move as misleading, expressing frustration over the lack of prior disclosure.

Dewald later explained he intended to ease his courtroom anxiety and present his case more clearly, not to deceive.

In a letter to the judges, he acknowledged that transparency should have taken priority and accepted responsibility for the confusion caused. His case, a contract dispute with a former employer, remains under review by the appellate court.

The incident has reignited debate over the role of AI in legal settings. Recent years have seen several high-profile cases where AI-generated content introduced errors or false information, highlighting the risks of using generative technology without proper oversight.

Legal experts say such incidents are becoming increasingly common as AI tools become more accessible.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach out via Messenger or see their stories, while tagging and mentions are also limited. 

These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also introducing reminders to limit screen time, prompting teens to log off after one hour, and enabling an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety continues to rise as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

Trump administration pushes for pro-AI shift in US federal agencies

The White House announced on Monday a shift in how US federal agencies will approach AI, prioritising innovation over the stricter regulatory framework previously established under President Biden. 

A new memorandum from the Office of Management and Budget instructs agencies to appoint chief AI officers and craft policies to expand the use of AI technologies across government operations.

This pivot includes repealing two Biden-era directives emphasising transparency and safeguards against AI misuse. 

The earlier rules required federal agencies to implement protective measures for civil rights and limit unchecked acquisition of AI tools. 

These protections have now been replaced with a call for a more ‘forward-leaning and pro-innovation’ stance, removing what the current administration views as excessive bureaucratic constraints.

Federal agencies are now expected to develop AI strategies within six months. These plans must identify barriers to responsible AI implementation and improve how the technology is used enterprise-wide. 

The administration also encouraged the development of specific policies for generative AI, emphasising maximising the use of American-made solutions and enhancing interoperability between systems.

The policy change is part of President Trump’s broader rollback of previous AI governance, including his earlier revocation of a 2023 executive order signed by Biden that required developers to disclose sensitive training data. 

The new framework aims to streamline AI procurement processes and eliminate what the administration labels unnecessary reporting burdens while still maintaining basic privacy protections.

Federal agencies have already begun integrating AI into their operations. The Federal Aviation Administration, for example, has applied machine learning to analyse safety reports and identify emerging aviation risks. 

Under the new guidelines, such initiatives are expected to accelerate, signalling a broader federal embrace of AI across sectors.

Russia fines Telegram over extremist content

A Moscow court has fined the messaging platform Telegram 7 million roubles (approximately $80,000) for failing to remove content allegedly promoting terrorist acts and inciting anti-government protests, according to the Russian state news agency TASS.

The court ruled that Telegram did not comply with legal obligations to take down materials deemed extremist, including calls to sabotage railway systems in support of Ukrainian forces and to overthrow the Russian government.

The judgement cited specific Telegram channels accused of distributing such content. Authorities argue that these channels played a role in encouraging public unrest and potentially supporting hostile actions against the Russian state.

The decision adds to the long-standing tension between Russia’s media watchdogs and Telegram, which remains one of the most widely used messaging platforms across Russia and neighbouring countries.

Telegram has not issued a statement in response to the fine, and it is unclear whether the company plans to challenge the court’s ruling. 

The platform was founded by Russian-born entrepreneur Pavel Durov and is currently headquartered in Dubai, boasting close to a billion users globally. 

Telegram’s decentralised nature and encrypted messaging features have made it popular among users seeking privacy, but it has also drawn criticism from governments citing national security concerns.

Durov himself returned to Dubai in March after months in France following his 2024 arrest linked to accusations that Telegram was used in connection with fraud, money laundering, and the circulation of illegal content.

Although he has denied any wrongdoing, the incident has further strained the company’s relationship with authorities in Russia.

This latest legal action reflects Russia’s ongoing crackdown on digital platforms accused of facilitating dissent or undermining state control.

With geopolitical tensions still high, especially surrounding the conflict in Ukraine, platforms like Telegram face increasing scrutiny and legal pressure in multiple jurisdictions.