WhatsApp wins support in EU fine appeal

WhatsApp has gained support from an adviser to the European Court of Justice in its fight against a higher fine imposed by the EU privacy watchdog.

Ireland’s Data Protection Commission (DPC) fined WhatsApp 225 million euros ($242.2 million) in 2021 for privacy breaches.

The fine was increased after the European Data Protection Board (EDPB) intervened.

The EU’s General Court had rejected WhatsApp’s challenge, ruling that the company lacked legal standing. WhatsApp then appealed to the Court of Justice of the European Union (CJEU).

Advocate General Tamara Ćapeta disagreed with that ruling, recommending that the case be referred back to the General Court for further review.

The CJEU usually follows the recommendations of its advocates general, and a final ruling is expected soon; the outcome could have significant implications for the fine imposed on WhatsApp.

Trump weighs tariff cuts to secure TikTok deal

US President Donald Trump has indicated he is willing to reduce tariffs on China as part of a deal with ByteDance, TikTok’s Chinese parent company, to sell the popular short-video app.

ByteDance faces an April 5 deadline to divest TikTok’s US operations or risk a nationwide ban over national security concerns.

The law mandating the sale stems from fears in Washington that Beijing could exploit the app for influence operations and data collection on American users.

Trump suggested he may extend the deadline if negotiations require more time and acknowledged China’s role in the deal’s approval. Speaking to reporters, he hinted that tariff reductions could be used as leverage to finalise an agreement.

China’s commerce ministry responded by reaffirming its stance on trade discussions, stating that engagement with Washington should be based on mutual respect and benefit.

The White House has taken an active role in brokering a potential sale, with discussions centring on major non-Chinese investors increasing their stakes to acquire TikTok’s US operations. Vice President JD Vance has expressed confidence that a framework for the deal could be reached by the April deadline.

Free speech advocates, meanwhile, continue to challenge the law, arguing that banning TikTok could violate the First Amendment rights of American users.

Trump dismisses Signal leak, supports Waltz

US President Donald Trump on Tuesday downplayed the incident in which sensitive military plans for a strike against Yemen’s Houthis were mistakenly shared in a group chat that included a journalist. Trump referred to it as ‘the only glitch in two months’ and insisted that it was ‘not a serious’ issue.

The development, which appeared to catch Trump off guard when reporters first questioned him, has sparked criticism from Democratic lawmakers, who accuse the administration of mishandling sensitive information.

The lapse occurred when US National Security Adviser Mike Waltz unintentionally included Jeffrey Goldberg, editor-in-chief of The Atlantic, in a group chat with 18 high-ranking officials discussing military strike plans.

Waltz admitted to the mistake and accepted full responsibility, stating that an aide had mistakenly added Goldberg’s contact to the conversation.

The incident, which took place over the Signal app, has raised concerns because Signal is a publicly available messaging app, an unusual channel for discussing such sensitive matters.

While Trump continued to express support for Waltz, Democratic critics, including former Secretary of State Hillary Clinton, have voiced strong disapproval.

Clinton, commenting on the breach, highlighted the irony of the situation, given Trump’s past criticism of her own use of a private email server for sensitive material.

X’s Türkiye tangle: between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye in the past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, a political figure whose detention has provoked nationwide protests. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight. 

News reports indicate that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet, in a twist, X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts’, vowing to defend free speech.

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities flagged 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? Partial compliance: it appears to have suspended many of the flagged accounts.

This is not the first case in which Turkish authorities have required platforms to take action. During the 2013 Gezi Park protests, for instance, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions like Article 299 of the Penal Code (insulting the president) to fine platforms that do not align with government content policy. Freedom House’s 2024 report labels the country’s internet freedom as ‘not free’, citing a history of throttling dissent online. Yet X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

Either way, one theory whispered across X posts is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in Türkiye in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it reflects the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in law and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into cultural fabrics that prize collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval, including the coups of 1960 and 1980. Protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked from under their feet, reinforcing an infamous sociocultural norm: speak too loudly and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting part of the government’s demands, while others decry its compliance as a betrayal. The duality suggests that digital platforms aren’t neutral arbiters of free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, while its 86% rate in Türkiye suggests a higher deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives and comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the alleged refusal of government requests draws a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X is suing Modi’s government over content-removal orders in the country’s latest censorship fight. In the US, it fights court battles to protect user speech. In Türkiye, it bows, partly, to avoid exile. Each case underscores a sociocultural truth: content policy isn’t set in stone; it’s a continuous legal dispute between big tech, national power, and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost it some 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissent and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand and a gag in the other, while protesters shout into the fray.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles regarding sourcing a large volume of high-quality text required for AI training. The company evaluated legal licensing for acquiring books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in publishing, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While these platforms provide wider access, they threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

Does Section 230 of the US Communications Decency Act protect users or tech platforms?

Section 230 of the US Communications Decency Act is typically seen as protecting tech platforms from liability for content their users provide. In a recent article, the Electronic Frontier Foundation argues that Section 230 also protects users’ ability to participate in digital life.

The piece argues that repealing or altering Section 230 could inadvertently strengthen the position of big tech firms by removing the financial burden of litigation that smaller companies and startups cannot bear. Without these protections, smaller services might crumble under expensive legal challenges, stifling innovation and reducing competition in the digital landscape.

Such a scenario would leave big tech with even greater market dominance, which opponents of Section 230 seem to overlook. Additionally, the article addresses the misconception that eliminating Section 230 would enhance content moderation.

It clarifies that the law enables platforms to implement and enforce their standards without fear of increased liability, encouraging responsible moderation. EFF’s article argues that by allowing users and platforms to self-regulate, Section 230 prevents the US government from overreaching into defining acceptable speech, upholding a cornerstone of democratic values.

OpenAI unveils new image generator in ChatGPT

OpenAI has rolled out an image generator feature within ChatGPT, enabling users to create realistic images with improved accuracy. The new feature, available to all Plus, Pro, Team, and Free users, is powered by GPT-4o, which OpenAI says produces images with fewer distortions and more accurate in-image text.

OpenAI shared a sample image of a boarding pass, showcasing the advanced capabilities of the new tool.

Previously, image generation was available through DALL-E, but its results often contained errors and were easily identifiable as AI-generated. Now integrated into ChatGPT, the new tool allows users to describe images with specific details such as colours, aspect ratios, and transparent backgrounds.
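
The feature described above lives inside ChatGPT’s interface, but OpenAI exposes comparable image generation through its public API. The following is a minimal illustrative sketch, not drawn from the article: it assumes the official ‘openai’ Python SDK (v1.x), an OPENAI_API_KEY environment variable, and the ‘dall-e-3’ model identifier; the models available to a given account may differ.

```python
# Minimal sketch: requesting an image from OpenAI's API.
# Assumptions (not from the article): official `openai` SDK v1.x,
# OPENAI_API_KEY set in the environment, 'dall-e-3' model access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A realistic airline boarding pass with crisp, legible text "
        "and a muted blue colour scheme"
    ),
    size="1024x1024",  # aspect ratio is chosen via fixed size presets
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```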

The update aims to enhance creative freedom while maintaining a higher standard of image quality.

CEO Sam Altman praised the feature as a ‘new high-water mark’ for creative control, although he acknowledged the potential for some users to create offensive content.

OpenAI plans to monitor how users interact with this tool and adjust as needed, especially as the technology moves closer to artificial general intelligence (AGI).

Instagram users react to Meta’s new AI experiment

Meta has come under fire once again, this time over a new AI experiment on Instagram that suggests comments for users. Some users accused the company of using AI to inflate engagement metrics, potentially misleading advertisers and diminishing authentic user interaction.

The feature, spotted by test users, involves a pencil icon next to the comment bar on Instagram posts. Tapping it generates suggested replies based on the image’s content.

Meta has confirmed the feature is in testing but did not reveal plans for a broader launch. The company stated that it is exploring ways to incorporate Meta AI across different parts of its apps, including feeds, comments, groups, and search.

Public reaction has been largely negative, with concerns that AI-generated comments could flood the platform with inauthentic conversations. Social media users voiced fears of fake interactions replacing genuine ones, and some accused Meta of deceiving advertisers through inflated statistics.

Comparisons to dystopian scenarios were common, as users questioned the future of online social spaces.

This isn’t the first time Meta has faced backlash for its AI ventures. Previous attempts included AI personas modelled on celebrities and diverse identities, which were criticised for being disingenuous and engineered by largely homogenous development teams.

The future of AI-generated comments on Instagram remains uncertain as scrutiny continues to mount.

Meta agrees to halt targeted ads in landmark UK privacy case

Meta, the owner of Facebook and Instagram, has agreed to stop targeting a UK citizen with personalised adverts as part of a settlement in a landmark privacy case.

The case, which avoided a high court trial, was brought in 2022 by human rights campaigner Tanya O’Carroll, who claimed Meta had violated UK data laws by processing her personal data for targeted advertising without her consent.

O’Carroll’s case received support from the UK’s data watchdog, the Information Commissioner’s Office (ICO), which stated that users have the right to opt out of targeted ads.

The settlement has been hailed as a victory for O’Carroll, with potential implications for millions of social media users in the UK. Meta, however, disputed the claims and said it was instead considering introducing a subscription model in the UK for users who want an advert-free version of its platforms.

The ICO’s stance in favour of privacy rights could prompt similar lawsuits in the future, as users are increasingly demanding control over how their data is used online.

O’Carroll argued that the case demonstrated the growing desire for more control over surveillance advertising and said that the ICO’s support could encourage more people to object to targeted ads.

Meta, which generates most of its revenue from advertising, emphasised that it took its privacy obligations seriously and was exploring the option of a paid, ad-free service for UK users.

Apple plans to add cameras to future Apple Watch

Apple is reportedly planning to introduce cameras to its Apple Watch lineup within the next two years, integrating advanced AI-powered features like Visual Intelligence.

According to Bloomberg’s Mark Gurman, the standard Apple Watch Series will have a camera embedded within the display, while the Apple Watch Ultra will feature one on the side near the digital crown.

These cameras will allow the smartwatch to observe its surroundings and use AI to provide real-time, useful information to users.

Apple is also exploring similar camera technology for future AirPods, aiming to enhance their functionality with AI-driven capabilities.

The concept builds on the Visual Intelligence feature introduced with the iPhone 16, which allows users to extract details from flyers, identify locations, and more using the phone’s camera.

While the current system relies on external AI models, Apple is developing in-house AI technology that is expected to power these features by 2027, when the camera-equipped Apple Watch and AirPods are likely to be released.

The move is part of Apple’s broader push into AI, led by Mike Rockwell, who previously spearheaded the Vision Pro project.

Rockwell is now overseeing the upgrade of Siri’s language model, which has faced delays, and contributing to visionOS, the operating system expected to support AI-enhanced AR glasses in the future. Apple’s increasing focus on AI suggests a shift towards more intelligent, context-aware wearable devices.

For more information on these topics, visit diplomacy.edu.