X’s Türkiye tangle, between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye in the past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, a prominent opposition figure. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight.

News reports indicate that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet there is a twist: X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts,’ vowing to defend free speech.

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities flagged 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? It reportedly complied in part, suspending many of the flagged accounts.

This is not the first time Turkish authorities have pressed platforms to act. During the 2013 Gezi Park protests, for instance, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions such as Article 299 of the Penal Code (insulting the president) as grounds for fining platforms that fail to align with government content policy. Freedom House’s 2024 report rates the country’s internet freedom as ‘not free,’ citing a history of throttling dissent online. Yet X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

One theory, whispered across X posts, is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it reflects the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in law and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into a cultural fabric that prizes collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval (think coups in 1960 and 1980). Consequently, protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked out from under their feet, reinforcing an infamous sociocultural norm: speak too loud and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting part of the government’s demands, while others decry its compliance as a betrayal. This duality suggests that digital platforms aren’t neutral arbiters of free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, but Türkiye’s 86% suggests a higher deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives and comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the alleged refusal of government requests draws a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X is suing Modi’s government over content removal orders in a fresh censorship fight. In the US, X fights court battles to protect user speech. In Türkiye, it bows (partly) to avoid exile. Each case underscores a sociocultural truth: content policy isn’t fixed; it’s continually contested between big tech, national power, and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost it 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissent and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand and a gag in the other, while protesters shout into the fray.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated legally licensing books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

For more information on these topics, visit diplomacy.edu.

Does Section 230 of the US Communications Decency Act protect users or tech platforms?

Typically, Section 230 of the US Communications Decency Act is seen as shielding tech platforms from liability for content provided by their users. In a recent article, the Electronic Frontier Foundation argues that Section 230 also protects users’ ability to participate in digital life.

The piece argues that repealing or altering Section 230 could inadvertently strengthen the position of big tech firms by removing the financial burden of litigation that smaller companies and startups cannot bear. Without these protections, smaller services might crumble under expensive legal challenges, stifling innovation and reducing competition in the digital landscape.

Such a scenario would leave big tech with even greater market dominance, which opponents of Section 230 seem to overlook. Additionally, the article addresses the misconception that eliminating Section 230 would enhance content moderation.

It clarifies that the law enables platforms to implement and enforce their standards without fear of increased liability, encouraging responsible moderation. EFF’s article argues that by allowing users and platforms to self-regulate, Section 230 prevents the US government from overreaching into defining acceptable speech, upholding a cornerstone of democratic values.

OpenAI unveils new image generator in ChatGPT

OpenAI has rolled out an image generator feature within ChatGPT, enabling users to create realistic images with improved accuracy. The new feature, available for all Plus, Pro, Team, and Free users, is powered by GPT-4o, which now offers distortion-free images and more accurate text generation.

OpenAI shared a sample image of a boarding pass, showcasing the advanced capabilities of the new tool.

Previously, image generation was available through DALL-E, but its results often contained errors and were easily identifiable as AI-generated. Now integrated into ChatGPT, the new tool allows users to describe images with specific details such as colours, aspect ratios, and transparent backgrounds.

The update aims to enhance creative freedom while maintaining a higher standard of image quality.

CEO Sam Altman praised the feature as a ‘new high-water mark’ for creative control, although he acknowledged the potential for some users to create offensive content.

OpenAI plans to monitor how users interact with this tool and adjust as needed, especially as the technology moves closer to artificial general intelligence (AGI).

Instagram users react to Meta’s new AI experiment

Meta has come under fire once again, this time over a new AI experiment on Instagram that suggests comments for users. Some users accused the company of using AI to inflate engagement metrics, potentially misleading advertisers and diminishing authentic user interaction.

The feature, spotted by test users, involves a pencil icon next to the comment bar on Instagram posts. Tapping it generates suggested replies based on the image’s content.

Meta has confirmed the feature is in testing but did not reveal plans for a broader launch. The company stated that it is exploring ways to incorporate Meta AI across different parts of its apps, including feeds, comments, groups, and search.

Public reaction has been largely negative, with concerns that AI-generated comments could flood the platform with inauthentic conversations. Social media users voiced fears of fake interactions replacing genuine ones, and some accused Meta of deceiving advertisers through inflated statistics.

Comparisons to dystopian scenarios were common, as users questioned the future of online social spaces.

This isn’t the first time Meta has faced backlash for its AI ventures. Previous attempts included AI personas modelled on celebrities and diverse identities, which were criticised for being disingenuous and engineered by largely homogenous development teams.

The future of AI-generated comments on Instagram remains uncertain as scrutiny continues to mount.

Meta agrees to halt targeted ads in landmark UK privacy case

Meta, the owner of Facebook and Instagram, has agreed to stop targeting a UK citizen with personalised adverts as part of a settlement in a landmark privacy case.

The case, which avoided a high court trial, was brought in 2022 by human rights campaigner Tanya O’Carroll, who claimed Meta had violated UK data laws by processing her personal data for targeted advertising without her consent.

O’Carroll’s case received support from the UK’s data watchdog, the Information Commissioner’s Office (ICO), which stated that users have the right to opt out of targeted ads.

The settlement has been hailed as a victory for O’Carroll, with potential implications for millions of social media users in the UK. Meta, however, disagreed with the claims and said it was instead considering introducing a subscription model in the UK for users who want an advert-free version of its platforms.

The ICO’s stance in favour of privacy rights could prompt similar lawsuits in the future, as users are increasingly demanding control over how their data is used online.

O’Carroll argued that the case demonstrated the growing desire for more control over surveillance advertising and said that the ICO’s support could encourage more people to object to targeted ads.

Meta, which generates most of its revenue from advertising, emphasised that it took its privacy obligations seriously and was exploring the option of a paid, ad-free service for UK users.

Apple plans to add cameras to future Apple Watch

Apple is reportedly planning to introduce cameras to its Apple Watch lineup within the next two years, integrating advanced AI-powered features like Visual Intelligence.

According to Bloomberg’s Mark Gurman, the standard Apple Watch Series will have a camera embedded within the display, while the Apple Watch Ultra will feature one on the side near the digital crown.

These cameras will allow the smartwatch to observe its surroundings and use AI to provide real-time, useful information to users.

Apple is also exploring similar camera technology for future AirPods, aiming to enhance their functionality with AI-driven capabilities.

The concept builds on the Visual Intelligence feature introduced with the iPhone 16, which allows users to extract details from flyers, identify locations, and more using the phone’s camera.

While the current system relies on external AI models, Apple is working on in-house AI technology that is expected to power these features by 2027, when the camera-equipped Apple Watch and AirPods are likely to be released.

The move is part of Apple’s broader push into AI, led by Mike Rockwell, who previously spearheaded the Vision Pro project.

Rockwell is now overseeing the upgrade of Siri’s language model, which has faced delays, and contributing to visionOS, the operating system expected to support AI-enhanced AR glasses in the future. Apple’s increasing focus on AI suggests a shift towards more intelligent, context-aware wearable devices.

Whistle-blower claims Meta is hindering legislative engagement

Former Facebook executive turned whistle-blower Sarah Wynn-Williams says Meta is preventing her from speaking to lawmakers about her experiences at the company following the release of her memoir Careless People. Meta filed for emergency arbitration the day her book was published, claiming it violated a non-disparagement agreement she signed upon leaving.

An arbitrator then temporarily barred her from promoting the book or making any critical remarks about Meta. As a result, Wynn-Williams says she cannot respond to requests from US, UK, and EU lawmakers who want to speak with her about serious public interest issues raised in her memoir.

These include Meta’s alleged ties with the Chinese government and the platform’s impact on teenage girls. Her lawyers argue the arbitration order unfairly blocks her from contributing to ongoing investigations and legislative inquiries.

Meta maintains it does not intend to interfere with Wynn-Williams’ legal rights and insists the claims in her book are outdated or false. The company also points out that she can still file complaints with government agencies.

Wynn-Williams has filed whistle-blower complaints with the SEC and the Department of Justice. Her memoir, which describes internal controversies at Meta — including sexual harassment claims and the company’s ambitions in China — debuted on the New York Times best-seller list.

Despite Meta’s legal pushback, her legal team argues that silencing her voice is a disservice to the public and lawmakers working to address the social media giant’s influence and accountability.

How scammers are using fake Google Maps listings to target customers

Google has removed 10,000 fake business listings from Google Maps and filed a lawsuit against a scam network accused of creating and selling fraudulent profiles.

The legal action was prompted by a complaint from a Texas locksmith who discovered someone had impersonated their business on the platform. That led Google to uncover a broader scheme involving fake listings for profit.

The company warns that scammers are using increasingly advanced methods to trick users. These fake listings may appear legitimate, leading customers to contact or visit them.

Victims are sometimes overcharged for services or misled into paying upfront for services that are never delivered. Scammers also use fake reviews and manipulated Q&As to make the listings seem trustworthy.

In 2023 alone, Google blocked or removed 12 million fake business profiles, an increase of one million from the previous year.

The company has also been cracking down on businesses that use fake engagement tactics, including artificial reviews, to falsely inflate their reputations.

Internationally, Google has begun implementing stricter rules in response to growing regulatory pressure, including in the UK, where it has restricted businesses engaged in deceptive review manipulation.

Google adds Mind Maps to NotebookLM

Google has unveiled a new feature called Mind Maps for its AI-powered research tool, NotebookLM. Mind maps are visual diagrams that help users understand complex subjects by displaying ideas and their connections.

The addition follows the recent release of Audio Overviews, which provide AI-generated podcasts summarising key points from documents, articles, and videos.

NotebookLM, available in both free and paid versions, helps users summarise content and offers interactive conversations with AI to deepen understanding.

The new Mind Maps feature lets users generate and explore visual connections between ideas. Once created, users can zoom, expand or collapse branches, and click on nodes for detailed information on specific topics.

The feature is particularly useful for students or anyone who needs to absorb a lot of information quickly. With the combined power of Mind Maps and Audio Overviews, NotebookLM offers a multi-faceted approach to learning, making it easier to navigate and retain key insights.
