Facebook introduces Friends tab for a personalised feed

Facebook is making it easier for users to focus on posts from friends and family with a new Friends tab, which Meta announced as part of an effort to bring back the ‘OG’ Facebook experience. The latest addition allows users to view a feed filled exclusively with content from their friends.

The Friends tab, located in the tab bar at the bottom of the app, displays posts, stories, and videos from friends, along with birthday reminders, friend requests, and suggested connections. Users in the United States and Canada can customise their tab bar if the feature does not appear automatically.

The move by Meta mirrors similar updates on other social media platforms, such as Threads and X, which introduced dedicated tabs for following-only content.

The change aims to restore Facebook’s original purpose—connecting users with friends and family—by reducing the prominence of algorithm-driven posts from non-followed accounts.

For more information on these topics, visit diplomacy.edu.

Google’s popular search feature gets a rival from Perplexity

AI search company Perplexity is developing a feature similar to Google’s popular Circle to Search, according to CEO Aravind Srinivas. He announced on X that the functionality would be ‘coming soon’ to all Android users, though specific details remain unclear.

A demo video shared by Srinivas showed how users can highlight text in conversations with Perplexity and request further information.

In the demo, a user circled a mention of Roger Federer and asked about his net worth, prompting Perplexity to fetch details from the web. However, since Google has trademarked ‘Circle to Search’, Perplexity may need a different name for its version.

Perplexity has been gaining popularity as an AI-powered search assistant, with some users preferring it over Google’s Gemini. The company recently introduced an AI-driven web browser called Comet, though it remains uncertain whether the browser will expand beyond smartphones to platforms like Windows and macOS.


AI chatbot shows promise in mental health assistance

Dartmouth College researchers have trialled an AI chatbot, Therabot, designed to assist with mental health care. In a groundbreaking clinical trial, the app was tested on individuals with major depressive disorder (MDD), generalised anxiety disorder (GAD), and those at risk for eating disorders.

The results showed encouraging improvements, with users reporting up to a 51% reduction in depression and a 31% decrease in anxiety. These outcomes were comparable to traditional outpatient therapy.

The trial also revealed that Therabot was effective in helping individuals with eating disorder risks, leading to a 19% reduction in harmful thoughts about body image and weight issues.

Researchers noted that after eight weeks of engagement with the app, participants showed significant symptom reduction, marking progress comparable to standard cognitive therapy.

While Therabot’s success offers hope, experts highlight the importance of balancing AI with human oversight, especially in sensitive mental health applications.

The study’s authors emphasised that while AI can help improve access to therapy, particularly for those unable to access in-person care, generative AI tools must be used cautiously, as errors could have serious consequences for individuals at risk of self-harm.


WhatsApp wins support in EU fine appeal

WhatsApp has gained support from an adviser to the European Court of Justice in its fight against a higher fine imposed by the EU privacy watchdog.

Ireland’s Data Protection Commission fined WhatsApp 225 million euros ($242.2 million) in 2021 for privacy breaches.

The fine was increased after the European Data Protection Board (EDPB) intervened.

A lower tribunal had rejected WhatsApp’s challenge, saying the company lacked legal standing. However, WhatsApp appealed to the Court of Justice of the European Union (CJEU).

Advocate General Tamara Ćapeta disagreed with the tribunal, recommending that the case be referred back to the General Court for further review.

The CJEU usually follows the adviser’s recommendations, and a final ruling is expected soon. This case could have significant implications for the fine imposed on WhatsApp.


Trump weighs tariff cuts to secure TikTok deal

US President Donald Trump has indicated he is willing to reduce tariffs on China as part of a deal with ByteDance, TikTok’s Chinese parent company, to sell the popular short-video app.

ByteDance faces an April 5 deadline to divest TikTok’s US operations or risk a nationwide ban over national security concerns.

The law mandating the sale stems from fears in Washington that Beijing could exploit the app for influence operations and data collection on American users.

Trump suggested he may extend the deadline if negotiations require more time and acknowledged China’s role in the deal’s approval. Speaking to reporters, he hinted that tariff reductions could be used as leverage to finalise an agreement.

China’s commerce ministry responded by reaffirming its stance on trade discussions, stating that engagement with Washington should be based on mutual respect and benefit.

The White House has taken an active role in brokering a potential sale, with discussions centring on major non-Chinese investors increasing their stakes to acquire TikTok’s US operations. Vice President JD Vance has expressed confidence that a framework for the deal could be reached by the April deadline.

Free speech advocates, meanwhile, continue to challenge the law, arguing that banning TikTok could violate the First Amendment rights of American users.


Trump dismisses Signal leak, supports Waltz

US President Donald Trump on Tuesday downplayed the incident in which sensitive military plans for a strike against Yemen’s Houthis were mistakenly shared in a group chat that included a journalist. Trump referred to it as ‘the only glitch in two months’ and insisted that it was ‘not a serious’ issue.

The development, which surprised him when first questioned by reporters, has sparked criticism from Democratic lawmakers accusing the administration of mishandling sensitive information.

The lapse occurred when US National Security Adviser Mike Waltz unintentionally included Jeffrey Goldberg, editor-in-chief of The Atlantic, in a group chat with 18 high-ranking officials discussing military strike plans.

Waltz admitted to the mistake and accepted full responsibility, stating that an aide had mistakenly added Goldberg’s contact to the conversation.

The incident, which took place over the Signal app, has raised concerns due to the app’s public availability and its use for discussing such sensitive matters.

While Trump continued to express support for Waltz, Democratic critics, including former Secretary of State Hillary Clinton, have voiced strong disapproval.

Clinton, commenting on the breach, highlighted the irony of the situation, given Trump’s previous criticism of her own use of a private email server for sensitive material.


X’s Türkiye tangle, between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye in the past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, a political figure whose detention has provoked nationwide protests. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight. 

Global news reveals that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet, a twist: X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts,’ vowing to defend free speech. 

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities flagged 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? Partial compliance: the platform appears to have suspended many of the flagged accounts.

This is not the first time Turkish authorities have required platforms to take action. For instance, during the 2013 Gezi Park protests, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions like Article 299 of the Penal Code (insulting the president) as grounds for fining platforms that do not align with government content policy. Freedom House’s 2024 report labels the country’s internet freedom as ‘not free,’ citing a history of throttling dissent online. Yet, X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

Either way, one theory whispered across X posts is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in Türkiye in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it is an example reflecting the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in laws and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into cultural fabrics that prizes collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval—think coups in 1960 and 1980. Consequently, protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked out from under their feet, reinforcing an infamous sociocultural norm: speak too loud and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting some of the government’s requests, while others decry its compliance as a betrayal. This duality brings us to the conclusion that digital platforms aren’t neutral arbiters in free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, while Türkiye’s 86% suggests a higher deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives and comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the reported refusal of some government requests draws a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X is suing Modi’s government over content-removal orders in an ongoing censorship fight. In the US, X fights court battles to protect user speech. In Türkiye, it bows (partly) to avoid exile. Each case underscores a sociocultural truth: content policy isn’t fixed; it’s a continuous legal tug-of-war between big tech, national power, and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissents and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand, a gag in the other, while protesters shout into the fray.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated legally licensing books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.


Does Section 230 of the US Communications Decency Act protect users or tech platforms?

Section 230 of the US Communications Decency Act is typically seen as shielding tech platforms from liability for content posted by their users. In a recent article, the Electronic Frontier Foundation argues that Section 230 also protects users’ ability to participate in digital life.

The piece argues that repealing or altering Section 230 could inadvertently strengthen the position of big tech firms by removing the financial burden of litigation that smaller companies and startups cannot bear. Without these protections, smaller services might crumble under expensive legal challenges, stifling innovation and reducing competition in the digital landscape.

Such a scenario would leave big tech with even greater market dominance, which opponents of Section 230 seem to overlook. Additionally, the article addresses the misconception that eliminating Section 230 would enhance content moderation.

It clarifies that the law enables platforms to implement and enforce their standards without fear of increased liability, encouraging responsible moderation. EFF’s article argues that by allowing users and platforms to self-regulate, Section 230 prevents the US government from overreaching into defining acceptable speech, upholding a cornerstone of democratic values.


OpenAI unveils new image generator in ChatGPT

OpenAI has rolled out an image generator feature within ChatGPT, enabling users to create realistic images with improved accuracy. The new feature, available for all Plus, Pro, Team, and Free users, is powered by GPT-4o, which now offers distortion-free images and more accurate text generation.

OpenAI shared a sample image of a boarding pass, showcasing the advanced capabilities of the new tool.

Previously, image generation was available through DALL-E, but its results often contained errors and were easily identifiable as AI-generated. Now integrated into ChatGPT, the new tool allows users to describe images with specific details such as colours, aspect ratios, and transparent backgrounds.

The update aims to enhance creative freedom while maintaining a higher standard of image quality.

CEO Sam Altman praised the feature as a ‘new high-water mark’ for creative control, although he acknowledged the potential for some users to create offensive content.

OpenAI plans to monitor how users interact with this tool and adjust as needed, especially as the technology moves closer to artificial general intelligence (AGI).
