Apple developing AI coach for Health app

Apple is reportedly working on a revamped version of its Health app, which will feature an AI coach designed to help users improve their health instead of simply tracking basic data.

The AI coach will offer personalised advice based on data collected from users’ medical devices, with a particular focus on food tracking.

Bloomberg’s Mark Gurman, who initially reported on the project in 2023, now indicates that development is progressing, with the new feature expected to launch as part of iOS 19.4, possibly by spring or summer 2026.

The AI coach is currently being trained using data from Apple’s physicians, and the company plans to bring in more medical professionals to provide health-related content, including videos, rather than relying solely on general advice. The new service is reportedly being referred to as Health+.

For more information on these topics, visit diplomacy.edu.

OpenAI faces copyright debate over Ghibli-style images

Studio Ghibli-style artwork has gone viral on social media, with users flocking to ChatGPT’s image generation feature to create new images or transform photos into Japanese anime-inspired versions. Celebrities have also joined the trend, posting Ghibli-style photos of themselves.

However, what began as a fun trend has sparked concerns over copyright infringement and the ethics of AI recreating the work of established artists without respecting their intellectual property.

OpenAI has made the feature available to premium users, but those without subscriptions can still create up to three images for free.

The rise of this feature has led to debates over whether these AI-generated images violate copyright laws, particularly as the style is closely associated with renowned animator Hayao Miyazaki.

Intellectual property lawyer Evan Brown clarified that the style itself isn’t explicitly protected, but he raised concerns that OpenAI’s model may have been trained on Ghibli’s previous works rather than on independent sources, which could raise copyright issues.

OpenAI has responded by taking a more conservative approach with its tools, refusing requests to generate images in the style of living artists.

Despite this, the controversy continues, as artists like Karla Ortiz are suing other AI generators for copyright infringement. Ortiz has criticised OpenAI for not valuing the work and livelihoods of artists, calling the Ghibli trend a clear example of such disregard.

For more information on these topics, visit diplomacy.edu.

Facebook introduces Friends tab for a personalised feed

Facebook is making it easier for users to focus on posts from friends and family with a new Friends tab, which Meta announced as part of an effort to bring back the ‘OG’ Facebook experience. The latest addition allows users to view a feed filled exclusively with content from their friends.

The Friends tab, located in the tab bar at the bottom of the app, displays posts, stories, and videos from friends, along with birthday reminders, friend requests, and suggested connections. Users in the United States and Canada can customise their tab bar if the feature does not appear automatically.

The move by Meta mirrors similar updates on other social media platforms, such as Threads and X, which introduced dedicated tabs for following-only content.

The change aims to restore Facebook’s original purpose—connecting users with friends and family—by reducing the prominence of algorithm-driven posts from non-followed accounts.

For more information on these topics, visit diplomacy.edu.

Google’s popular search feature gets a rival from Perplexity

AI search company Perplexity is developing a feature similar to Google’s popular Circle to Search, according to CEO Aravind Srinivas. He announced on X that the functionality would be ‘coming soon’ to all Android users, though specific details remain unclear.

A demo video shared by Srinivas showed how users can highlight text in conversations with Perplexity and request further information.

In the demo, a user circled a mention of Roger Federer and asked about his net worth, prompting Perplexity to fetch details from the web. However, since Google has trademarked ‘Circle to Search’, Perplexity may need a different name for its version.

Perplexity has been gaining popularity as an AI-powered search assistant, with some users preferring it over Google’s Gemini. The company recently introduced an AI-driven web browser called Comet, though it remains uncertain whether the new highlight-and-search feature will expand beyond smartphones to platforms like Windows and macOS.

For more information on these topics, visit diplomacy.edu.

AI chatbot shows promise in mental health assistance

Dartmouth College researchers have trialled an AI chatbot, Therabot, designed to assist with mental health care. In a groundbreaking clinical trial, the app was tested on individuals with major depressive disorder (MDD), generalised anxiety disorder (GAD), and those at risk for eating disorders.

The results showed encouraging improvements, with users reporting up to a 51% reduction in depression and a 31% decrease in anxiety. These outcomes were comparable to traditional outpatient therapy.

The trial also revealed that Therabot was effective in helping individuals with eating disorder risks, leading to a 19% reduction in harmful thoughts about body image and weight issues.

Researchers noted that after eight weeks of engagement with the app, participants showed significant symptom reduction, marking progress comparable to standard cognitive therapy.

While Therabot’s success offers hope, experts highlight the importance of balancing AI with human oversight, especially in sensitive mental health applications.

The study’s authors emphasised that while AI can help improve access to therapy, particularly for those unable to access in-person care, generative AI tools must be used cautiously, as errors could have serious consequences for individuals at risk of self-harm.

For more information on these topics, visit diplomacy.edu.

WhatsApp wins support in EU fine appeal

WhatsApp has gained support from an adviser to the European Court of Justice in its fight against a higher fine imposed by the EU privacy watchdog.

The Irish Data Protection Authority fined WhatsApp 225 million euros ($242.2 million) in 2021 for privacy breaches.

The fine was increased after the European Data Protection Board (EDPB) intervened.

A lower tribunal, the EU’s General Court, had rejected WhatsApp’s challenge, saying the company lacked legal standing. However, WhatsApp appealed to the Court of Justice of the European Union (CJEU).

Advocate General Tamara Ćapeta disagreed with the tribunal, recommending that the case be referred back to the General Court for further review.

The CJEU usually follows the adviser’s recommendations, and a final ruling is expected soon. This case could have significant implications for the fine imposed on WhatsApp.

For more information on these topics, visit diplomacy.edu.

Trump weighs tariff cuts to secure TikTok deal

US President Donald Trump has indicated he is willing to reduce tariffs on China as part of a deal with ByteDance, TikTok’s Chinese parent company, to sell the popular short-video app.

ByteDance faces an April 5 deadline to divest TikTok’s US operations or risk a nationwide ban over national security concerns.

The law mandating the sale stems from fears in Washington that Beijing could exploit the app for influence operations and data collection on American users.

Trump suggested he may extend the deadline if negotiations require more time and acknowledged China’s role in the deal’s approval. Speaking to reporters, he hinted that tariff reductions could be used as leverage to finalise an agreement.

China’s commerce ministry responded by reaffirming its stance on trade discussions, stating that engagement with Washington should be based on mutual respect and benefit.

The White House has taken an active role in brokering a potential sale, with discussions centring on major non-Chinese investors increasing their stakes to acquire TikTok’s US operations. Vice President JD Vance has expressed confidence that a framework for the deal could be reached by the April deadline.

Free speech advocates, meanwhile, continue to challenge the law, arguing that banning TikTok could violate the First Amendment rights of American users.

For more information on these topics, visit diplomacy.edu.

Trump dismisses Signal leak, supports Waltz

US President Donald Trump on Tuesday downplayed the incident in which sensitive military plans for a strike against Yemen’s Houthis were mistakenly shared in a group chat that included a journalist. Trump referred to it as ‘the only glitch in two months’ and insisted that it was ‘not a serious’ issue.

Trump appeared to be caught off guard when first questioned by reporters about the incident, which has sparked criticism from Democratic lawmakers, who accuse the administration of mishandling sensitive information.

The lapse occurred when US National Security Adviser Mike Waltz unintentionally included Jeffrey Goldberg, editor-in-chief of The Atlantic, in a group chat with 18 high-ranking officials discussing military strike plans.

Waltz admitted to the mistake and accepted full responsibility, stating that an aide had mistakenly added Goldberg’s contact to the conversation.

The incident, which took place on the Signal app, has raised concerns because a publicly available messaging app was used to discuss such sensitive matters.

While Trump continued to express support for Waltz, Democratic critics, including former Secretary of State Hillary Clinton, have voiced strong disapproval.

Clinton, commenting on the breach, highlighted the irony of the situation, given Trump’s previous criticism of her own use of a private email server for sensitive material.

For more information on these topics, visit diplomacy.edu.

X’s Türkiye tangle: between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye in the past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, a political figure whose detention has provoked nationwide protests. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight. 

Global news reports indicate that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet, a twist: X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts,’ vowing to defend free speech.

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities flagged 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? It appears to have partially fulfilled these requests, suspending many of the flagged accounts.

This is not the first time Turkish authorities have required platforms to take action. For instance, during the 2013 Gezi Park protests, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions like Article 299 of the Penal Code (insulting the president) as a lever to fine platforms that don’t align with government content policy. Freedom House’s 2024 report labels the country’s internet freedom as ‘not free,’ citing a history of throttling dissent online. Yet, X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

Either way, one theory, whispered across X posts, is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in Türkiye in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it is an example of the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in law and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into a cultural fabric that prizes collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval—think coups in 1960 and 1980. Protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked out from under their feet, reinforcing an infamous sociocultural norm: speak too loud and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting some of the government’s requests, while others decry its compliance as a betrayal. This duality brings us to the conclusion that digital platforms aren’t neutral arbiters in free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, but Türkiye’s 86% suggests a higher deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives to comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the alleged refusal of government requests shows a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X is suing Modi’s government over content removal orders in a fresh censorship fight. In the US, X fights court battles to protect user speech. In Türkiye, it bows (partly) to avoid exile. Each case underscores a sociocultural truth: content policy isn’t fixed; it’s a continuous legal dispute between big tech, national power, and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissents and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand, a gag in the other, while protesters shout into the fray.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated legal licensing deals for books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

For more information on these topics, visit diplomacy.edu.