Digital Watch newsletter – Issue 97 – March 2025

Hourglass

Snapshot: The developments that made waves

AI governance

Chinese companies are increasingly backing DeepSeek's AI, marking a pivotal moment for the industry.

The European Commission has launched the OpenEuroLLM Project, a new initiative to develop open-source, multilingual AI models.

Australia has banned Chinese AI startup DeepSeek from all government devices, citing security risks.

World leaders gathered in Paris for the second day of the Artificial Intelligence (AI) Action Summit, where the focus turned to balancing national interests with global cooperation.

In his op-ed, From Hammurabi to ChatGPT, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle of legal accountability in modern AI regulation and governance.

Technologies

DeepSeek, an up-and-coming AI startup from China, is rapidly pushing forward the release of its latest AI model, R2, following the remarkable success of its predecessor, R1.

Elon Musk’s AI startup, xAI, has unveiled its latest AI model, Grok-3, which the billionaire claims is the most advanced chatbot technology.

The New York Times has officially approved the use of AI tools for its editorial and product teams, allowing AI to assist with tasks such as generating social media copy, writing SEO headlines, and coding.

Microsoft has announced a groundbreaking quantum computing chip, Majorana 1, which it claims could make useful quantum computers a reality within years.

China has warned that the United States' efforts to pressure other nations into targeting its semiconductor industry will ultimately backfire.

Infrastructure

A state-of-the-art space lab on the outskirts of Cairo, touted as Africa’s first satellite production facility, has been built with substantial Chinese involvement. 

Elon Musk’s Starlink network is facing increasing competition in the satellite internet market, particularly from SpaceSail, a Shanghai-based company backed by the Chinese government, and Amazon’s Project Kuiper.

Cybersecurity

The EU Commission introduced a proposal aimed at strengthening the EU’s response to large-scale cyber attacks.

Hackers have stolen $1.5 billion from Dubai-based cryptocurrency exchange Bybit in what is believed to be the largest digital heist in history.

Following the recent security breach at Bybit, major cryptocurrency firms have joined forces to respond to the attack and mitigate its impact.

Digital rights

Elon Musk has reignited his rivalry with OpenAI by leading a consortium in a staggering $97.4 billion bid to acquire the nonprofit that governs the ChatGPT creator.

South Korea’s National Intelligence Service (NIS) has raised concerns about the Chinese AI app DeepSeek, accusing it of excessively collecting personal data and using it for training purposes.

TikTok has introduced a new method for US Android users to download the app directly from its website, bypassing restrictions imposed by app stores.

South Korea’s data protection authority has suspended new downloads of the Chinese AI app DeepSeek, citing concerns over non-compliance with the country’s privacy laws.

A US federal judge has denied a request to temporarily block Elon Musk’s Department of Government Efficiency (DOGE) from accessing data from seven federal agencies or making further workforce cuts.

Legal

An online education company has filed a lawsuit against Google, claiming its AI-generated search overviews are damaging digital publishing.

The Trump administration is reevaluating the conditions of CHIPS and Science Act subsidies, which allocate $39 billion to boost domestic semiconductor production.

Elon Musk-owned social media platform X has successfully removed a judge from a German court case concerning demands for real-time election data.

Trump Media & Technology Group and Rumble have filed an emergency motion in a US court against Brazilian Supreme Court Justice Alexandre de Moraes.

Internet economy

Chinese investors are flocking to AI-related stocks, betting that the success of home-grown startup DeepSeek will propel China to the forefront of the AI race amid the escalating Sino-US technology conflict. 

In a meeting at the White House on Friday, US President Donald Trump and Nvidia CEO Jensen Huang discussed the emerging challenges posed by China’s AI advancements, particularly the rapid rise of DeepSeek that has disrupted the global tech industry.

French President Emmanuel Macron announced private sector investments totalling around 109 billion euros ($112.5 billion) in France's AI sector at the Paris AI Action Summit.

Elon Musk, who is leading Donald Trump’s federal cost-cutting initiative, has announced that efforts are underway to shut down the United States Agency for International Development (USAID).

President Donald Trump confirmed on Wednesday that he was in active discussions with China over the future of TikTok, as the US seeks to broker a sale of the popular app.

Elon Musk’s social media company X is currently discussing raising funds from investors at a $44 billion valuation.

Development

Alibaba has announced plans to invest at least 380 billion yuan ($52.44 billion) in cloud computing and AI infrastructure over the next three years.

US drugmaker Amgen has announced a $200 million investment in a new technology centre in southern India, which will focus on using AI and data science to support the development of new medicines.

Sociocultural

A German court has ruled that Elon Musk’s social media platform X must provide researchers with data to track the spread of misinformation ahead of the country’s national election on 23 February.

French prosecutors have launched an investigation into X, formerly known as Twitter, over alleged algorithmic bias.

Meta has launched a new initiative with UNESCO to enhance AI language recognition and translation, focusing on underserved languages.

A recent report by Australia’s eSafety regulator reveals that children in the country are finding it easy to bypass age restrictions on social media platforms.

A Russian court has fined Google 3.8 million roubles (£32,600) for hosting YouTube videos that allegedly instructed Russian soldiers on how to surrender.

Mexico has strongly opposed Google’s decision to rename the Gulf of Mexico as the ‘Gulf of America’ for US Google Maps users.

For more information on cybersecurity, digital policies, AI governance and other related topics, visit diplomacy.edu.


Data Protection Day 2025: A new mandate for data protection

Data Protection Day 2025 brought experts to Brussels and online to discuss the latest privacy challenges. The event featured key speeches, panels, and discussions on data protection enforcement, international cooperation, and the impact of emerging technologies.

In his keynote, Leonardo Cervera Navas warned about algorithms being used to destabilise EU democracies and about the power imbalance caused by tech monopolies. Despite these challenges, he emphasised that European values and regulatory tools like the Digital Clearing House 2.0 could help uphold privacy.

A panel moderated by Kait Bolongaro examined data protection priorities and enforcement challenges. European Data Protection Supervisor Wojciech Wiewiorowski stressed the role of strong data protection authorities (DPAs) in ensuring GDPR compliance. Matthias Kloth highlighted the modernisation of Convention 108+ as a key legal advancement. The discussion also covered international cooperation, with panellists agreeing that while global collaboration is necessary, privacy standards must not be diluted.

Audience questions raised concerns about the complexity and cost of enforcing privacy rights. Wiewiorowski saw no major GDPR overhaul in the near future but suggested procedural adjustments. Jo Pierson proposed that civil society organisations assist individuals facing legal barriers. The discussion also touched on China’s DeepSeek AI, with panellists refraining from labelling it a threat but stressing the need for Europe to control AI training datasets. Wiewiorowski likened the AI race to the Cold War’s ‘Sputnik moment,’ questioning if Europe was falling behind.

A session on future data protection challenges, led by Anna Buchta, highlighted regulatory complexity. Marina Kaljurand noted a significant digital knowledge gap among European lawmakers, complicating policy decisions. She also raised concerns about the UK’s adequacy decision, which expires in June 2025, and worsening US-EU relations under Trump. UK Information Commissioner John Edwards emphasised AI’s disruptive impact on biometrics and tracking, while Alberto Di Felice from DIGITALEUROPE criticised excessive bureaucracy, advocating for streamlined regulatory oversight.

Beatriz de Anchorena, head of Argentina’s data protection authority, championed Convention 108+ as a global privacy standard. Argentina, the first non-European country to receive EU adequacy, has remained a leader in data protection reform.

A discussion on neuroscience and privacy, moderated by Ella Mein, explored ethical concerns surrounding brain data. Professor Marcello Ienca warned of potential ‘neuro-discrimination’ and the dangers of exploiting brain data. ECtHR Jurisconsult Anna Austin highlighted legal challenges, noting the high standards required for data waivers.

The final panel, led by Gabriela Zanfir-Fortuna, addressed the need for stronger enforcement. Johnny Ryan of the Irish Council for Civil Liberties criticised the EU’s slow response to data misuse, while Nora Ni Loideain emphasised the GDPR’s role in giving DPAs greater enforcement power.

The event underscored the need for robust regulation, global cooperation, and better enforcement mechanisms to protect privacy in a rapidly evolving digital landscape.


Legacy media vs social media and alternative media channels

The rapid spread of digital information has transformed communication, offering opportunities and challenges. While social media and alternative platforms have democratised access to information, they have also enabled misinformation, deepfakes, and sensationalism to flourish. The tension between traditional media and these new forms of communication is at the heart of current debates on content policy and media integrity.

The case of Novak Djokovic at the 2025 Australian Open highlights this shift. After a Channel 9 journalist made derogatory remarks about him and his Serbian supporters, Djokovic refused an on-court interview and took to social media to share his perspective. His video went viral, attracting support from figures like Elon Musk, who criticised traditional media as a 'negativity filter'. This incident underscored the ability of social media to bypass mainstream media narratives, raising questions about journalistic objectivity, editorial oversight, and the role of direct communication in shaping public discourse.

Similarly, alternative media’s influence on political discourse was evident in Joe Rogan’s podcast, The Joe Rogan Experience. In 2024, Donald Trump’s appearance on the podcast allowed him to engage with audiences outside traditional news constraints, potentially boosting his presidential campaign. In contrast, Kamala Harris declined an invitation after requesting pre-approved questions. This difference illustrated how politicians navigate new media environments—some embracing unscripted discussions, others preferring controlled narratives. The case of Vladimir Klitschko further demonstrated how alternative media offers international figures a platform for nuanced discussions on global issues.

Elon Musk’s experience with the media further highlights these dynamics. After traditional media misrepresented a gesture he made at a public event, Musk turned to X (formerly Twitter) to counter the narrative. His criticism of ‘legacy media’ as biased and slow to adapt resonated with many, reinforcing the growing preference for direct, unfiltered communication. However, this shift presents risks, as social media and alternative platforms lack the editorial oversight that traditional outlets provide, allowing misinformation to spread more easily.

The rise of podcasts, independent media, and social networks has disrupted traditional journalism, offering new ways to engage audiences. While these platforms provide greater freedom of expression, they raise concerns about accuracy, misinformation, and accountability. The challenge remains in balancing openness with factual integrity, ensuring that media—whether traditional or alternative—serves the public responsibly in an era where the boundaries between truth and fabrication continue to blur.


Australian kids overlook social media age checks

A recent report by Australia’s eSafety regulator reveals that children in the country are finding it easy to bypass age restrictions on social media platforms. The findings come ahead of a government ban, set to take effect at the end of 2025, that will prevent children under the age of 16 from using these platforms. The report highlights data from a national survey on social media use among 8 to 15-year-olds and feedback from eight major services, including YouTube, Facebook, and TikTok.

The report shows that 80% of Australian children aged 8 to 12 were using social media in 2024, with YouTube, TikTok, Instagram, and Snapchat being the most popular platforms. While most platforms, except Reddit, require users to enter their date of birth during sign-up, the report indicates that these systems rely on self-declaration, which can be easily manipulated. Despite these weaknesses, 95% of teens under 16 were found to be active on at least one of the platforms surveyed.

While some platforms, such as TikTok, Twitch, and YouTube, have introduced tools to proactively detect underage users, others have not fully implemented age verification technologies. YouTube remains exempt from the upcoming ban, allowing children under 13 to use the platform with parental supervision. However, eSafety Commissioner Julie Inman Grant stressed that there is still significant work needed to enforce the government’s minimum age legislation effectively.

The report also noted that most of the services surveyed had conducted research to improve their age verification processes. However, as the law approaches, there are increasing calls for app stores to take greater responsibility for enforcing age restrictions.

For more information on these topics, visit diplomacy.edu.

Young people rely on social media for political news

A growing number of young Europeans are turning to social media platforms like TikTok, Instagram, and YouTube as their primary news source, surpassing traditional outlets such as TV and print media. According to the latest European Parliament Youth Survey, 42% of people aged 16 to 30 rely on social media for news about politics and social issues. This shift highlights changing preferences toward fast-paced, accessible content but also raises concerns about the growing risk of disinformation among younger generations.

Younger users, especially those aged 16 to 18, are more likely to trust platforms like TikTok and Instagram, while those aged 25 to 30 tend to rely more on Facebook, online press, and radio for their news. However, the rise of social media as a news source has also led to increased exposure to fake news. A report from the Reuters Institute revealed that 27% of TikTok users struggle to identify misleading content, while Instagram has faced criticism for relaxing its fact-checking systems.

Despite being aware of the risks, young Europeans continue to engage with social media for news. A significant 76% of respondents reported encountering fake news in the past week, yet platforms like Instagram remain the most popular news sources. This trend is impacting trust in political institutions, with many young people expressing scepticism toward the EU and skipping elections due to a lack of information.

The reliance on social media for news has shifted political discourse, as fake news and AI-generated content have been used to manipulate public opinion. The constant exposure to sensationalised false information is also having psychological effects, increasing anxiety and confusion among young people and pushing some to avoid news altogether.

For more information on these topics, visit diplomacy.edu.

Russia denies digital rouble expiry rumours

The Russian Central Bank has dismissed claims that unused digital rouble coins in inactive wallets will be erased. Officials say the reports, spreading on social media, are false and have no basis in law. Alla Bakina, a senior bank executive, stressed that digital roubles, like cash, belong entirely to the wallet holder, who can spend them whenever they choose.

Concerns have also surfaced that Russian citizens will be forced to use the digital rouble. However, the Central Bank insists that opening a digital rouble wallet will remain voluntary. Officials criticised social media ‘pseudo-experts’ for spreading misinformation and reassured the public that there is no need to submit formal refusals to banks or government offices.

Despite these reassurances, scepticism remains. Some critics argue that while the bank may not impose expiry dates now, digital currencies allow for future spending restrictions. The digital rouble has been in testing since August 2023, with a full rollout expected before the year’s end.

For more information on these topics, visit diplomacy.edu.

Legacy media vs social media and alternative media channels

In today’s digital age, the rapid proliferation of information has empowered and complicated the way societies communicate and stay informed. At its best, this interconnectedness fosters creativity, knowledge-sharing, and transparency. However, it also opens the floodgates for misinformation, disinformation, and the rise of deepfakes, tools that distort truth and challenge our ability to distinguish fact from fiction. These modern challenges are not confined to the fringes of the internet; they infiltrate mainstream platforms, influencing public opinion, political decisions, and cultural narratives on an unprecedented scale.

The emergence of alternative media platforms like podcasts, social media networks, and independent streaming channels has disrupted the traditional gatekeepers of information. While these platforms offer voices outside the mainstream a chance to be heard, they also often lack the editorial oversight of traditional media. This peculiarity has created a complex media ecosystem where authenticity competes with sensationalism, and viral content can quickly overshadow fact-checking.

Content policy has become a battlefield, with platforms struggling to balance free expression and the need to curb harmful or deceptive narratives. The debate is further complicated by the increasing sophistication of deepfake technology and AI-generated content, which can fabricate convincing yet entirely false narratives. Whether it is a politician giving a speech they never delivered, a celebrity endorsing a product they have never used, or a manipulated video sparking social unrest, the stakes are high.

These challenges have sparked fierce debates among tech giants, policymakers, journalists, and users on who should bear responsibility for ensuring accurate and ethical content. Against this backdrop, recent high-profile incidents, such as Novak Djokovic’s response to perceived media bias, Joe Rogan’s defiance of traditional norms, and Elon Musk’s ‘Nazi salute’, highlight the tension between established media practices and the uncharted territory of modern communication channels. These case studies shed light on the shifting dynamics of information dissemination in an era where the lines between truth and fabrication are increasingly blurred.

Case study No. 1: The Djokovic incident, traditional media vs social media dynamics

The intersection of media and public discourse took centre stage during the 2025 Australian Open when tennis icon Novak Djokovic decided to boycott an on-court interview with Channel 9, the official broadcaster of the tournament. The decision, rooted in a dispute over comments made by one of its journalists, Tony Jones, highlighted the ongoing tension between traditional media’s content policies and the freedom of expression offered by modern social media platforms.

The incident

On 19 January 2025, following his victory over Jiri Lehecka in the fourth round of the Australian Open, Novak Djokovic, the 24-time Grand Slam champion, refused to engage in the customary on-court interview for Channel 9, a long-standing practice in tennis that directly connects players with fans. The reason was not personal animosity towards the interviewer, Jim Courier, but rather a response to remarks made by Channel 9 sports journalist Tony Jones. During a live broadcast, Jones had mocked Serbian fans chanting for Djokovic, calling the player ‘overrated’ and a ‘has-been’, and even suggested they ‘kick him out’, a phrase that resonated deeply given Djokovic’s previous deportation from Australia over vaccine mandate issues in 2022.

The response and social media amplification

In his post-match press conference, Djokovic clarified his stance, saying that he would not conduct interviews with Channel 9 until he received an apology from both Jones and the network for what he described as ‘insulting and offensive’ comments. The incident quickly escalated beyond the tennis courts when Djokovic took to X (formerly Twitter) to share a video explaining his actions, directly addressing his fans and the broader public. 

Djokovic’s move was both a protest against the Australian broadcaster and a strategic use of social media to bypass traditional media channels, which are often seen as gatekeepers of information with their own biases and agendas. The response was immediate; the video went viral, drawing comments from various quarters, including from Elon Musk, the owner of X. Musk retweeted Djokovic’s video with a critique of ‘legacy media’, stating, ‘It’s way better just to talk to the public directly than go through the negativity filter of legacy media.’ Djokovic’s simple reply, ‘Indeed’, underscored his alignment with this view, further fuelling the discussion about media integrity and control.

Content policy and misinformation

The incident brings to light several issues concerning content policy in traditional media. Traditional media like Channel 9 operate under strict content policies where editorial decisions are made to balance entertainment and journalistic integrity. However, remarks like those from Jones can blur this line, leading to public backlash and accusations of bias or misinformation.

The response from Channel 9, an apology after the public outcry, showcases the reactive nature of traditional media when managing content that might be deemed offensive or misinformative, often after significant damage has been done to public perception.

Unlike social media, where anyone can broadcast their viewpoint, traditional media has the infrastructure for fact-checking but can also be accused of pushing a narrative. The Djokovic case has raised questions about whether Jones’s comments were intended as humour or reflected a deeper bias against Djokovic or his nationality.

The role of social media

Social media platforms such as X enable figures like Djokovic to communicate directly with their audience, controlling their narrative without the mediation of traditional media. Direct public exposure can be empowering, but it can also bypass established journalistic checks and balances.

While this incident showcased the power of social media for positive storytelling, it also highlights the platform’s potential for misinformation. Without editorial oversight, messages can be amplified without context or correction, leading to public misinterpretation.

Case study No. 2: Alternative media and political discourse – The Joe Rogan experience

As traditional media grapples with issues of trust and relevance, alternative media platforms like podcasts have risen, offering new avenues for information dissemination. Joe Rogan’s podcast, ‘The Joe Rogan Experience’, has become a significant player in this space, influencing political discourse and public opinion, mainly through his interviews with high-profile figures such as Donald Trump and Kamala Harris.

Donald Trump’s podcast appearance

In 2024, Donald Trump’s appearance on Joe Rogan’s podcast was a pivotal moment, often credited with aiding his resurgence in the political arena and contributing to his election as the 47th President of the USA. The podcast format made possible an extended, unscripted conversation, allowing Trump to discuss his policies, personality, and plans without the usual media constraints.

Unlike traditional media interviews, where questions and answers are often tightly controlled, Rogan’s podcast allowed Trump to engage with audiences more authentically, potentially influencing voters who felt alienated by mainstream media.

Critics argue that such platforms can spread misinformation due to the lack of immediate fact-checking. Yet, supporters laud the format for allowing a deeper understanding of the candidate’s views without the spin of journalists.

Kamala Harris’s conditional interview

Kamala Harris’s approach to the same platform was markedly different. She requested special conditions for her interview, including pre-approved questions, which Rogan declined. Harris then chose not to participate, highlighting a critical difference in how politicians view and interact with alternative media. Her decision reflects a broader strategy among some politicians to control their media exposure, preferring environments where the narrative can be shaped to their advantage, which is often less feasible in an open podcast format.

Some might see her refusal as avoidance of tough, unfiltered questions, potentially impacting her public image as less transparent than figures like Trump, who embraced the platform.

Vladimir Klitschko’s interview on ‘The Joe Rogan Experience’

Adding another layer to this narrative, former Ukrainian boxer and political figure Vladimir Klitschko appeared on Rogan’s show, discussing his athletic career and geopolitical issues affecting Ukraine. This interview showcased how alternative media like podcasts can give a voice to international figures, offering a different perspective on global issues that might be underrepresented or misrepresented in traditional media.

Rogan’s discussions often delve into subjects with educational value, providing listeners with nuanced insights into complex topics, something traditional news might cover in soundbites.

Analysing media dynamics

Content policy in alternative media: While Rogan’s podcast does not adhere to the same content policies as traditional media, it does have its own set of guidelines, which include a commitment to free speech and a responsibility not to platform dangerous misinformation.

Fact-checking and public accountability: Unlike traditional media, where fact-checking can be institutional, podcast listeners often take on this role, leading to community-driven corrections or discussions on platforms like Reddit or X.

The spread of disinformation: Like social media, podcasts can be vectors of misinformation if not moderated or if hosts fail to challenge or correct inaccuracies. However, Rogan’s approach often includes challenging guests, providing a counterbalance.

Impact on journalism: The rise of podcasts challenges traditional journalism by offering alternative narratives, sometimes at the cost of depth or accuracy but gaining in terms of directness and personal connection with the audience.

Case study No. 3: Elon Musk and the ‘Nazi salute’

The evolution of media consumption has been profound, with the rise of social media and alternative channels significantly altering the landscape traditionally dominated by legacy media. The signs of this evolution are poignantly highlighted in a tweet by Elon Musk, where he commented on the dynamics of media interaction:

‘It was astonishing how insanely hard legacy media tried to cancel me for saying “my heart goes out to you” and moving my hand from my heart to the audience. In the end, this deception will just be another nail in the coffin of legacy media.’ – Elon Musk, 24 January 2025, 10:22 UTC 

Legacy media: the traditional gatekeepers

Legacy media, encompassing print, television, and radio, has long been the public’s primary source of news and information. These platforms have established content policies to ensure journalistic integrity, fact-checking, and editorial oversight. However, as Musk’s tweet suggests, they are often perceived as inherently biased, sometimes acting as ‘negativity filters’ that skew public perception. This critique reflects a broader sentiment that legacy media can be slow to adapt, overly cautious, and sometimes accused of pushing an agenda, as seen in Musk’s experience of being ‘cancelled’ over a simple gesture interpreted out of context. The traditional model involves gatekeepers who decide what news reaches the audience, which can lead to a controlled narrative that might not always reflect the full spectrum of public discourse. 

Modern social media: direct engagement

In contrast, social media platforms like X (formerly Twitter) democratise information dissemination by allowing direct communication from individuals to the public, bypassing traditional media gatekeepers. Musk’s use of X to address his audience directly illustrates this shift. Social media provides an unfiltered stage where public figures can share their stories, engage in real-time, and counteract what they see as biased reporting from legacy media. This directness enhances transparency and authenticity but also poses significant challenges. Without the same level of editorial oversight, misinformation can spread rapidly, as social media algorithms often prioritise engagement over accuracy, potentially amplifying falsehoods or sensational content.

Alternative media channels: a new frontier

Beyond social media, alternative channels like podcasts, independent streaming services, and blogs have emerged, offering even more diverse voices and perspectives. These platforms often operate with less stringent content policies, emphasising freedom of speech and direct audience interaction. For instance, podcasts like ‘The Joe Rogan Experience’ have become influential by hosting long-form discussions that delve deeper into topics than typical news segments. This format allows for nuanced conversations but lacks the immediate fact-checking mechanisms of traditional media, relying instead on the community or the host’s discretion to challenge or correct misinformation. The rise of alternative media has challenged the monopoly of legacy media, providing platforms where narratives can be shaped by content creators themselves, often leading to a richer, albeit sometimes less regulated, exchange of ideas. 

Content policy and freedom of expression

The tension between content policy and freedom of expression is starkly highlighted in Musk’s tweet. Legacy media’s structured approach to content can sometimes suppress voices or misrepresent intentions, as Musk felt with his gesture. On the other hand, social media and alternative platforms offer broader freedom of expression, yet this freedom comes with the responsibility to manage content that might be misleading or harmful. The debate here revolves around how much control should be exerted over content to prevent harm while preserving the open nature of these platforms. Musk’s situation underscores the need for a balanced approach where the public can engage with authentic expressions without the distortion of ‘legacy media’s negativity filter’. 

To summarise:

The juxtaposition of Djokovic’s media strategies and the political interviews on ‘The Joe Rogan Experience’ illustrates a shift in how information is consumed, controlled, and critiqued. Traditional media continues to wield considerable influence but is increasingly challenged by platforms offering less censorship, potentially more misinformation, and direct, unfiltered communication. 

Elon Musk’s tweet is another vivid example of the ongoing battle between legacy media’s control over narrative and the liberating yet chaotic nature of modern social media and alternative channels. These platforms have reshaped the way information is consumed, offering both opportunities for direct, unmediated communication and challenges in maintaining the integrity of information. 

As society continues to navigate this complex media landscape, the balance between ensuring factual accuracy, preventing misinformation, and respecting freedom of speech will remain a critical discussion point. The future of media lies in finding this equilibrium, where the benefits of both traditional oversight (perhaps through stringent regulatory measures) and modern openness can coexist to serve an informed and engaged public.

Chinese social media boosts DeepSeek AI launch

Chinese state-backed social media accounts played a key role in amplifying the launch of DeepSeek’s AI models last week, according to an analysis by the firm Graphika. These accounts, including those of Chinese diplomats and media outlets, used platforms like X (formerly Twitter), Facebook, Instagram, and Weibo to highlight DeepSeek’s challenge to US dominance in the AI sector. This online activity coincided with a significant drop in US tech stocks, including a record one-day loss for Nvidia, shedding $593 billion in market value.

Graphika’s report suggested that this was part of a broader strategy by China to use AI to enhance its global influence and counter American leadership in critical technological fields. The surge in online discussion about DeepSeek’s AI capabilities was noticeable, especially on X, where it surpassed US rival ChatGPT in downloads from Apple’s app store shortly after its release. DeepSeek’s AI assistant also claimed to have been developed at a much lower cost than US competitors, raising concerns about a potential price war in the sector.

While China celebrates DeepSeek’s advancements as a victory over US efforts to limit its tech growth, the US has raised suspicions about whether the company improperly accessed American technology. The Commerce Department is investigating whether DeepSeek used banned US chips in its models, further intensifying tensions between the two countries over AI and tech competition. Meanwhile, major US companies like Microsoft and Meta continue their AI investments despite the challenges.

EU to test social media safeguards ahead of German elections

The European Commission has invited major social media platforms, including Facebook, TikTok, and X, to participate in a ‘stress test’ on 31 January to assess their efforts in combating disinformation ahead of Germany’s election next month. The test is part of the Digital Services Act (DSA), which requires companies to implement measures mitigating risks on their platforms. Similar tests were successfully conducted for the European Parliament elections last year.

EU spokesperson Thomas Regnier explained that the exercise would involve various scenarios to evaluate how platforms respond to potential challenges under the DSA. Senior compliance officers and specialists from companies such as Microsoft, LinkedIn, Google, Snap, and Meta have been invited to collaborate with German authorities in the closed-door session.

TikTok has confirmed its participation, while other platforms have yet to comment. The initiative underscores the European Union’s commitment to ensuring transparency and accountability from tech giants in safeguarding democratic processes during elections.

Hashtag issues add to Meta’s chaotic transition week during US presidency handover

Meta has come under scrutiny after its AI chatbot failed to identify the current US president correctly. Despite Donald Trump’s inauguration on Monday, the chatbot continued to name Joe Biden as president through Thursday. The error led Meta to activate its high-priority troubleshooting protocol, a ‘site event’, to address the issue urgently.

The incident marked at least the third emergency Meta faced this week during the US presidential transition. Other problems included forcing users to re-follow Trump administration profiles on social media and hashtag search errors on Instagram. Meta attributed the re-following issue to delays in transferring White House accounts, which affected ‘unfollow’ requests.

Complaints also arose after searches for Democratic hashtags were blocked while Republican hashtags displayed results normally. Meta acknowledged the issue, claiming it affected searches for various hashtags across the platform. These errors come amid broader platform changes, including scrapping fact-checking programs and reshaping its leadership.

Critics have linked the missteps to perceived shifts in Meta’s political alignment. CEO Mark Zuckerberg’s attendance at Trump’s inauguration and recent strategic moves, such as appointing Trump allies to key positions, have fuelled debate over the platform’s neutrality.

Germany urges social media platforms to tackle disinformation before election

Germany’s interior minister, Nancy Faeser, has called on social media companies to take stronger action against disinformation ahead of the federal parliamentary election on 23 February. Faeser urged platforms like YouTube, Facebook, Instagram, X, and TikTok to label AI-manipulated videos, clearly identify political advertising, and ensure compliance with European laws. She also emphasised the need for platforms to report and remove criminal content swiftly, including death threats.

Faeser met with representatives of major tech firms to underline the importance of transparency in algorithms, warning against the risk of online radicalisation, particularly among young people. Her concerns come amidst growing fears of disinformation campaigns, possibly originating from Russia, that could influence the upcoming election. She reiterated that platforms must ensure they do not fuel societal division through unchecked content.

Calls for greater accountability in the tech industry are gaining momentum. At the World Economic Forum in Davos, Spanish Prime Minister Pedro Sánchez criticised social media owners for enabling algorithms that erode democracy and ‘poison society’. Faeser’s warnings highlight the growing international demand for stronger regulations on social media to safeguard democratic processes.

South Sudan blocks social media after riots and violence

South Sudan has suspended access to social media platforms for at least 30 days following violent riots triggered by videos allegedly showing the killings of South Sudanese nationals in Sudan’s El Gezira state. The decision, announced by the National Communications Authority on Wednesday, aims to curb the spread of extreme content and prevent further unrest. Mobile operators MTN South Sudan and Zain confirmed that platforms like Facebook and TikTok would be inaccessible for up to 90 days.

The riots, which erupted in the capital, Juba, and other cities, led to the deaths of at least 16 Sudanese nationals. Angry youths looted shops, vandalised property, and burned homes belonging to Sudanese nationals, believing Sudan’s military and its allies were involved in the El Gezira killings. South Sudanese authorities have condemned the violence, urging calm and restraint.

The Sudanese army has also criticised what it described as ‘individual violations’ in El Gezira. The social media ban is part of a broader effort to restore order and prevent further acts of retaliation, as tensions remain high between the neighbouring nations.