Australia has imposed sanctions on the extremist online network ‘Terrorgram’ in an effort to combat rising antisemitism and online radicalisation. Foreign Minister Penny Wong stated that engaging with the group would now be a criminal offence, helping to prevent young people from being drawn into far-right extremism. The move follows similar actions by Britain and the US.
Wong described ‘Terrorgram’ as a network that promotes white supremacy and racially motivated violence, making it the first entirely online entity to face Australian counterterrorism financing sanctions. Offenders could face up to 10 years in prison and substantial fines. Sanctions were also renewed against four other right-wing groups, including the Russian Imperial Movement and The Base.
The network primarily operates on the Telegram platform, which stated that it has long banned such content and removed related channels. The US designated ‘Terrorgram’ as a violent extremist group in January, while Britain criminalised affiliation with it in April.
Australia has seen a rise in antisemitic incidents, including attacks on synagogues and vehicles, since the Israel-Gaza conflict began in October 2023. Police recently arrested members of a neo-Nazi group in Adelaide and charged a man with displaying a Nazi symbol on Australia Day.
The United Kingdom is set to become the first country to criminalise the use of AI to create child sexual abuse images. New offences will target AI-generated explicit content, including tools that ‘nudeify’ real-life images of children. The move follows a sharp rise in AI-generated abuse material, with reports increasing nearly five-fold in 2024, according to the Internet Watch Foundation.
The government warns that predators are using AI to disguise their identities and blackmail children into further exploitation. New laws will criminalise the possession, creation, or distribution of AI tools designed for child abuse material, as well as so-called ‘paedophile manuals’ that provide instructions on using such technology. Websites hosting AI-generated child abuse content will also be targeted, and authorities will gain powers to unlock digital devices for inspection.
The measures will be included in the upcoming Crime and Policing Bill. Earlier this month, Britain also announced plans to outlaw AI-generated ‘deepfake’ pornography, making it illegal to create or share sexually explicit deepfakes. Officials say the new laws will help protect children from emerging online threats.
In today’s digital age, the rapid proliferation of information has both empowered and complicated the way societies communicate and stay informed. At its best, this interconnectedness fosters creativity, knowledge-sharing, and transparency. However, it also opens the floodgates for misinformation, disinformation, and the rise of deepfakes: tools that distort truth and challenge our ability to distinguish fact from fiction. These modern challenges are not confined to the fringes of the internet; they infiltrate mainstream platforms, influencing public opinion, political decisions, and cultural narratives on an unprecedented scale.
The emergence of alternative media platforms like podcasts, social media networks, and independent streaming channels has disrupted the traditional gatekeepers of information. While these platforms offer voices outside the mainstream a chance to be heard, they also often lack the editorial oversight of traditional media. This shift has created a complex media ecosystem where authenticity competes with sensationalism, and viral content can quickly overshadow fact-checking.
Content policy has become a battlefield, with platforms struggling to balance free expression and the need to curb harmful or deceptive narratives. The debate is further complicated by the increasing sophistication of deepfake technology and AI-generated content, which can fabricate convincing yet entirely false narratives. Whether it is a politician giving a speech they never delivered, a celebrity endorsing a product they have never used, or a manipulated video sparking social unrest, the stakes are high.
These challenges have sparked fierce debates among tech giants, policymakers, journalists, and users over who should bear responsibility for ensuring accurate and ethical content. Against this backdrop, recent high-profile incidents, such as Novak Djokovic’s response to perceived media bias, Joe Rogan’s defiance of traditional norms, and Elon Musk’s ‘Nazi salute’, highlight the tension between established media practices and the uncharted territory of modern communication channels. These case studies shed light on the shifting dynamics of information dissemination in an era where the lines between truth and fabrication are increasingly blurred.
Case study No. 1: The Djokovic incident, traditional media vs social media dynamics
The intersection of media and public discourse took centre stage during the 2025 Australian Open when tennis icon Novak Djokovic decided to boycott an on-court interview with Channel 9, the official broadcaster of the tournament. The decision, rooted in a dispute over comments made by one of its journalists, Tony Jones, highlighted the ongoing tension between traditional media’s content policies and the freedom of expression offered by modern social media platforms.
Novak Djokovic did not give the customary post-match interview after his win at the Australian Open. He offered a brief statement, signed some autographs, and was booed by sections of the crowd as he left the court.
On 19 January 2025, following his fourth-round victory over Jiri Lehecka at the Australian Open, Novak Djokovic, the 24-time Grand Slam champion, declined the customary on-court interview for Channel 9, a long-standing practice in tennis that directly connects players with fans. The refusal was not born of personal animosity towards the interviewer, Jim Courier, but was a response to remarks made by Channel 9 sports journalist Tony Jones. During a live broadcast, Jones had mocked Serbian fans chanting for Djokovic, calling the player ‘overrated’ and a ‘has-been’, and even suggested they ‘kick him out’, a phrase that resonated deeply given Djokovic’s deportation from Australia over vaccine mandate issues in 2022.
The response and social media amplification
In his post-match press conference, Djokovic clarified his stance, saying that he would not conduct interviews with Channel 9 until he received an apology from both Jones and the network for what he described as ‘insulting and offensive’ comments. The incident quickly escalated beyond the tennis courts when Djokovic took to X (formerly Twitter) to share a video explaining his actions, directly addressing his fans and the broader public.
The boycott was both a protest against the Australian broadcaster and a strategic use of social media to bypass traditional media channels, which are often seen as gatekeepers of information with their own biases and agendas. The response was immediate: the video went viral, drawing comments from various quarters, including from Elon Musk, the owner of X. Musk retweeted Djokovic’s video with a critique of ‘legacy media’, stating, ‘It’s way better just to talk to the public directly than go through the negativity filter of legacy media.’ Djokovic’s simple reply, ‘Indeed’, underscored his alignment with this view, further fuelling the discussion about media integrity and control.
The incident brings to light several issues concerning content policy in traditional media. Traditional media like Channel 9 operate under strict content policies where editorial decisions are made to balance entertainment and journalistic integrity. However, remarks like those from Jones can blur this line, leading to public backlash and accusations of bias or misinformation.
The response from Channel 9, an apology issued only after the public outcry, showcases the reactive nature of traditional media when managing content that might be deemed offensive or misleading, often after significant damage has already been done to public perception.
Unlike social media, where anyone can broadcast their viewpoint, traditional media has the infrastructure for fact-checking but can also be accused of pushing a narrative. The Djokovic case has raised questions about whether Jones’s comments were intended as humour or reflected a deeper bias against Djokovic or his nationality.
The role of social media
Social media platforms such as X enable figures like Djokovic to communicate directly with their audience, controlling their narrative without the mediation of traditional media. Direct public exposure can be empowering, but it can also bypass established journalistic checks and balances.
While this incident showcased the power of social media for positive storytelling, it also highlights the platform’s potential for misinformation. Without editorial oversight, messages can be amplified without context or correction, leading to public misinterpretation.
Case study No. 2: Alternative media and political discourse – The Joe Rogan experience
As traditional media grapples with issues of trust and relevance, alternative media platforms like podcasts have risen, offering new avenues for information dissemination. Joe Rogan’s podcast, ‘The Joe Rogan Experience’, has become a significant player in this space, influencing political discourse and public opinion, mainly through his interviews with high-profile figures such as Donald Trump and Kamala Harris.
Donald Trump’s podcast appearance
In 2024, Donald Trump’s appearance on Joe Rogan’s podcast was a pivotal moment, often credited with aiding his resurgence in the political arena and his eventual election as the 47th President of the USA. The podcast format permitted an extended, unscripted conversation, allowing Trump to discuss his policies, personality, and plans without the usual media constraints.
Unlike traditional media interviews, where questions and answers are often tightly controlled, Rogan’s podcast allowed Trump to engage with audiences more authentically, potentially influencing voters who felt alienated by mainstream media.
Critics argue that such platforms can spread misinformation due to the lack of immediate fact-checking. Yet, supporters laud the format for allowing a deeper understanding of the candidate’s views without the spin of journalists.
Kamala Harris’s conditional interview
By contrast, Kamala Harris approached the same platform markedly differently. She requested special conditions for her interview, including pre-approved questions, which Rogan declined. Harris then chose not to participate, highlighting a critical difference in how politicians view and interact with alternative media. Her decision reflects a broader strategy among some politicians to control their media exposure, preferring environments where the narrative can be shaped to their advantage, something that is often less feasible in an open podcast format.
Some might see her refusal as avoidance of tough, unfiltered questions, potentially impacting her public image as less transparent than figures like Trump, who embraced the platform.
Wladimir Klitschko’s interview on ‘The Joe Rogan Experience’
Adding another layer to this narrative, former Ukrainian boxer and political figure Wladimir Klitschko appeared on Rogan’s show, discussing his athletic career and the geopolitical issues affecting Ukraine. The interview showcased how alternative media like podcasts can give a voice to international figures, offering a different perspective on global issues that might be underrepresented or misrepresented in traditional media.
Rogan’s discussions often delve into subjects with educational value, providing listeners with nuanced insights into complex topics, something traditional news might cover in soundbites.
Analysing media dynamics
Content policy in alternative media: While Rogan’s podcast does not adhere to the same content policies as traditional media, it does have its own set of guidelines, which include a commitment to free speech and a responsibility not to platform dangerous misinformation.
Fact-checking and public accountability: Unlike traditional media, where fact-checking can be institutional, podcast listeners often take on this role, leading to community-driven corrections or discussions on platforms like Reddit or X.
The spread of disinformation: Like social media, podcasts can be vectors of misinformation if not moderated or if hosts fail to challenge or correct inaccuracies. However, Rogan’s approach often includes challenging guests, providing a counterbalance.
Impact on journalism: The rise of podcasts challenges traditional journalism by offering alternative narratives, sometimes at the cost of depth or accuracy but gaining in terms of directness and personal connection with the audience.
Case study No. 3: Elon Musk and the ‘Nazi salute’
The evolution of media consumption has been profound, with the rise of social media and alternative channels significantly altering a landscape traditionally dominated by legacy media. This evolution is vividly illustrated by a tweet from Elon Musk, in which he commented on the dynamics of media interaction:
‘It was astonishing how insanely hard legacy media tried to cancel me for saying “my heart goes out to you” and moving my hand from my heart to the audience. In the end, this deception will just be another nail in the coffin of legacy media.’ – Elon Musk, 24 January 2025, 10:22 UTC
Legacy media, encompassing print, television, and radio, has long been the public’s primary source of news and information. These platforms have established content policies to ensure journalistic integrity, fact-checking, and editorial oversight. However, as Musk’s tweet suggests, they are often perceived as inherently biased, sometimes acting as ‘negativity filters’ that skew public perception. This critique reflects a broader sentiment that legacy media can be slow to adapt, overly cautious, and sometimes accused of pushing an agenda, as seen in Musk’s experience of being ‘cancelled’ over a simple gesture interpreted out of context. The traditional model involves gatekeepers who decide what news reaches the audience, which can lead to a controlled narrative that might not always reflect the full spectrum of public discourse.
Modern social media: direct engagement
In contrast, social media platforms like X (formerly Twitter) democratise information dissemination by allowing direct communication from individuals to the public, bypassing traditional media gatekeepers. Musk’s use of X to address his audience directly illustrates this shift. Social media provides an unfiltered stage where public figures can share their stories, engage in real-time, and counteract what they see as biased reporting from legacy media. This directness enhances transparency and authenticity but also poses significant challenges. Without the same level of editorial oversight, misinformation can spread rapidly, as social media algorithms often prioritise engagement over accuracy, potentially amplifying falsehoods or sensational content.
Alternative media channels: a new frontier
Beyond social media, alternative channels like podcasts, independent streaming services, and blogs have emerged, offering even more diverse voices and perspectives. These platforms often operate with less stringent content policies, emphasising freedom of speech and direct audience interaction. For instance, podcasts like ‘The Joe Rogan Experience’ have become influential by hosting long-form discussions that delve deeper into topics than typical news segments. This format allows for nuanced conversations but lacks the immediate fact-checking mechanisms of traditional media, relying instead on the community or the host’s discretion to challenge or correct misinformation. The rise of alternative media has challenged the monopoly of legacy media, providing platforms where narratives can be shaped by content creators themselves, often leading to a richer, albeit sometimes less regulated, exchange of ideas.
Content policy and freedom of expression
The tension between content policy and freedom of expression is starkly highlighted in Musk’s tweet. Legacy media’s structured approach to content can sometimes suppress voices or misrepresent intentions, as Musk felt with his gesture. On the other hand, social media and alternative platforms offer broader freedom of expression, yet this freedom comes with the responsibility to manage content that might be misleading or harmful. The debate here revolves around how much control should be exerted over content to prevent harm while preserving the open nature of these platforms. Musk’s situation underscores the need for a balanced approach where the public can engage with authentic expressions without the distortion of ‘legacy media’s negativity filter’.
To summarise:
The juxtaposition of Djokovic’s media strategies and the political interviews on ‘The Joe Rogan Experience’ illustrates a shift in how information is consumed, controlled, and critiqued. Traditional media continues to wield considerable influence but is increasingly challenged by platforms offering less censorship, potentially more misinformation, and direct, unfiltered communication.
Elon Musk’s tweet is another vivid example of the ongoing battle between legacy media’s control over narrative and the liberating yet chaotic nature of modern social media and alternative channels. These platforms have reshaped the way information is consumed, offering both opportunities for direct, unmediated communication and challenges in maintaining the integrity of information.
As society continues to navigate this complex media landscape, the balance between ensuring factual accuracy, preventing misinformation, and respecting freedom of speech will remain a critical discussion point. The future of media lies in finding this equilibrium, where the benefits of both traditional oversight (perhaps through stringent regulatory measures) and modern openness can coexist to serve an informed and engaged public.
Mexico has objected to Google’s decision to rename the Gulf of Mexico as the Gulf of America for US users on Google Maps. President Claudia Sheinbaum confirmed on Wednesday that her government will send an official letter to the tech giant demanding clarification.
The name change follows an announcement by the US government that it had officially rebranded the body of water. In response, Google stated that its platform displays local official names when they differ across countries.
The move has sparked concerns in Mexico over sovereignty and historical recognition. With the government pressing for an explanation, the issue highlights the growing tension between technology firms and national identities in the digital space.
South Sudan has lifted a temporary ban on Facebook and TikTok, imposed following the spread of graphic videos allegedly showing the killings of South Sudanese nationals in Sudan. The National Communications Authority confirmed on 27 January that the disturbing content, which had sparked violent protests and retaliatory killings across South Sudan, has been removed from the platforms.
The videos, which documented ethnically targeted attacks in Sudan’s El Gezira state, had led to widespread outrage. Rights groups blamed the Sudanese army and its allies for the violence, while the army denounced the incidents as isolated violations. South Sudanese authorities urged for a balanced approach to addressing online incitement while protecting the public’s rights.
The unrest highlights the volatile relationship between social media and violence in the region. Authorities continue to call for action to address the root causes of such content while promoting accountability and safety.
The UK government has demanded urgent action from major social media platforms to remove violent and extremist content following the Southport killings. Home Secretary Yvette Cooper criticised the ease with which Axel Rudakubana, who murdered three children and attempted to kill ten others, accessed an al-Qaeda training manual and other violent material online. She described the availability of such content as “unacceptable” and called for immediate action.
Rudakubana, jailed last week for his crimes, had reportedly used techniques from the manual during the attack and watched graphic footage of a similar incident before carrying it out. While platforms like YouTube and TikTok are expected to comply with the UK’s Online Safety Act when it comes into force in March, Cooper argued that companies have a ‘moral responsibility’ to act now rather than waiting for legal enforcement.
The Southport attack has intensified scrutiny on gaps in counter-terrorism measures and the role of online content in fostering extremism. The government has announced a public inquiry into missed opportunities to intervene, revealing that Rudakubana had been referred to the Prevent programme multiple times. Cooper’s call for immediate action underscores the urgent need to prevent further tragedies linked to online extremism.
The European Commission has concluded its preliminary investigation into social media platform X and is poised to decide on a fine amounting to millions of euros, according to reports from Germany’s Handelsblatt newspaper. The probe’s findings and implications are expected to be revealed soon.
The investigation, conducted under the European Union’s strict digital regulations, signals the bloc’s commitment to ensuring compliance from major tech companies operating within Europe. Details about the specific breaches or concerns raised during the probe have not yet been disclosed.
The European Commission has not commented on the report. The decision to impose a substantial fine would mark a significant move in enforcing its Digital Services Act, aimed at holding tech platforms accountable.
The European Commission has invited major social media platforms, including Facebook, TikTok, and X, to participate in a “stress test” on 31 January to assess their efforts in combating disinformation ahead of Germany’s election next month. The test is part of the Digital Services Act (DSA), which requires companies to implement measures mitigating risks on their platforms. Similar tests were successfully conducted for the European Parliament elections last year.
EU spokesperson Thomas Regnier explained that the exercise would involve various scenarios to evaluate how platforms respond to potential challenges under the DSA. Senior compliance officers and specialists from companies such as Microsoft, LinkedIn, Google, Snap, and Meta have been invited to collaborate with German authorities in the closed-door session.
TikTok has confirmed its participation, while other platforms have yet to comment. The initiative underscores the European Union’s commitment to ensuring transparency and accountability from tech giants in safeguarding democratic processes during elections.
Germany’s interior minister, Nancy Faeser, has called on social media companies to take stronger action against disinformation ahead of the federal parliamentary election on 23 February. Faeser urged platforms like YouTube, Facebook, Instagram, X, and TikTok to label AI-manipulated videos, clearly identify political advertising, and ensure compliance with European laws. She also emphasised the need for platforms to report and remove criminal content swiftly, including death threats.
Faeser met with representatives of major tech firms to underline the importance of transparency in algorithms, warning against the risk of online radicalisation, particularly among young people. Her concerns come amidst growing fears of disinformation campaigns, possibly originating from Russia, that could influence the upcoming election. She reiterated that platforms must ensure they do not fuel societal division through unchecked content.
Calls for greater accountability in the tech industry are gaining momentum. At the World Economic Forum in Davos, Spanish Prime Minister Pedro Sánchez criticised social media owners for enabling algorithms that erode democracy and “poison society.” Faeser’s warnings highlight the growing international demand for stronger regulations on social media to safeguard democratic processes.
South Sudan has suspended access to social media platforms for at least 30 days following violent riots triggered by videos allegedly showing the killings of South Sudanese nationals in Sudan’s El Gezira state. The decision, announced by the National Communications Authority on Wednesday, aims to curb the spread of extreme content and prevent further unrest. Mobile operators MTN South Sudan and Zain confirmed that platforms like Facebook and TikTok would be inaccessible for up to 90 days.
The riots, which erupted in the capital, Juba, and other cities, led to the deaths of at least 16 Sudanese nationals. Angry youths looted shops, vandalised property, and burned homes belonging to Sudanese nationals, believing Sudan’s military and its allies were involved in the El Gezira killings. South Sudanese authorities have condemned the violence, urging calm and restraint.
The Sudanese army has also criticised what it described as ‘individual violations’ in El Gezira. The social media ban is part of a broader effort to restore order and prevent further acts of retaliation, as tensions remain high between the neighbouring nations.