Sweden and Denmark tackle gang recruitment on social media

Tech platforms are under increasing pressure from Sweden and Denmark to address the rising issue of gang recruitment ads targeting young Swedes. These ads, often found on platforms like Telegram and TikTok, are being used to recruit individuals for violent crimes across the Nordic region. Concerns have grown as Swedish gang violence has begun spilling over into neighbouring countries, with incidents of Swedish gang members being hired for violent acts in Denmark.

The justice ministers of both countries announced plans to summon tech companies to discuss their role in enabling these activities. They will demand that the platforms take greater responsibility and implement stronger measures to curb gang-related content. If the companies’ responses are deemed insufficient, further action may be considered to increase the pressure on them.

Danish Minister of Justice Peter Hummelgaard highlighted the challenges posed by encrypted services and social media, which are often used to facilitate criminal activities. Although current legal frameworks do not allow for geoblocking or shutting down such platforms, efforts are being made to explore new avenues to curb their misuse.

Sweden, which has the highest rate of gun violence in the European Union, recently announced plans to strengthen police cooperation across the Nordic region. The country is also tightening security at its border with Denmark to prevent further cross-border gang activity. The growing concern over gang-related violence underscores the urgent need for coordinated efforts between governments and tech platforms.

X, a lone warrior for freedom of speech?

Let’s start with a quote…

‘2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts,’ said Dr Jovan Kurbalija in an interview at the beginning of the year, predicting what 2024 would hold.

Judging by developments in the social media realm, 2024 does indeed appear to be a year of change, especially in the legal field, where disputes over newly minted digital policies, and their implementation, remain firmly in the ‘ongoing’ phase. Dr Kurbalija’s prediction connects us to some of the main topics Diplo and its Digital Watch Observatory are following, such as content moderation and freedom of speech in the social media world.

This dichotomy between change and continuity could easily make us think of how, in the dimly lit corridors of power, where influence and control intertwine like the strands of a spider’s web, social media has become a double-edged sword. On the one hand, platforms like 𝕏 stand as bastions of free speech, allowing voices to be heard that might otherwise be silenced. On the other hand, they are powerful instruments in the hands of those who control them, with the potential to shape public discourse, influence public opinion, and even ignite conflicts. That is why the scrutiny 𝕏 faces for hosting extremist content raises essential questions about whether it is merely a censorship-free network or a tool wielded by its enigmatic owner, Elon Musk, to further his own agenda.

The story begins with the digital revolution, when the internet was hailed as the great equaliser, giving everyone a voice. Social media platforms emerged as the town squares of the 21st century, where ideas could be exchanged freely, unfiltered by traditional gatekeepers like governments or mainstream media. Under Musk’s ownership, 𝕏 has taken this principle to its extreme, often resisting calls for tighter content moderation in the name of protecting free speech. But as with all freedoms, this one comes at a price.

The platform’s hands-off approach to content moderation has led to widespread concerns about its role in amplifying extremist content. The issue here is not just about spreading harmful material; it touches on the core of digital governance. Governments around the world are increasingly alarmed by the potential for social media platforms to become breeding grounds for radicalisation and violence. The recent scrutiny of 𝕏 is just the latest chapter in an ongoing struggle between the need for free expression and the imperative to maintain public safety.

The balance between these two forces is incredibly delicate in countries like Türkiye, for example, where the government has a history of cracking down on dissent. The Turkish government’s decision to block Instagram for nine days in August 2024, after the platform failed to comply with local laws and sensitivities, is a stark reminder of the power dynamics at play. In this context, 𝕏’s refusal to bow to similar pressures can be seen as both a defiant stand for free speech and a dangerous gamble that could have far-reaching consequences.

But the story does not end there. The influence of social media extends far beyond any one country’s borders. In the UK, the recent riots have highlighted the role of platforms like 𝕏 and Meta in both facilitating and exacerbating social unrest. While Meta has taken a more proactive approach to content moderation, removing inflammatory material and attempting to prevent the spread of misinformation, 𝕏’s more relaxed policies have allowed a far wider range of content to circulate, including not just legitimate protest organising but also harmful rhetoric that has fuelled violence and division.

The contrast between the two platforms is stark. Meta, with its more stringent content policies, has been criticised for stifling free speech and suppressing dissenting voices. Yet, in the context of the British riots, its approach may have helped prevent the situation from escalating further. 𝕏, on the other hand, has been lauded for its commitment to free expression, but this freedom comes at a price. The platform’s role in the riots has drawn sharp criticism, with some accusing it of enabling the very violence it claims to oppose. Government officials have vowed action against tech platforms, even though Britain’s Online Safety Act will not be fully in force until next year. Meanwhile, the EU’s Digital Services Act, which no longer applies to Britain after Brexit, is already in effect and will reportedly serve as a backstop in similar disputes.

The British riots also serve as a cautionary tale about the power of social media to shape public discourse. In an age where information spreads at lightning speed, the ability of platforms like 𝕏 and Meta to influence events in real time is unprecedented. Such a lever of power is not just a threat to governments but also a potent instrument for achieving political ends. For Musk, acquiring 𝕏 represents not only a business opportunity but also a chance to shape global discourse in ways that align with his vision of the future.

Musk did not hesitate to accuse the European Commission of attempting to pull off what he describes as an ‘illegal secret deal’ with 𝕏. In one of his posts, he claimed that the EU, wielding stringent new regulations aimed at curbing online extremist content and misinformation, had tried to coax 𝕏 into quietly censoring content to sidestep hefty fines. Other tech giants, according to Musk, nodded in agreement, but not 𝕏. The platform stood its ground, placing its unwavering belief in free speech above all else.

While the European Commission fired back, accusing 𝕏 of violating parts of the EU’s Digital Services Act, Musk’s bold stance has ignited a fiery debate. It is not just about rules and fines anymore; it is a battle over the very soul of digital discourse. How far should governmental oversight go? And at what point does it start to choke the free exchange of ideas? Musk’s narrative paints 𝕏 as a lone warrior holding the line against mounting pressure, and in doing so forces us to confront the delicate dance between regulation and the freedom to speak openly in today’s digital world.

The cherry on top, in this case, is Musk’s close contact with, and support for, Donald Trump, a contender for the US presidency, which raises additional doubts about the concentration of power in the hands of social media owners, tech giants, and their allies. In an interview with Trump, Musk openly endorsed his candidacy, discussing topics including regulatory policy and the judicial system, thereby fuelling speculation that his platform 𝕏 is a powerful oligarchic lever of power.

At this point, it is already crystal clear that governments are grappling with how to regulate these platforms, and the choices they face are difficult. On the one hand, there is a clear need for measures that provide greater oversight, preventing the spread of extremist content and protecting public safety. On the other hand, too much regulation risks stifling the very freedoms that social media platforms were created to protect. This delicate dichotomy is at the heart of the ongoing debate about the role of tech giants in a modern, digital society.

The story of 𝕏 and its role in hosting extremist content is about more than the platform itself. It is about the power of technology to shape our world, for better or worse. As the digital landscape continues to evolve, the questions raised by 𝕏’s approach to content moderation will only become more urgent. And in the corridors of power, where decisions that shape our future are made, the answers to those questions will determine the fate of the internet itself.

Anthropic faces lawsuit for copyright infringement

Three authors have filed a class-action lawsuit against the AI company Anthropic in a California federal court, accusing the firm of illegally using their books and hundreds of thousands of others to train its AI chatbot, Claude. The lawsuit, initiated by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claims that Anthropic utilised pirated versions of their works to develop the chatbot’s ability to respond to human prompts.

Anthropic, which has received financial backing from major companies like Amazon and Google, acknowledged the lawsuit but declined to comment further due to the ongoing litigation. The legal action against Anthropic is part of a broader trend, with other content creators, including visual artists and news outlets, also suing tech companies over the use of their copyrighted material to train AI models.

This is not the first time Anthropic has faced such accusations. Music publishers previously sued the company for allegedly misusing copyrighted song lyrics to train Claude. The authors involved in the current case argue that Anthropic has built a multibillion-dollar business by exploiting their intellectual property without permission.

The lawsuit demands financial compensation for the authors and a court order permanently barring Anthropic from using their work unlawfully. As the case progresses, it highlights the growing tension between content creators and AI companies over the use of copyrighted material in developing AI technologies.

OpenAI and Condé Nast team up for AI-powered news delivery

OpenAI, led by Sam Altman, has entered a multi-year partnership with Condé Nast to integrate content from brands like Vogue and The New Yorker into its AI products, including ChatGPT and the newly launched SearchGPT. Although the deal’s financial terms remain undisclosed, the collaboration follows similar agreements with prominent media outlets such as Time magazine, Financial Times, and Le Monde.

These partnerships are crucial for training AI models but have sparked controversy. Some media organisations, like The New York Times, have taken legal action against OpenAI, citing copyright concerns over the use of their content. OpenAI’s COO, Brad Lightcap, emphasised the company’s commitment to maintaining accuracy and integrity in news delivery as AI becomes increasingly integral to this process.

Roger Lynch, CEO of Condé Nast, highlighted the financial pressures news and digital media have faced in recent years, attributing them to tech companies undermining publishers’ ability to monetise content. He sees the partnership with OpenAI as a step toward reclaiming some of that lost revenue.

OpenAI’s introduction of SearchGPT in July, a search engine with real-time internet access, marks a significant move into territory traditionally dominated by Google. The company is actively collaborating with its news partners to gather feedback and refine the performance of SearchGPT, aiming to enhance its role in the evolving landscape of digital news consumption.

Video game actors fight for job security amid AI’s impact on the industry

In the world of video game development, the rise of AI has sparked concern among performers who fear it could threaten their jobs. Motion capture actors like Noshir Dalal, who perform the physical movements that bring game characters to life, worry that AI could be used to replicate their performances without their consent, potentially reducing job opportunities and diminishing the value of their work.

Dalal, who has played characters in popular video games such as ‘Star Wars Jedi: Survivor’, highlights the physical toll and skill that motion capture work requires. He argues that AI could allow studios to bypass hiring actors for new projects by reusing data from past performances. This concern is central to the ongoing strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents video game performers and other media professionals. The union is demanding stronger protections against unregulated AI use in the industry.

Why does this matter?

AI’s ability to generate new animations and voices based on existing data is at the heart of the issue. While studios argue that they have offered meaningful AI protections, performers remain sceptical. They worry that the use of AI could lead to ethical dilemmas, such as their likenesses being used in ways they do not endorse, as seen in the controversy surrounding game modifications that use AI to create inappropriate content.

Video game companies have offered wage increases and other benefits as negotiations continue, but the debate over AI protections remains unresolved. Performers like Dalal argue that, without strict controls, AI could strip away the artistry and individuality that actors bring to their roles, leaving them vulnerable to exploitation. The outcome of this dispute could set a precedent for how AI is regulated in the entertainment industry, affecting the future of video game development and beyond.

OpenAI cracks down on Iranian influence campaign

OpenAI has intensified its efforts to prevent the misuse of AI, especially in light of the numerous elections scheduled for 2024. The company recently identified and disabled a cluster of ChatGPT accounts linked to an Iranian covert influence operation named Storm-2035. The operation aimed to manipulate public opinion during the US presidential election using AI-generated content on social media and websites, but it failed to gain significant engagement or reach a broad audience.

The operation generated articles and social media comments on various topics, including US politics, global events, and the conflict in Gaza. The content was published on websites posing as news outlets and shared on platforms like X and Instagram. Despite their efforts, the operation saw minimal interaction, with most posts receiving little to no attention.

OpenAI’s investigation into the operation, bolstered by information from Microsoft, revealed that the influence campaign was largely ineffective, scoring low on a scale that assesses the impact of covert operations. The company remains vigilant against such threats and has shared its findings with government and industry stakeholders.

OpenAI is committed to collaborating with industry, civil society, and government to counter these influence operations. The company emphasises the importance of transparency and continues to monitor and disrupt any attempts to exploit its AI technologies for manipulative purposes.

Trump shares fake AI-generated images of Swift fans

Donald Trump has shared AI-generated images on social media, showing Taylor Swift fans endorsing his presidential campaign. The images, which are clearly fake, have sparked controversy, particularly since Swift has not publicly supported any candidates in the 2024 US election.

Trump, however, embraced the images, responding with ‘I accept!’ on his platform. The posts were also shared by an account that reposts his content on X (formerly Twitter). Despite their obvious fabrication, the posts have drawn significant attention online.

Taylor Swift, who endorsed Joe Biden in the last election, has not commented on these fake images. Her history with AI-generated content has been fraught, including deepfake videos that once led X to temporarily block searches for her name.

Swift’s potential legal actions against AI content providers remain a topic of interest. However, the source of these recent fake posts remains unknown, raising concerns about the use of AI in political propaganda.

Misinformation fuels boycotts of major US companies

Amid the heated political landscape in the United States, major companies like Google and Netflix are facing calls for boycotts due to alleged political affiliations. These online campaigns, mainly driven by false information, suggest that these companies support Kamala Harris in the upcoming election. However, these claims are baseless and have been debunked by fact-checkers.

The boycott calls have gained traction on platforms like X, owned by Elon Musk, who has shown support for Donald Trump. Fake accounts on X have spread these false narratives widely, leading to calls for users to cancel their Netflix subscriptions and avoid Google’s services. Despite Netflix’s clarification that any donations were personal and not connected to the company, the misinformation has continued to spread, illustrating the vulnerability of brands in today’s politically charged environment.

The disinformation campaigns highlight how quickly false information can manipulate public opinion and consumer behaviour, especially in the lead-up to an election. Musk’s influence on X and his criticisms of companies like Google have fueled these misleading narratives.

Surveys indicate that many consumers prefer companies to stay neutral in political matters, yet the polarised environment makes this difficult. The controversy has also led to a decline in advertising on X as brands seek to distance themselves from platforms that enable disinformation.

The impact of these boycotts and the broader disinformation campaigns underscores the challenges companies face in maintaining their reputation and trust in an increasingly divided society. As the election approaches, the risk of such campaigns influencing public opinion and consumer actions remains high.

Meta disrupts Russia’s AI-driven misinformation campaigns

According to a Meta security report, Russia’s use of generative AI in online deception campaigns has so far proved largely ineffective. Meta, the parent company of Facebook and Instagram, reported that while AI-powered tactics offer malicious actors some gains in productivity and content generation, they have yet to significantly advance these deception efforts. Despite growing concerns about generative AI being used to manipulate elections, Meta has successfully disrupted such influence operations.

The report highlights that Russia remains a leading source of ‘coordinated inauthentic behaviour’ on social media, particularly since its invasion of Ukraine in 2022. These operations have primarily targeted Ukraine and its allies, with expectations that as the US election nears, Russia-backed campaigns will increasingly attack candidates who support Ukraine. Meta’s approach to detecting these campaigns focuses on account behaviour rather than content alone, as influence operations often span multiple online platforms.

Meta has observed that posts on X are sometimes used to bolster fabricated content. While Meta shares its findings with other internet companies, it notes that X has significantly reduced its content moderation efforts, making it a haven for disinformation. Researchers have also raised concerns about X, now owned by Elon Musk, being a platform for political misinformation. Musk, who supports Donald Trump, has been criticised for using his influence on the platform to spread falsehoods, including sharing an AI-generated deepfake video of Vice President Kamala Harris.

X shuts down operations in Brazil over censorship dispute

Elon Musk’s social media platform X announced last Saturday that it would cease operations in Brazil immediately, citing ‘censorship orders’ from Brazilian judge Alexandre de Moraes. According to X, de Moraes allegedly threatened to arrest one of the company’s legal representatives in Brazil if the company did not comply with orders to remove certain content from the platform. X shared images of a document purportedly signed by the judge, stating that the representative, Rachel Nova Conceicao, would face a daily fine and possible arrest if the platform did not comply.

In response, X decided to close its operations in Brazil to protect its staff, although the service remains available to Brazilian users. The Brazilian Supreme Court, where de Moraes serves, declined to comment on the authenticity of the document shared by X.

Musk’s decision follows earlier orders by de Moraes to block specific accounts on X as part of an investigation into ‘digital militias’ accused of spreading fake news during former President Jair Bolsonaro’s government. Musk criticised de Moraes’ decisions, calling them ‘unconstitutional,’ and X initially resisted these rulings.

However, after Musk’s objections, X eventually assured Brazil’s Supreme Court that it would comply with the legal orders, although technical issues reportedly allowed some blocked users to remain active. Musk has since condemned de Moraes as a ‘disgrace to justice’ and rejected the judge’s alleged ‘secret censorship’ demands.