Halliburton, a major US oilfield services company, experienced a cyberattack on Wednesday, affecting certain systems and disrupting business operations at its north Houston campus and global networks. The company is working with external experts to resolve the issue and has advised some staff not to connect to internal networks as they investigate the cause and impact of the attack.
Cyberattacks have become a significant concern for the energy sector following high-profile incidents like the 2021 Colonial Pipeline ransomware attack that led to fuel shortages and price spikes. Although details about the Halliburton attack remain unclear, ransomware attacks typically involve hackers encrypting data and demanding payment for its release, with threats to leak confidential information if their demands are not met.
Halliburton, one of the largest oilfield services firms globally, is now the latest in a series of major US companies targeted by cybercriminals, raising further alarm in an industry already on high alert for such threats.
Switzerland has announced its decision to join the European Cyber Security Organisation (ECSO) to bolster its defences against cyber threats. By becoming a member, Switzerland will gain access to valuable information on technological advancements and be able to collaborate with a network of experts across Europe, enhancing its ability to combat online attacks.
The ECSO, which includes 300 members such as companies, universities, research centres, and European governments, provides a platform for sharing expertise and resources in cybersecurity. Switzerland’s move comes in response to a notable rise in cyberattacks and disinformation campaigns earlier this year, particularly surrounding a summit focused on establishing peace in Ukraine.
This membership reflects Switzerland’s proactive approach to strengthening its cybersecurity infrastructure, ensuring it remains resilient despite evolving digital threats.
‘2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts,’ Dr Jovan Kurbalija predicted in an interview at the beginning of the year.
Judging by developments in the social media realm, 2024 does indeed appear to be a year of change, especially in the legal field, where disputes over, and implementation of, newly adopted digital policies remain firmly in the ‘ongoing’ phase. Dr Kurbalija’s prediction connects us to some of the main topics Diplo and its Digital Watch Observatory are following, such as content moderation and freedom of speech in the social media world.
This dichotomy could easily make us think of how, in the dimly lit corridors of power, where influence and control intertwine like the strands of a spider’s web, social media has become a double-edged sword. On the one hand, platforms like 𝕏 stand as bastions of free speech, allowing voices to be heard that might otherwise be silenced. On the other, they are powerful instruments in the hands of those who control them, with the potential to shape public discourse, influence public opinion, and even ignite conflicts. That is why the scrutiny 𝕏 faces for hosting extremist content raises essential questions about whether it is merely a censorship-free network or a tool wielded by its enigmatic owner, Elon Musk, to further his own agenda.
The story begins with the digital revolution, when the internet was hailed as the great equaliser, giving everyone a voice. Social media platforms emerged as the town squares of the 21st century, where ideas could be exchanged freely, unfiltered by traditional gatekeepers like governments or mainstream media. Under Musk’s ownership, 𝕏 has taken this principle to its extreme, often resisting calls for tighter content moderation to protect free speech. But as with all freedoms, this one also comes with a price.
The platform’s hands-off approach to content moderation has led to widespread concerns about its role in amplifying extremist content. The issue here is not just about spreading harmful material; it touches on the core of digital governance. Governments around the world are increasingly alarmed by the potential for social media platforms to become breeding grounds for radicalisation and violence. The recent scrutiny of 𝕏 is just the latest chapter in an ongoing struggle between the need for free expression and the imperative to maintain public safety.
The balance between these two forces is especially delicate in countries like Türkiye, where the government has a history of cracking down on dissent. The Turkish government’s decision to block Instagram for nine days in August 2024, after the platform failed to comply with local laws and sensitivities, is a stark reminder of the power dynamics at play. In this context, 𝕏’s refusal to bow to similar pressures can be seen as both a defiant stand for free speech and a dangerous gamble that could have far-reaching consequences.
But the story does not end there. The influence of social media extends far beyond any one country’s borders. In the UK, the recent riots have highlighted the role of platforms like 𝕏 and Meta in both facilitating and exacerbating social unrest. While Meta has taken a more proactive approach to content moderation, removing inflammatory material and attempting to prevent the spread of misinformation, 𝕏’s more relaxed policies have allowed a far wider range of content to circulate, encompassing not just legitimate protest organising but also harmful rhetoric that has fuelled violence and division.
The contrast between the two platforms is stark. Meta, with its more stringent content policies, has been criticised for stifling free speech and suppressing dissenting voices. Yet, in the context of the British riots, its approach may have helped prevent the situation from escalating further. 𝕏, on the other hand, has been lauded for its commitment to free expression, but this freedom comes at a price. The platform’s role in the riots has drawn sharp criticism, with some accusing it of enabling the very violence it claims to oppose. Government officials have vowed action against tech platforms, even though Britain’s Online Safety Act will not be fully in force until next year. Meanwhile, the EU’s Digital Services Act, which no longer applies to Britain, is already in effect and will allegedly serve as a backstop in similar disputes.
The British riots also serve as a cautionary tale about the power of social media to shape public discourse. In an age where information spreads at lightning speed, the ability of platforms like 𝕏 and Meta to influence events in real time is unprecedented. This lever of power is not just a threat to governments but also a potent tool for achieving political ends. For Musk, acquiring 𝕏 represents not only a business opportunity but also a chance to shape global discourse in ways that align with his vision of the future.
Musk did not hesitate to accuse the European Commission of attempting to pull off what he describes as an ‘illegal secret deal’ with 𝕏. In one of his posts, he claimed that the EU, armed with stringent new regulations aimed at curbing online extremist content and misinformation, had tried to coax 𝕏 into quietly censoring content to sidestep hefty fines. Other tech giants, according to Musk, nodded in agreement, but not 𝕏. The platform stood its ground, placing its unwavering belief in free speech above all else.
The European Commission offered 𝕏 an illegal secret deal: if we quietly censored speech without telling anyone, they would not fine us.
While the European Commission fired back, accusing 𝕏 of violating parts of the EU’s Digital Services Act, Musk’s bold stance has ignited a fiery debate. And here, it is not just about rules and fines anymore—it is a battle over the very soul of digital discourse. How far should governmental oversight go? And at what point does it start to choke the free exchange of ideas? Musk’s narrative paints 𝕏 as a lone warrior, holding the line against mounting pressure, and in doing so, forces us to confront the delicate dance between regulation and the freedom to speak openly in today’s digital world.
The cherry on top, in this case, is Musk’s close contact with, and support for, Donald Trump, a contender for the US presidency, which raises further doubts about the concentration of power in the hands of social media owners, tech giants, and their allies. In an interview with Trump, Musk openly endorsed his candidacy, discussing, among other topics, regulatory policy and the judicial system, thus fuelling speculation about his platform 𝕏 serving as a powerful oligarchic lever of power.
At this point, it is crystal clear that governments are grappling with how to regulate these platforms and face difficult choices in doing so. On the one hand, there is a clear need for greater oversight to prevent the spread of extremist content and protect public safety. On the other, too much regulation risks stifling the very freedoms that social media platforms were created to protect. This delicate dichotomy is at the heart of the ongoing debate about the role of tech giants in a modern, digital society.
The story of 𝕏 and its role in hosting extremist content is about more than the platform itself. It is about the power of technology to shape our world, for better or worse. As the digital landscape continues to evolve, the questions raised by 𝕏’s approach to content moderation will only become more urgent. And in the corridors of power, where decisions that shape our future are made, the answers to those questions will determine the fate of the internet itself.
OpenAI has intensified its efforts to prevent the misuse of AI, especially in light of the numerous elections scheduled for 2024. The company recently identified and deactivated a cluster of ChatGPT accounts linked to an Iranian covert influence operation named Storm-2035. The operation aimed to manipulate public opinion during the US presidential election using AI-generated content on social media and websites, but failed to gain significant engagement or reach a broad audience.
According to the latest Reuters reporting:
The US has accused Iran of launching cyber and influence operations aimed at the campaigns of US presidential candidates and sowing political discord among the American public. A joint statement from the FBI, the Office of the Director of National Intelligence, and the Cybersecurity and Infrastructure Security Agency highlighted increasingly aggressive Iranian activity during the election cycle. The statement follows earlier allegations from Donald Trump’s campaign regarding an Iranian hack on one of its websites. Iran has denied the accusations, describing them as ‘unsubstantiated and devoid of any standing.’ The US intelligence community remains confident in its assessment, citing attempts to access individuals within the presidential campaigns and activities intended to influence the election process.
The operation generated articles and social media comments on various topics, including US politics, global events, and the conflict in Gaza. The content was published on websites posing as news outlets and shared on platforms like X and Instagram. Despite their efforts, the operation saw minimal interaction, with most posts receiving little to no attention.
OpenAI’s investigation into this operation was bolstered by information from Microsoft, and it revealed that the influence campaign was largely ineffective, scoring low on a scale assessing the impact of covert operations. The company remains vigilant against such threats and has shared its findings with government and industry stakeholders.
OpenAI is committed to collaborating with industry, civil society, and government to counter these influence operations. The company emphasises the importance of transparency and continues to monitor and disrupt any attempts to exploit its AI technologies for manipulative purposes.
Japan’s Defense Ministry is preparing to launch a new research institute in Tokyo this October to develop cutting-edge defence technologies with the potential to transform future warfare. The institute, which will be housed at the Ebisu Garden Place commercial complex, is inspired by the US Defense Advanced Research Projects Agency (DARPA) and will collaborate closely with the private sector. With a team of around 100 personnel, half of whom will be experts from outside the ministry, the institute will focus on key areas like AI, robotics, and advanced particle research.
The new institute, provisionally named the Defense Innovation Technology Institute, aims to drive ‘breakthrough research’ that can deliver deployable defence technologies within three years by building on existing ones. Projects may include the development of autonomous uncrewed vehicles and advanced submarine detection methods. Additionally, the institute will serve as a think tank, monitoring global trends in cutting-edge technologies and managing subsidies for dual-use technologies that have applications in both defence and civilian sectors.
The initiative is part of Japan’s broader National Defense Strategy, which emphasises finding and developing multi-use technologies to bolster the country’s defence capabilities. The creation of the institute, backed by a 21.7 billion yen budget for the current fiscal year, marks a significant step in Japan’s largest defence buildup since World War II, driven by concerns over growing influence from China and nuclear and missile threats from North Korea.
The Turkish National Intelligence Organization (MIT), in collaboration with the Turkish Gendarmerie General Command and the National Cyber Incident Response Center (USOM), has dismantled a global cyber espionage network responsible for stealing personal data from thousands of individuals worldwide, including in Turkey. The operation, led by the Ankara Chief Public Prosecutor’s Office, resulted in the arrest of 11 suspects.
According to MIT, the network had international ties and was sharing stolen data with various entities, including terrorist organisations. The network had been under long-term surveillance, during which MIT discovered that the stolen information was being used to support terrorist activities.
As part of the operation, several websites associated with the network were shut down, and the seized data is undergoing thorough examination as the investigation continues. MIT has announced plans to expand its cyber operations to protect sensitive personal data and to investigate the network’s international connections further.
Bluesky, a social media platform, has reported a significant increase in signups in the United Kingdom recently as users look for alternatives to Elon Musk’s X. The increase follows Musk’s controversial remarks on ongoing riots in the UK, which have driven users, including several Members of Parliament, to explore other platforms. The company announced that it had experienced a 60% rise in activity from UK accounts.
Musk has faced criticism for inflaming tensions after riots in Britain were sparked by misinformation surrounding the murder of three girls in northern England. The Tesla CEO allegedly used X to disseminate misleading information to his vast audience, including a post claiming that civil war in Britain was ‘inevitable.’ The case has prompted Prime Minister Keir Starmer to respond and increased calls for the government to accelerate the implementation of online content regulations.
Bluesky highlighted that the UK had the most signups of any country for five of the last seven days. Once supported by Twitter co-founder Jack Dorsey, the platform is among the many apps vying to replace Twitter after Musk’s turbulent takeover in late 2022.
As of July, Bluesky’s monthly active user base was approximately 688,568, small compared to X’s 76.9 million users, according to Similarweb, a digital market intelligence firm. Despite the platform’s smaller size, the recent surge in UK signups points to a growing interest in alternative social media platforms.
The British government is considering revisions to the Online Safety Act in response to a recent wave of racist riots allegedly fueled by misinformation spread online. The act, passed in October but not yet enforced, currently allows the government to fine social media companies up to 10% of their global turnover if they fail to remove illegal content, such as incitements to violence or hate speech. However, proposed changes could extend these penalties to platforms that permit ‘legal but harmful’ content, like misinformation, to thrive.
Britain’s Labour government inherited the act from the Conservatives, who had spent considerable time adjusting the bill to balance free speech with the need to curb online harms. A recent YouGov poll found that 66% of adults believe social media companies should be held accountable for posts inciting criminal behaviour, and 70% feel these companies are not sufficiently regulated. Additionally, 71% of respondents criticised social media platforms for not doing enough to combat misinformation during the riots.
In response to these concerns, Cabinet Office Minister Nick Thomas-Symonds announced that the government is prepared to revisit the act’s framework to ensure its effectiveness. London Mayor Sadiq Khan also voiced his belief that the law is not ‘fit for purpose’ and called for urgent amendments in light of the recent unrest.
Why does it matter?
The riots, which spread across Britain last week, were triggered by false online claims that the perpetrator of a 29 July knife attack, which killed three young girls, was a Muslim migrant. As tensions escalated, X owner Elon Musk contributed to the chaos by sharing misleading information with his large following, including a statement suggesting that civil war in Britain was ‘inevitable.’ Prime Minister Keir Starmer’s spokesperson condemned these comments, stating there was ‘no justification’ for such rhetoric.
Concerns are mounting over content shared by the Palestinian militant group Hamas on X, the social media platform owned by Elon Musk. The Global Internet Forum to Counter Terrorism (GIFCT), which includes major companies like Facebook, Microsoft, and YouTube, is reportedly worried about X’s continued membership and position on its board, fearing it undermines the group’s credibility.
The Sunday Times reported that X has become the easiest platform on which to find Hamas propaganda videos, along with content from other UK-proscribed terrorist groups such as Hezbollah and Palestinian Islamic Jihad. Researchers were able to locate such videos within minutes on X.
Why does it matter?
These concerns come as X faces criticism for reducing its content moderation capabilities. The GIFCT’s independent advisory committee expressed alarm in its 2023 report, citing significant reductions in online trust and safety measures on specific platforms, implicitly pointing to X.
Elon Musk’s approach to turning X into a ‘free speech’ platform has included reinstating previously banned extremists, allowing paid verification, and cutting much of the moderation team. The shift has raised fears about X’s ability to manage extremist content effectively. Despite being a founding member of GIFCT, X has reportedly yet to meet its financial obligations to the group.
Additionally, the criticism Musk has faced in Great Britain points to a complex and still unresolved governance question: should policy prioritise freedom of speech, or subject big tech social media owners to greater scrutiny in the name of community safety?
The Great Britain scenario
Elon Musk is under fire for his social media posts, which many believe have exacerbated the ongoing riots in Britain. Musk, known for his provocative online presence, has shared riot footage on his platform, X, and made controversial remarks, including predicting a ‘civil war’ and criticising Prime Minister Keir Starmer and the British government for prioritising speech policing over community safety.
The unrest began after a stabbing at a Taylor Swift-themed dance class in Southport, England, resulted in the deaths of three young girls. False information then spread online suggesting the attacker was an illegal Muslim immigrant. In fact, the suspect, Axel Rudakubana, is a 17-year-old born in Cardiff, Wales, whose religious affiliation is unknown, though his parents are from predominantly Christian Rwanda.
Despite the facts, anti-immigrant protests have erupted in at least 15 cities across Britain, leading to the most significant civil disorder since 2011. Rioters have targeted mosques and hotels housing asylum seekers, with much violence directed at the police.
Prime Minister Starmer has criticised social media companies for allowing violent disinformation to spread. He specifically called out Musk for reinstating banned far-right figures, including activist Tommy Robinson. Technology Secretary Peter Kyle has met with representatives from major tech companies like TikTok, Meta, Google, and X to stress their duty to curb the spread of harmful misinformation.
Publicly, Musk has argued that the government should focus on its duties, mocking Starmer and questioning the UK’s approach to policing speech.
Home Secretary Yvette Cooper has stated that social media has amplified disinformation, promising government action against tech giants and online criminality. However, Britain’s Online Safety Act, which requires platforms to address illegal content, will not be fully in force until next year. Meanwhile, the EU’s Digital Services Act, which no longer applies to Britain, is already in effect.