UK considers revising Online Safety Act amid riots

The British government is considering revisions to the Online Safety Act in response to a recent wave of racist riots allegedly fuelled by misinformation spread online. The act, passed in October 2023 but not yet enforced, currently allows the regulator, Ofcom, to fine social media companies up to 10% of their global turnover if they fail to remove illegal content, such as incitement to violence or hate speech. However, proposed changes could extend these penalties to platforms that allow ‘legal but harmful’ content, such as misinformation, to thrive.

Britain’s Labour government inherited the act from the Conservatives, who had spent considerable time adjusting the bill to balance free speech with the need to curb online harms. A recent YouGov poll found that 66% of adults believe social media companies should be held accountable for posts inciting criminal behaviour, and 70% feel these companies are not sufficiently regulated. Additionally, 71% of respondents criticised social media platforms for not doing enough to combat misinformation during the riots.

In response to these concerns, Cabinet Office Minister Nick Thomas-Symonds announced that the government is prepared to revisit the act’s framework to ensure its effectiveness. London Mayor Sadiq Khan also voiced his belief that the law is not ‘fit for purpose’ and called for urgent amendments in light of the recent unrest.

Why does it matter?

The riots, which spread across Britain last week, were triggered by false online claims that the perpetrator of a 29 July knife attack, which killed three young girls, was a Muslim migrant. As tensions escalated, X owner Elon Musk contributed to the chaos by sharing misleading information with his large following, including a statement suggesting that civil war in Britain was ‘inevitable.’ Prime Minister Keir Starmer’s spokesperson condemned these comments, stating there was ‘no justification’ for such rhetoric.

X faces scrutiny for hosting extremist content

Concerns are mounting over content shared by the Palestinian militant group Hamas on X, the social media platform owned by Elon Musk. The Global Internet Forum to Counter Terrorism (GIFCT), which includes major companies like Facebook, Microsoft, and YouTube, is reportedly worried about X’s continued membership and position on its board, fearing it undermines the group’s credibility.

The Sunday Times reported that X has become the most accessible platform to find Hamas propaganda videos, along with content from other UK-proscribed terrorist groups like Hezbollah and Palestinian Islamic Jihad. Researchers were able to locate such videos within minutes on X.

Why does it matter?

These concerns come as X faces criticism for reducing its content moderation capabilities. The GIFCT’s independent advisory committee expressed alarm in its 2023 report, citing significant reductions in online trust and safety measures on specific platforms, implicitly pointing to X.

Elon Musk’s approach to turning X into a ‘free speech’ platform has included reinstating previously banned extremists, introducing paid verification, and cutting much of the moderation team. The shift has raised fears about X’s ability to manage extremist content effectively. Despite being a founding member of GIFCT, X has yet to meet its financial obligations to the group.

Additionally, the criticism Musk has faced in Britain points to a complex and still unresolved governance question: should policy prioritise freedom of speech, or subject Big Tech social media owners to greater scrutiny in the interest of community safety?

OpenAI appoints AI safety expert as director

OpenAI, one of the largest AI research organisations, has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.

Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.

The safety and security committee, formed in May shortly after Ilya Sutskever’s departure, includes Kolter alongside CEO Sam Altman and other directors, and underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.

In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.

UK riots escalate as Elon Musk stirs tensions with conspiracy theory

Tesla CEO Elon Musk has drawn criticism after labelling UK Prime Minister Keir Starmer ‘#TwoTierKier’ and promoting a far-right conspiracy theory claiming that white rioters are treated more harshly by the police than minorities. His comments have coincided with rising tensions and violent protests across the UK, where asylum centres are being boarded up as a precaution. Amid the unrest, 6,000 police officers are on standby to protect dozens of targeted locations, including asylum centres and law firms, from far-right attacks.

Musk’s posts have intensified the situation, with officials struggling to have content deemed a threat to national security removed from X, formerly known as Twitter. The riots were triggered by the recent deaths of three children in Southport, which led to a surge in conspiracy theories and far-right activity on social media platforms, particularly Telegram. The messaging app has taken some action by removing a channel promoting violent protests, though it is unclear whether this was prompted by UK authorities.

UK law enforcement has been cracking down on those inciting violence online, with arrests already being made. One high-profile arrest involved the wife of a Northampton councillor, who had called in a post on X for hotels housing asylum seekers to be set on fire. Meanwhile, rioters have been using TikTok Live to broadcast their actions, providing police with evidence to prosecute; over 100 individuals have been charged, with some already facing court proceedings.

Critics argue that Musk’s influence is exacerbating the situation by amplifying extremist voices, including those who had been previously banned from social media. Courts Minister Heidi Alexander condemned Musk’s actions, calling them ‘irresponsible’ and ‘unconscionable.’ Meanwhile, Starmer has focused on the broader issue of online radicalisation, stressing the importance of legal consequences for those promoting violence.

YouTube faces uncertain future in Russia

As Russia tightens its grip on independent media, YouTube remains a vital platform for free expression, particularly for opposition voices. However, this may not last much longer. Recent mass outages reported by Russian internet monitoring services signal a possible shift, with lawmakers blaming Google’s outdated infrastructure for the slowdowns, a claim Google disputes.

The video platform, which has served as a key outlet for dissenting opinions, faces potential blocking in Russia. With independent media largely banned, YouTube has become a crucial source of opposition content, such as the widely viewed video by the late Alexei Navalny accusing President Vladimir Putin of corruption.

Experts warn that banning YouTube could severely impact online freedom and disrupt Russia’s internet connectivity. The widespread use of VPNs to bypass restrictions could also strain the country’s internet infrastructure, further complicating the situation.

Why does it matter?

The Russian government has historically throttled internet traffic to silence dissent, but it now relies on a more sophisticated censorship system. Despite the growing pressure, YouTube remains accessible, likely due to fears of public backlash and the potential strain on Russia’s networks.

As Moscow encourages users to switch to domestic platforms like VK Video, the future of YouTube in Russia hangs in the balance. While some non-political content creators may migrate, opposition channels could struggle to maintain their reach if forced off YouTube.

EU scrutiny of X could expand due to UK riots

The European Commission’s ongoing investigation into social media platform X, owned by Elon Musk, could factor in the company’s handling of harmful content during the recent UK riots.

Charges against X were issued last month under the Digital Services Act (DSA), which mandates stricter controls on illegal content and public security risks for large online platforms.

Although the UK is no longer part of the EU, content shared in Britain that violates DSA rules might still reach European users, potentially breaching the law. Recent events in Britain, where far-right and anti-Muslim groups exploited the fatal stabbing of three young girls to spread disinformation and incite violence, have raised concerns.

The European Commission acknowledged that while the DSA does not cover actions outside the EU, content visible in Europe from the UK could influence their proceedings against X. The company has yet to respond to these developments.

Maduro blocks X in Venezuela amid election dispute

President Nicolás Maduro has imposed a 10-day block on access to the social media platform X (formerly Twitter) in Venezuela, accusing its owner, Elon Musk, of using the platform to promote hatred following the country’s disputed presidential election. Reports from Caracas indicated that by Thursday night, posts on X were no longer loading on several major telephone services, including the state-owned Movilnet.

Maduro, in a speech after a pro-government march, claimed that Musk violated the platform’s own rules and incited hatred. He also accused X of being used by his political opponents to create unrest in Venezuela. As part of his response, Maduro signed a resolution from the National Telecommunications Commission (Conatel) to remove X from circulation in the country for ten days. However, he did not elaborate on the process involved.

The tension between Maduro and Musk escalated after the disputed 28 July presidential election, where Venezuelan electoral authorities declared Maduro the winner. However, opposition candidate Edmundo González claimed victory, citing records from 80% of the electronic voting machines. Musk criticised Maduro on X, calling him a dictator and accusing him of electoral fraud. Since the election, Maduro has expressed a desire to regulate social media in Venezuela, alleging that platforms like X are being used to threaten his supporters and create anxiety across the country.

AI tool Silvia improves Spanglish transcription

A new AI assistant is addressing a common frustration for bilingual speakers by accurately transcribing ‘Spanglish,’ a blend of Spanish and English that often confounds other language processing tools. Developed by Mansidak Singh, a product engineer at reAI, Silvia allows users to fluidly switch between languages in a single sentence without losing any context. Singh was inspired to create the app after a conversation highlighted the limitations of existing language assistants, which typically ignore or misinterpret mixed-language input.

Silvia integrates with the device’s keyboard and supports both Spanish and English, with plans to expand soon to other languages such as French, German, and Dutch. Singh used iOS 18’s new Translation API and OpenAI’s Whisper technology to build a solution that is not only effective but also fast and secure, with no data storage involved. The app is designed for seamless use in everyday conversations, making it easier for bilingual users to communicate without constantly switching settings or keyboards.
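For illustration, here is a minimal sketch of how code-switched transcription can be approached with OpenAI’s open-source Whisper models in Python. It shows the general technique only and is not Silvia’s actual, on-device implementation; the model choice and file name are hypothetical:

```python
# Minimal sketch: transcribing code-switched ('Spanglish') audio with
# OpenAI's open-source Whisper package (pip install openai-whisper).
# Illustrative only; Silvia's on-device implementation is not public.
import whisper

# A multilingual checkpoint is required; the English-only variants
# (e.g. 'base.en') cannot represent Spanish tokens.
model = whisper.load_model("base")

# Leaving `language` unset lets Whisper detect the dominant language
# from the audio. Because multilingual checkpoints share one token
# vocabulary across languages, words from either language can still
# surface in the transcript when the speaker switches mid-sentence.
result = model.transcribe("voice_note.m4a")  # hypothetical file name

# Print each transcribed segment with its start timestamp.
for segment in result["segments"]:
    print(f"[{segment['start']:6.2f}s] {segment['text'].strip()}")
```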

While the current version of Silvia is limited to languages that use the Roman alphabet, Singh’s approach reflects a practical and thoughtful application of AI to solve real-world problems. The app, which has been approved by Apple, will be available for download at the end of the month, offering a more accurate and user-friendly experience for those who speak in a mix of languages.

In an era where AI is often associated with grand promises, Silvia stands out for its simplicity and focus on improving everyday communication. As multilingual AI tools like Silvia and Nigeria’s new multilingual large language model continue to emerge, the future of AI in language processing looks increasingly inclusive and adaptable to the needs of diverse users.

Ireland takes legal action against X over data privacy

The Irish Data Protection Commission (DPC) has launched legal action against the social media platform X, formerly Twitter, in a case that centres on the processing of user data to train the large language model Grok. The chatbot was developed by xAI, a company founded by Elon Musk, and serves as a search assistant for premium users on the platform.

The DPC is seeking a court order to stop or limit the processing of user data by X for training its AI systems, expressing concerns that this could violate the European Union’s General Data Protection Regulation (GDPR). The case may be referred to the European Data Protection Board for further review.

The legal dispute is part of a broader conflict between Big Tech companies and regulators over using personal data to develop AI technologies. Consumer organisations have accused X of breaching GDPR, a claim the company has vehemently denied, calling the DPC’s actions unwarranted and overly broad.

The Irish DPC plays an important role in overseeing X’s compliance with EU data protection laws, since the platform’s EU operations are managed from Dublin. The current legal proceedings could significantly shift how Ireland enforces GDPR against large tech firms.

The DPC is also concerned about X’s plans to launch a new version of Grok, which is reportedly being trained using data from the EU and European Economic Area users. The privacy watchdog argues that this could worsen existing issues with data processing.

Despite X implementing some mitigation measures, such as offering users an opt-out option, these steps were not in place when the data processing began, leading to further scrutiny from the DPC. X has resisted the DPC’s requests to halt data processing or delay the release of the new Grok version, leading to an ongoing court battle.

The outcome of this case could set a precedent for how AI and data protection issues are handled across Europe.

YouTube faces widespread outage in Russia

YouTube experienced a mass outage in Russia on Thursday, with users reporting that the platform was inaccessible without using virtual private networks (VPNs). The outage comes amid increasing criticism from Russian authorities, who have been targeting the platform for its role in hosting content from Kremlin opponents, which has been largely removed from other social media sites within Russia. Reuters journalists in Russia confirmed the issue, with access only available through some mobile devices.

Russian internet monitoring services, including Sboi.rf, reported thousands of glitches affecting YouTube. Despite these issues, neither Google, the parent company of YouTube, nor Russia’s state communications watchdog Roskomnadzor, provided immediate comments on the situation.

In recent weeks, YouTube’s download speeds in Russia have noticeably slowed, a development Russian lawmakers blame on Google’s alleged failure to invest in local infrastructure. Alexander Khinshtein, head of a parliamentary committee on information policy, warned that YouTube speeds could drop by as much as 70%, calling the slowdown a necessary measure to pressure the platform into complying with Russian legislation. YouTube, however, rejected these claims, maintaining that the issues were not due to any technical actions on its part.