X agrees to Brazil court orders amid social media ban

Elon Musk’s social media platform, X, is taking steps to comply with Brazil’s Supreme Court in an effort to lift its ban in the country. The platform was banned in Brazil in August for failing to moderate hate speech and to comply with court orders. The court had ordered the company to appoint a legal representative and block certain accounts deemed harmful to Brazil’s democracy. X’s legal team has now agreed to follow these directives, appointing Rachel de Oliveira Villa Nova Conceicao as its representative and committing to block the required accounts.

Despite previous defiance and criticism of the court’s orders by Musk and his company, X has shifted its stance. The court gave X five days to submit proof of the appointment and two days to confirm that the necessary accounts had been blocked. Once all compliance is verified, the court will decide whether to extend or lift the ban on X in Brazil.

Additionally, X has agreed to pay fines exceeding $3 million and to begin blocking specific accounts involved in a hate speech investigation. This marks a reversal for the company, which had previously denounced the court orders as censorship. X briefly became accessible in Brazil last week after a network update bypassed the ban, though the court continues to enforce its block until all conditions are met.

X names legal representative to resume Brazil operations

Elon Musk’s social media platform, X, has moved to address legal requirements in Brazil by appointing a new legal representative, Rachel de Oliveira Villa Nova Conceicao. The step follows orders from Brazil’s Supreme Court, which had blocked the platform after it failed to comply with local regulations, including naming a legal representative after its office closure in mid-August. X’s decision to appoint Conceicao aims to satisfy Brazilian law, which requires foreign companies to establish local legal representation to operate in the country.

The platform faced a complete shutdown in Brazil when mobile and internet providers were ordered to block X in late August. The order came after months of disputes between Musk and Brazilian Supreme Court Justice Alexandre de Moraes, centring on X’s reluctance to remove content spreading hate speech and misinformation. Musk had criticised the court’s demands, calling them censorship, and the platform’s refusal to comply escalated tensions.

X’s legal team in Brazil announced that the company has begun complying with court orders to remove harmful content, a key demand from the country’s top court. The decision signals a shift in Musk’s approach to Brazil’s strict content regulations and could pave the way for the platform to resume full operations.

The legal battles between X and Brazil highlight the broader tension between free speech and government regulation as nations like Brazil take stronger stances on monitoring harmful content online. At the same time, platforms face the challenge of balancing compliance with local laws against their own global content policies.

US Senate scrutinises tech giants’ preparedness against foreign disinformation ahead of elections

On 18 September 2024, US Senate Intelligence Committee members questioned top tech executives from Google, Microsoft, and Meta about their plans to combat foreign disinformation ahead of the November elections. The executives, including Microsoft’s Brad Smith and Meta’s Nick Clegg, acknowledged the heightened risk during the 48 hours surrounding Election Day, which Smith described as a particularly vulnerable period. Senator Mark Warner echoed this concern, noting that the time immediately after the polls close could also be crucial, especially in a tight race.

During the hearing, lawmakers discussed recent tactics used by foreign actors, including fake news websites mimicking reputable US media outlets and fabricated recordings from elections in other countries. Senators pressed the tech companies for detailed data on how many people were exposed to such content and the extent of its promotion. While the companies have adopted measures like labelling and watermarking to address deepfakes and misinformation, they were urged to enhance their efforts to prevent the spread of harmful content during this sensitive period.

X briefly accessible in Brazil despite court-ordered ban

Social media platform X, owned by Elon Musk, unexpectedly became accessible to users in Brazil for a brief period, despite a Supreme Court order to block it. The restoration followed a heated standoff between Musk and Brazilian Justice Alexandre de Moraes that had led to X’s nationwide shutdown last month. Brazilians who regained access to the platform cheered, with some seeing it as Musk defying the ban.

X later clarified that the restoration of access was unintentional. The platform’s Global Affairs team explained that a switch in network providers caused the issue, allowing some users in Brazil to log back in after a rerouting of the infrastructure supporting Latin America. The temporary access was not deliberate, and the company expects the block to be reinstated soon.

According to the Brazilian Association of Internet and Telecommunications Providers (Abrint), the update routed users through third-party cloud services outside the country. This allowed them to bypass local restrictions without needing virtual private networks (VPNs).

Brazil’s telecom regulator, Anatel, is now working to enforce the original block more effectively. However, the situation remains complex, as blocking access to cloud services could inadvertently impact other critical sectors like government and financial services, posing additional challenges for regulators.

AI-powered fact-checking tech in development by NEC

Japanese technology corporation NEC (Nippon Electric Company) is developing an AI technology designed to analyse and verify the trustworthiness of online information. The project, launched under Japan’s Ministry of Internal Affairs and Communications, aims to help combat false and misleading content on the internet. The system will be tested by fact-checking organisations, including the Japan Fact-check Center and major media outlets, with the goal of making it widely available by 2025.

The AI uses large language models (LLMs) to assess different types of content, such as text, images, video, and audio, detecting whether they have been manipulated or are misleading. The system then evaluates the information’s reliability, looking for inconsistencies and verifying sources. Its reports allow for user-driven adjustments, such as removing unreliable information or adding new details, helping organisations streamline their verification processes.

As the project progresses, NEC hopes to refine its AI system to assist fact-checkers more effectively, ensuring that false information can be identified and addressed in real time. The technology could become a vital tool for media and fact-checking organizations, addressing the growing problem of misinformation online.

Telegram’s Pavel Durov faces criminal probe in France under LOPMI law

France has taken a bold legal step with a new law targeting tech executives whose platforms enable illegal activities. The pioneering legislation, enacted in January 2023, puts France at the forefront of efforts to curb cybercrime. The law allows criminal charges to be brought against tech leaders, such as Telegram CEO Pavel Durov, for complicity in crimes committed through their platforms. Durov is under formal investigation in France, facing potential charges that carry up to a 10-year prison sentence and a €500,000 fine. He denies that Telegram facilitates illegal transactions, stating that the platform complies with EU regulations.

The so-called LOPMI (Loi d’Orientation et de Programmation du Ministère de l’Intérieur) 2023-22 law, unique in its scope, is yet to be tested in court, making France the first country to target tech executives in this way directly. Legal experts point out that no similar laws exist in the US or elsewhere in the Western world.

While the US has prosecuted individuals such as Ross Ulbricht, founder of the Silk Road marketplace, those cases required proof of active involvement in criminal activity. The French law, by contrast, seeks to hold platform operators accountable for illegal actions facilitated through their sites, even when they were not directly involved.

Prosecutors in Paris, led by Laure Beccuau, have praised the law as a powerful tool in their fight against organised cybercrime, including child exploitation, credit card trafficking, and denial-of-service attacks. The recent high-profile arrest of Durov and the shutdown of other criminal platforms like Coco highlight France’s aggressive stance in combating online crime. The J3 cybercrime unit overseeing Durov’s case has been involved in other relevant investigations, including the notorious case of Dominique Pelicot, who used the anonymous chat forum Coco to orchestrate heinous crimes.

While the law gives French authorities unprecedented power, legal and academic experts caution that its untested nature could lead to challenges in court. Nonetheless, France’s new cybercrime law seriously escalates the global battle against online criminal activity.

TikTok faces legal battle over potential US ban

TikTok and its parent company ByteDance are locked in a high-stakes legal battle with the US government to prevent a looming ban on the app, used by 170 million Americans. The legal confrontation revolves around a US law that mandates ByteDance divest its US assets by 19 January or face a complete ban. Lawyers for TikTok argue that the law violates free speech and is an unprecedented move that contradicts America’s tradition of fostering an open internet. A federal appeals court in Washington recently heard arguments from both sides, with TikTok’s legal team pushing for an injunction to halt the law’s implementation.

The US government, represented by the Justice Department, contends that TikTok’s Chinese ownership poses a significant national security threat, citing the potential for China to access American user data or manipulate the flow of information. This concern is at the core of the new legislation passed by Congress earlier this year, highlighting the risks of having a popular social media platform under foreign control. The White House, while supportive of curbing Chinese influence, has stopped short of advocating for an outright ban.

ByteDance maintains that divesting TikTok is neither technologically nor commercially feasible, casting uncertainty over the app’s future as it faces potentially severe consequences amid a politically charged environment.

The case comes at a pivotal moment in the US political landscape, with both presidential candidates, Donald Trump and Kamala Harris, actively using TikTok to engage younger voters. The judges expressed concerns over the complexities involved, especially with monitoring the massive codebase that powers TikTok, making it difficult to assess risks in real time. As the legal wrangling continues, a ruling is expected by 6 December, and the case may eventually reach the US Supreme Court.

Brazil unfreezes Starlink and X accounts after fine payment

Brazil’s Supreme Court has lifted the freeze on the bank accounts of Starlink and X after 18.35 million reais ($3.3 million) in fines was transferred to the national treasury. The decision follows a legal standoff between Justice Alexandre de Moraes and billionaire Elon Musk, who owns X and 40% of Starlink’s parent company, SpaceX. The fines, initially imposed over noncompliance, have now been settled, prompting the unfreezing of the accounts.

The dispute began when Moraes ordered X, the social media platform formerly known as Twitter, to block certain accounts accused of spreading misinformation and hate speech, which he deemed a threat to Brazil’s democracy. Musk resisted these orders, labelling them as ‘censorship.’ In response, the court moved to freeze Starlink’s accounts, as X had failed to comply with the demands, including appointing a local legal representative as required by Brazilian law.

Despite the resolution of the fines, Moraes has not lifted his order to block access to X in Brazil, the platform’s sixth-largest market. The restriction is tied to the platform’s failure to meet other legal obligations, such as removing specific content and appointing a legal representative in the country.

As the legal tussle continues, Musk’s companies remain under scrutiny in Brazil, a key battleground in the global debate over the regulation of social media and the balance between free speech and public safety.

Judge blocks Utah’s social media law targeting minors

A federal judge has temporarily halted a new Utah law designed to protect minors’ mental health by regulating social media use. The law, set to go into effect on 1 October, would have required social media companies to verify users’ ages and impose restrictions on accounts used by minors. Chief US District Judge Robert Shelby granted a preliminary injunction, stating that the law likely violates the First Amendment rights of social media companies by overly restricting their free speech.

The lawsuit, filed by tech industry group NetChoice, argued that the law unfairly targets social media platforms while exempting other websites, creating content-based restrictions. NetChoice represents major tech firms, including Meta, YouTube, Snapchat, and X (formerly Twitter). The court found these arguments convincing, holding that the law likely failed to meet the heightened scrutiny required of laws regulating speech.

Utah officials expressed disappointment with the ruling but affirmed their commitment to protecting children from the harmful effects of social media. Attorney General Sean Reyes stated that his office is reviewing the decision and is considering further steps. Governor Spencer Cox signed the law in March, hoping to shield minors from the negative impact of social media. Still, the legal battle underscores the complexity of balancing free speech with safeguarding children online.

The ruling is part of a broader national debate, with courts blocking similar laws in states like California, Texas, and Arkansas. Chris Marchese, director of NetChoice’s litigation centre, hailed the decision as a victory, emphasising that the law is deeply flawed and should be permanently struck down. This ongoing legal struggle reveals the challenge of finding solutions to address growing concerns over the effects of social media on youth without infringing on constitutional rights.

Russia to invest $660 million in modernising internet censorship

Russia is ramping up its efforts to control the internet by allocating nearly 60 billion roubles ($660 million) over the next five years to upgrade its web censorship system, known as TSPU. The system, developed by state regulator Roskomnadzor, is designed to filter and block content deemed harmful or illegal by the government. The funding, part of a broader ‘Cybersecurity Infrastructure’ project, will be used to acquire new software and hardware and to expand the system’s capabilities.

The initiative is seen as part of Moscow’s broader crackdown on online freedoms, which has intensified since Russia’s invasion of Ukraine in 2022. The government has been targeting independent media and social media platforms, blocking websites, and cracking down on the use of virtual private networks (VPNs), which many Russians rely on to bypass government restrictions. Roskomnadzor has become increasingly effective at blocking access to these tools, with officials planning to further enhance the system’s efficiency.

The TSPU system was introduced under a 2019 law that requires internet service providers to install government-controlled equipment to monitor and manage web traffic. As of late 2022, over 6,000 TSPU devices had been deployed across Russian networks. The new funding will modernise this infrastructure and improve the system’s ability to detect and block VPN services, making it harder for Russians to access uncensored content.

Why does this matter?

While the Kremlin continues to position these measures as necessary for national security, critics see them as a blatant attack on free speech. Digital rights activists, including those from Roskomsvoboda, warn that while the new investment in censorship technology will tighten government control, it is unlikely to eliminate access to independent information. Developers of VPNs and other circumvention tools remain undeterred, stating that innovation and motivation are decisive in the ongoing struggle between censorship and free access.

Russia’s battle with VPNs and independent media is part of a broader campaign against what it calls Western information warfare. Despite the government’s efforts to clamp down, demand for alternative ways to access the internet remains high. Developers are working on more resilient tools, even as the state pours resources into strengthening its censorship apparatus. This tug-of-war between government control and free access to information seems set to continue, with both sides ramping up their efforts.