Mozambique and Mauritius block social media amid political crises

Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.

Meanwhile, in Mauritius, the government imposed a similar blackout until after the 10 November election, following a wiretapping scandal involving leaked conversations of high-profile figures. Furthermore, digital rights groups such as Access Now and the #KeepItOn coalition have condemned these actions, arguing that they violate international human rights standards, including the African Commission on Human and Peoples’ Rights (ACHPR) resolution 580 and the International Covenant on Civil and Political Rights (ICCPR), as well as national constitutions.

In response, digital rights advocates are calling on telecommunications providers, including Emtel and Mauritius Telecom, to resist government orders to enforce the shutdowns. By maintaining internet connectivity, these companies could help preserve citizens’ access to information and uphold democratic principles in politically sensitive times.

Additionally, rights organisations argue that internet service providers have a unique role in supporting transparency and accountability, which are vital to democratic societies.

US federal agency investigates how Meta uses consumer financial data for targeted advertising

The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations. This federal investigation was revealed in a recent filing that Meta submitted to the Securities and Exchange Commission (SEC).

The filing indicates that the CFPB notified Meta on 18 September that it was evaluating whether the company’s actions violate the Consumer Financial Protection Act, which is designed to protect consumers from unfair and deceptive financial practices. The status of the investigation remains uncertain, with the filing noting that the CFPB could soon initiate a lawsuit seeking financial penalties and equitable relief.

Meta, the parent company of Instagram and Facebook, is facing increased scrutiny from regulators and state attorneys general regarding various concerns, including its privacy practices.

In the SEC filing, Meta disclosed that the CFPB has formally notified the company about an investigation focusing on the alleged receipt and use for advertising of financial information from third parties through specific advertising tools. The inquiry explicitly targets advertising related to ‘financial products and services’, although it remains to be seen whether the scrutiny pertains to Facebook, Instagram, or both platforms.

While a Meta spokesperson refrained from commenting on the matter, the company stated in the filing that it disputes the allegations and believes any enforcement action would be unjustified. The CFPB also opted not to provide additional comments.

Amid this scrutiny, Meta recently reported $41 billion in revenue for the third quarter, a 19 percent increase from the previous year. A significant portion of this revenue is generated from its targeted advertising business, which has faced criticism from the Federal Trade Commission (FTC) and European regulators for allegedly mishandling user data and violating privacy rights.

In 2019, Meta settled privacy allegations related to the Cambridge Analytica scandal by paying the FTC $5 billion after it was revealed that the company had improperly shared Facebook user data with the firm for voter profiling. Last year, the European Union fined Meta $1.3 billion for improperly transferring user data from Europe to the United States.

Kremlin seeks end to YouTube ban on Russian state media

The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, highlighting mounting legal claims against the tech giant as potential leverage. Google blocked more than a thousand Russian channels and over 5.5 million videos, including state-funded media, after it halted ad services in Russia following the country’s invasion of Ukraine in 2022.

Russia’s legal actions against Google, initiated by 17 Russian TV channels, have led to compound fines based on the company’s revenue in Russia, accumulating to a staggering figure reportedly in the ‘undecillions’, according to Russian media. Kremlin spokesperson Dmitry Peskov described this enormous number as symbolic but urged Google to take these legal pressures seriously and reconsider its restrictions.

Google has not responded to these demands. Russian officials argue that the restrictions infringe on the rights of the country’s broadcasters and hope the significant financial claims will compel Google to restore access to Russian media content on YouTube.

TikTok faces lawsuit in France after teen suicides linked to platform

Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds. Filed at the Créteil judicial court, the joint case seeks to hold TikTok accountable for what the families describe as dangerous content promoting self-harm, eating disorders, and suicide.

The families’ lawyer, Laure Boutron-Marmion, argues that TikTok, as a company offering its services to minors, must address its platform’s risks and shortcomings. She emphasised the need for TikTok’s legal liability to be recognised, especially given that its algorithm is often blamed for pushing disturbing content. TikTok, like Meta’s Facebook and Instagram, faces multiple lawsuits worldwide accusing these platforms of targeting minors in ways that harm their mental health.

TikTok has previously stated it is committed to protecting young users’ mental well-being and has invested in safety measures, according to CEO Shou Zi Chew’s remarks to US lawmakers earlier this year.

OpenAI adds search capabilities to ChatGPT

OpenAI has introduced new search functions to its popular ChatGPT, making it a direct competitor with Google, Microsoft’s Bing, and other emerging AI-driven search tools. Instead of launching a separate search engine, OpenAI chose to integrate search capabilities directly into ChatGPT, which will pull information from the web and relevant sources based on user questions.

Initially, ChatGPT’s search feature will be available to Plus and Team users, with plans to expand access to enterprise and educational users, as well as free users, in the coming months. OpenAI’s partnerships with major publishers like Condé Nast, Time, and the Financial Times aim to provide a rich pool of content for ChatGPT’s search.

This launch follows OpenAI’s selective testing of SearchGPT, an AI-based search prototype, earlier in the year. With its recent funding round boosting its valuation to an estimated $157 billion, OpenAI continues to strengthen its standing as a leading private AI company.

Indonesia bans Google and Apple smartphone sales

Indonesia has banned sales of Google’s Pixel smartphones due to regulations requiring a minimum of 40% locally manufactured components in devices sold within the country. This decision follows a similar ban on Apple’s iPhone 16 for failing to meet these content standards. According to Febri Hendri Antoni Arief, a spokesperson for Indonesia’s industry ministry, the rules aim to ensure fairness among investors by promoting local sourcing and partnerships.

Google stated that its Pixel phones are not officially distributed in Indonesia, though consumers can still import them independently if they pay applicable taxes. Officials are also considering measures to deactivate unauthorised imports to enforce compliance.

Although Google and Apple are not leading brands in Indonesia, the market holds significant potential for global tech firms due to its large, tech-savvy population. However, Bhima Yudhistira from the Centre of Economic and Law Studies warned that these restrictions may deter foreign investment, creating what he calls ‘pseudo protectionism’ that could dampen investor sentiment in the region.

EU moves to formalise disinformation code under DSA

The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA). According to Paul Gordon, assistant director at Ireland’s media regulator Coimisiún na Meán, efforts are underway to finalise the transition by January. He emphasised that the new regulations should lead to more meaningful engagement from platforms, moving beyond mere compliance.

Originally established in 2022 and signed by 44 companies, including Google, Meta, and TikTok, the code outlines commitments to combat online disinformation, such as increasing transparency in political advertising and enhancing cooperation during elections. A spokesperson for the European Commission confirmed that the code aims to be recognised as a ‘Code of Conduct’ under the DSA, which already mandates content moderation measures for online platforms.

The DSA, which has applied to all platforms since February, imposes strict rules on the largest online services, requiring them to mitigate risks associated with disinformation. The new code will help these platforms demonstrate compliance with the DSA’s obligations, as assessed by the Commission and the European Board of Digital Services. However, no specific timeline has been provided for the code’s formal implementation.

AI robocall threats loom over US election

Election officials across the US are intensifying efforts to counter deepfake robocalls as the 2024 election nears, worried about AI-driven disinformation campaigns. Unlike visible manipulated images or videos, fake audio calls targeting voters are harder to detect, leaving officials bracing for the impact on public trust. A recent incident in New Hampshire, where a robocall falsely claimed to be from President Biden urging people to skip voting, highlighted how disruptive these AI-generated calls can be.

Election leaders have developed low-tech methods to counter this high-tech threat, such as unique code words to verify identities in sensitive phone interactions. In states like Colorado, officials have been trained to respond quickly to suspicious calls, including hanging up and verifying information directly with their offices. Colorado’s Secretary of State Jena Griswold and other leaders are urging election directors to rely on trusted contacts to avoid being misled by convincing deepfake messages.

To counter misinformation, some states are also enlisting local leaders and community figures to help debunk false claims. Officials in states like Minnesota and Illinois have collaborated with media outlets and launched public awareness campaigns, warning voters about potential disinformation in the lead-up to the election. These campaigns, broadcast widely on television and radio, aim to preempt misinformation by providing accurate, timely information.

While no confirmed cases show that robocalls have swayed voters, election officials regard the potential impact as severe. Local efforts to counteract these messages, such as public statements and community outreach, serve as a reminder of the new and evolving risks AI technology brings to election security.

AI chatbots mimicking deceased teens spark outrage

The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics denouncing the platform’s moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described these chatbots as a ‘reprehensible’ failure of moderation.

Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai after claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. However, after being informed of the Russell and Ghey chatbots, it removed them from the platform, saying it strives to ensure user protection but acknowledges the challenges in regulating AI.

Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.

The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.

WhatsApp group exposes students to explicit content

Clacton County High School in Essex, UK, has issued a warning to parents about a WhatsApp group called ‘Add Everyone,’ which reportedly exposes children to explicit and inappropriate material. In a Facebook post, the school advised parents to ensure their children avoid joining the group, urging them to block and report it if necessary. The warning comes amid rising concern about online safety for young people, though the school noted it had no reports of its students joining the group.

Parents have reacted strongly to the warning, with many sharing experiences of their children being added to groups containing inappropriate content. One parent described it as ‘absolutely disgusting’ and ‘scary’ that young users could be added so easily, while others expressed relief that their children left the group immediately. A similar alert was issued by Clacton Coastal Academy, which posted on social media about explicit content circulating in WhatsApp groups, though it clarified that no students at the academy had reported it.

Essex Police are also investigating reports from the region about unsolicited and potentially illegal content being shared via WhatsApp. Police emphasised that, while WhatsApp can be useful for staying connected, it can also be a channel for unsolicited and abusive material. They have encouraged parents and students to use online reporting tools to flag harmful content and reminded parents to discuss online safety measures with their children.