Google accused of censorship against African Stream

Google is facing accusations of censorship after locking the pan-African media platform African Stream out of its Google Workspace account, resulting in the loss of two years’ worth of emails and files stored in the cloud. The organisation has claimed that this action is part of a broader crackdown by US tech companies on its content, which is dedicated to providing a voice for Africans and challenging negative stereotypes.

The controversy escalated following remarks from US Secretary of State Antony Blinken, who suggested that African Stream is influenced by Russian propaganda, labelling the outlet as ‘Kremlin propagandists.’ African Stream pointed out that, within the last two weeks, it had also been banned from other platforms, including YouTube, Facebook, TikTok, Instagram, and Threads, and criticised Google for not providing any credible reasons for the ban.

In response to the allegations, African Stream has denied any wrongdoing and questioned why major tech companies would bow to a single speech by a US official. The organisation emphasises its commitment to delivering African-centred content and amplifying African voices globally, raising significant concerns about the implications of censorship and the influence of political narratives on the policies of major tech firms.

Social platform X must pay fines before Brazil ban is lifted

Brazil’s Supreme Court has ruled that social platform X, formerly known as Twitter, must pay $5 million in pending fines before being allowed to resume operations in the country. The platform, owned by Elon Musk, was suspended in Brazil after failing to comply with court orders to block accounts spreading hate speech and to appoint a legal representative.

Judge Alexandre de Moraes said the fines, totalling 18.3 million reais ($3.4 million), remain unpaid, alongside an additional fine of 10 million reais ($1.8 million) imposed after X became briefly accessible to some users last week. The court can use frozen funds from X and Starlink accounts in Brazil, but Starlink must first withdraw its appeal against the fund freeze.

X has since complied with court orders, blocking the accounts as instructed and naming a legal representative in Brazil. A source close to the company suggested that while X is likely to pay the original fines, it may contest the extra penalty imposed after the platform ban.

The platform has been unavailable in Brazil since late August. Musk had initially criticised the court’s actions as censorship but began complying with the rulings last week.

Meta battles scam ads in Australia

Meta and Australian banks have worked together to remove 8,000 fraudulent ‘celeb bait’ advertisements from Facebook and Instagram. The scams, often using AI-generated images of celebrities, deceive users into investing in fake schemes. Australian banks have flagged 102 such cases since April.

The rise in these scams has led Australia to draft a new anti-scam law, which could impose fines of up to A$50 million on companies that fail to combat online fraud. Reports show that Australians lost a staggering A$2.7 billion to various scams in 2023.

Meta is currently facing legal challenges in Australia, including a lawsuit for allowing cryptocurrency ads featuring celebrities like Russell Crowe and Nicole Kidman. Despite these issues, Meta continues its efforts to fight fraudulent ads.

Meta, alongside Australian banks, believes that early signs within ads could help detect wider scam activity. The company is reviewing Australia’s draft legislation, signalling a continued focus on anti-scam measures in the future.

Spain’s new travel data law sparks concern

A new piece of legislation in Spain, scheduled to come into force on 1 October 2024, mandates that hoteliers, travel agencies, and private rental landlords collect and share sensitive information about travellers with the Ministry of the Interior. The law requires the collection of extensive personal details, including payment methods, financial transactions, credit card numbers, contract specifics, and personal contact information, affecting both domestic and international tourists.

The Spanish Confederation of Hotels and Tourist Accommodation (CEHAT), representing over 16,000 businesses and around 1.8 million accommodation options, has expressed strong opposition, citing concerns about data collection, storage, and privacy. CEHAT argues that the law is impractical, may increase errors due to manual processing, and could heighten operational costs for hospitality businesses. The industry’s primary concerns also include the privacy rights of travellers and the potential economic disadvantage compared to other EU markets.

However, the Ministry of the Interior defends the legislation as necessary for enhancing public safety and combating terrorism and organised crime, asserting that detailed traveller information will improve security efforts. Despite this, the tourism sector in Spain, already dealing with challenges such as anti-tourism protests, fears the law could further harm its economic contributions. Travel agencies have requested either exclusion from the law’s requirements or clear limits on its application to mitigate confusion and potential privacy infringements.

Why does it matter?

With the implementation date approaching, anxiety within the industry is growing due to the lack of clarity over data submission processes and the potential legal ramifications of non-compliance. As the debate continues, the industry is urgently calling on the government to provide clearer guidelines and reconsider certain aspects of the legislation.

Musk criticised as nearly a third of his posts on X spread false information

Elon Musk, the billionaire owner of the social media platform X (formerly Twitter), recently shared a debunked rumour about a bomb threat near a New York rally where former US President Donald J. Trump was scheduled to speak. Despite the inaccuracy, Musk amplified the rumour to his nearly 200 million followers. A New York Times analysis of the 171 posts Musk published on X over five days revealed that nearly a third contained misleading or false information.

Experts monitoring misinformation have long been concerned about the impact of Musk’s ownership of X on the spread of false information. Since buying the platform in 2022, Musk has elevated unfounded claims and embraced a more conservative political stance, including endorsing Trump’s presidential campaign in July. The analysis showed that Musk’s posts, often politically motivated, were seen more than 800 million times, underscoring his influential role as the platform’s most-followed account.

Musk’s misleading posts included claims that US Democrats wanted to make memes ‘illegal’ and the false assertion that they aimed to ‘open the border’ to gain votes from illegal immigrants.

Why does it matter?

Experts worry that Musk, since acquiring X in 2022, has increasingly used his influential position to spread misinformation, particularly in support of conservative politics, and to undermine credible sources. The significant reach and influence of Musk’s account highlight the dangers of high-profile figures spreading misinformation, raising concerns about public discourse and democratic processes.

California halts AI bill amid industry concerns

California Governor Gavin Newsom has vetoed a contentious AI safety bill, citing concerns that it might stifle innovation and drive companies out of the state. The bill, proposed by Senator Scott Wiener, aimed to impose strict regulations on AI systems, including safety testing and methods for deactivating advanced AI models. Newsom acknowledged the need for oversight but criticised the bill for applying uniform standards to all AI systems, regardless of their specific risk levels.

Despite the veto, Newsom emphasised his commitment to AI safety, directing state agencies to assess the risks of potential catastrophic events tied to AI use. He has also called on AI experts to help develop regulations that are science-based and focus on actual risks. With AI technology advancing rapidly, he plans to work on a more tailored approach with the legislature in the next session.

The AI bill faced mixed reactions from both the tech industry and lawmakers. While companies like Google, Microsoft, and Meta opposed the measure, Tesla’s Elon Musk supported it, arguing that stronger regulations are essential before AI becomes too powerful. The tech industry praised Newsom’s decision, stating that California’s tech economy thrives on competition and openness.

Newsom’s veto has raised questions about the future of AI regulation, both in California and across the US. With federal efforts to regulate AI still stalled, the debate over how best to balance innovation and safety continues.

Google blocks new Russian accounts and faces more pressure over restrictions

Google has restricted the creation of new accounts for Russian users, according to Russia’s digital ministry. The move follows mounting pressure on the tech giant over its failure to remove content deemed illegal by Moscow and for blocking Russian media channels on YouTube following the invasion of Ukraine. Telecom operators have also reported a sharp decline in the number of SMS messages sent by Google to Russian users.

The digital ministry warned there is no guarantee that two-factor authentication SMS confirmations will continue functioning for Google services. It advised users to back up their data and consider alternative authentication methods or domestic platforms. Google had already deactivated AdSense accounts in Russia in August and halted serving ads in the country in March 2022.

Google has blocked over 1,000 YouTube channels linked to state-sponsored Russian media, as well as more than 5.5 million videos. Slower speeds on YouTube in Russia have been recorded recently, with Russian lawmakers blaming the issue on Google’s equipment, a claim the company disputes.

Advanced human trainers in demand for AI

AI models, including ChatGPT and Cohere, once depended on low-cost workers to perform basic fact-checking. Today, these models require human trainers with specialised knowledge in fields like medicine, finance, and quantum physics. Invisible Tech, one of the leading companies in this space, partners with major AI firms to reduce errors in AI-generated outputs, such as hallucinations, where the model provides inaccurate information.

Invisible Tech employs thousands of remote experts, offering significant pay for high-level expertise. Advanced knowledge in subjects like quantum physics can command rates as high as $200 per hour. Companies like Cohere and Microsoft are also leveraging these trainers to improve their AI systems.

This shift from basic fact-checking to advanced training is vital as AI models like ChatGPT continue to face challenges in distinguishing between fact and fiction. The demand for human trainers has surged, with many AI firms competing to reduce errors and improve their models.

With this growth, companies such as Scale AI and Invisible Tech have established themselves as key players in the industry. As AI continues to evolve, more businesses are emerging, catering to the increasing need for human expertise in AI training.

FCC fines consultant $7.7m for fake Biden robocalls

A political consultant has been fined $7.7 million by the Federal Communications Commission (FCC) for using AI to generate robocalls mimicking President Biden’s voice. The calls, aimed at New Hampshire voters, urged them not to vote in the Democratic primary, sparking significant controversy.

Steven Kramer, the consultant behind the scheme, worked for a challenger to Biden in the primaries. He admitted to paying $500 for the calls to highlight the dangers of AI in political campaigns. Kramer’s actions violated FCC regulations prohibiting misleading caller ID information.

The FCC has given Kramer 30 days to pay the fine, warning that further legal action will follow if he fails to comply. The commission continues to raise concerns over AI’s potential misuse in elections, pushing for stricter regulations to prevent fraud.

Cyberattack disrupts Wi-Fi at major UK railway stations

British police announced on Thursday that they are investigating a cyberattack that displayed an Islamophobic message on Wi-Fi services at major railway stations. Passengers trying to connect to the Wi-Fi encountered a message referencing terror attacks, leading to the immediate shutdown of the system managed by communications group Telent. The British Transport Police reported that they received notifications about the incident at around 5:03 p.m. on 25 September.

The incident occurred amid heightened tensions in Britain, where anti-Muslim riots erupted over the summer following the tragic killing of three young girls. Misinformation initially blamed the attack on an Islamist migrant, further inflaming community tensions. In response, the police are working closely with Network Rail to investigate the cyberattack promptly.

Following the incident, which impacted 19 stations including London Bridge, London Euston, Manchester Piccadilly, and Edinburgh Waverley, Network Rail confirmed that the Wi-Fi service remained offline. Telent stated that no personal data was compromised in the hack, explaining that an unauthorised change was made to the Network Rail landing page using a legitimate administrator account. As a precaution, Telent temporarily suspended all Global Reach services to verify that other customers were not affected. Network Rail expects the Wi-Fi service to be restored over the weekend after conducting final security checks.