US FCC to implement new rules for robocalls and robotexts

The US Federal Communications Commission (FCC) has announced new rules to enhance consumer protections against unwanted robocalls and robotexts, which are increasingly becoming a nuisance for individuals across the nation. Set to take effect on 11 April 2025, these guidelines will allow consumers to revoke their consent for receiving such communications in ‘any reasonable way.’

Specifically, consumers may use automated opt-out mechanisms during calls, reply ‘stop’ to text messages, or use a designated website or phone number provided by the caller. Companies must process opt-out requests within 10 business days of receipt, and they may send a one-time confirmation text acknowledging the request, provided it contains no marketing content.
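The opt-out workflow described above can be sketched in a few lines. This is a hypothetical illustration, not an implementation prescribed by the FCC rules: the keyword list, function names, and confirmation wording are all assumptions, but the two constraints it encodes come from the rules themselves, namely a 10-business-day processing deadline and a confirmation text free of marketing content.

```python
from datetime import date, timedelta

# Hypothetical opt-out keywords; the rules allow revocation in
# 'any reasonable way', so a real system would accept more forms.
OPT_OUT_KEYWORDS = {"stop", "unsubscribe", "end", "quit", "cancel"}

def add_business_days(start: date, days: int) -> date:
    """Return the date that falls `days` business days (Mon-Fri) after `start`."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def handle_inbound_text(message: str, received: date):
    """If the message is an opt-out request, return the latest permitted
    processing date (10 business days) and a marketing-free confirmation;
    otherwise return None."""
    if message.strip().lower() in OPT_OUT_KEYWORDS:
        deadline = add_business_days(received, 10)
        confirmation = "You have been unsubscribed. No further messages will be sent."
        return deadline, confirmation
    return None
```

For example, a ‘STOP’ reply received on the rules’ effective date, 11 April 2025 (a Friday), would have to be honoured by 25 April 2025.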

These rules are particularly significant for the mortgage industry, which has faced criticism for practices like ‘trigger leads,’ where companies purchase consumer information for solicitation. With the Homebuyers Privacy Protection Act of 2024 incorporated into the National Defense Authorization Act, the FCC’s move reinforces consumer privacy and trust in the mortgage sector and encourages companies to adopt ethical marketing strategies.

Overall, these measures mark a significant step toward empowering consumers to manage their communication preferences. By holding companies accountable to the updated regulations, the FCC aims to address consumer concerns and foster a more transparent and trustworthy environment for electronic communications.

Australia to restrict teen social media use

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.

The Australian government aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have struggled as tech-savvy users bypass restrictions with virtual private networks (VPNs).

Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.

Meta faces lawsuits over teen mental health concerns

A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.

Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of the US Communications Decency Act, which grants online platforms legal protections, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.

The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.

Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as new Teen Accounts on Instagram. Google also rejected the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.

Thousands of users impacted by Facebook and Instagram outage

On Monday, Meta Platforms’ social media platforms Facebook and Instagram experienced a significant outage affecting thousands of users across the US. According to Downdetector, a website that tracks service interruptions, the outage peaked around 1:35 p.m. ET, with over 12,000 users reporting issues with Facebook and more than 5,000 for Instagram.

By 2:09 p.m. ET, the number of reported problems had decreased significantly to around 659 for Facebook and 450 for Instagram. Downdetector’s data is based on user-submitted reports, so the actual number of impacted users may differ.

Meta Platforms did not respond to requests for comment. Earlier this year, a similar issue disrupted services globally for more than two hours, affecting hundreds of thousands of users. That event saw 550,000 disruption reports for Facebook and around 92,000 for Instagram.

Data breach at Intesa Sanpaolo under investigation

Intesa Sanpaolo has confirmed that it notified Italy’s data protection authority of a data breach caused by one of its employees, explaining that the notification was made only after careful checks into the events surrounding the violation.

Despite media reports, Intesa has not yet received any formal communication from prosecutors. News agency ANSA previously reported that both the bank and its employee are being investigated following the data breach.

The breach, which reportedly affected thousands of customers, included the personal data of high-profile individuals such as Prime Minister Giorgia Meloni. The investigation has raised concerns about data security at one of Italy’s largest financial institutions.

As the situation develops, the bank faces increasing scrutiny over its handling of the breach, with both authorities and the public awaiting further details on the investigation.

India investigates WhatsApp’s privacy policy

WhatsApp is facing potential sanctions from India’s Competition Commission (CCI) over its controversial 2021 privacy policy update, which has raised significant privacy concerns. The CCI is reportedly preparing to take action against the messaging platform, owned by Meta, for allegedly breaching antitrust laws related to user data handling. The policy, which allows WhatsApp to share certain user data with Meta, has faced widespread criticism from regulators and users who view it as intrusive and unfair.

The CCI’s investigation suggests that WhatsApp’s data-sharing practices, particularly involving business transaction data, may give Meta an unfair competitive advantage, violating provisions against the abuse of dominance. A draft order has been prepared to penalise both WhatsApp and Meta, as the CCI’s director general has submitted findings indicating these violations.

In response, WhatsApp stated that the case is still under judicial review and defended its privacy policy by noting that users had the choice to accept the update without losing access to their accounts. If sanctions are imposed, this could represent a pivotal moment in India’s efforts to regulate major tech firms and establish precedents for the intersection of privacy and competition laws in the digital age.

Apple faces accusations over worker rights violations

The US National Labor Relations Board (NLRB) has accused Apple of violating workers’ rights by restricting the use of Slack and social media for discussions about working conditions. According to the NLRB complaint, Apple implemented policies that limited how employees could use workplace messaging and fired one worker for advocating for change. The complaint also claims Apple created the impression that workers were being monitored on social media.

This is the second complaint filed against Apple this month. The earlier case accused the company of forcing employees to sign illegal non-compete and confidentiality agreements. Apple has denied the accusations, stating it is committed to maintaining an inclusive work environment and respects employees’ rights to discuss issues like pay and working conditions.

The case stems from a 2021 complaint by former employee Janneke Parrish, who claims she was fired for leading workplace activism efforts. Parrish’s lawyer said Apple’s actions were unlawful and violated workers’ rights to protest discrimination. If a settlement isn’t reached, a hearing will be held in February 2024.

RBI highlights risks of AI in banking and private credit markets

The increasing use of AI and machine learning in financial services globally could lead to financial stability risks, according to the Governor of the Reserve Bank of India (RBI), Shaktikanta Das. Speaking at an event in New Delhi, Das cautioned that the reliance on a small number of technology providers could lead to concentration risks in the sector.

Disruptions or failures in these AI-driven systems could trigger cascading effects throughout the financial industry, amplifying systemic risks, Das warned. In India, financial institutions are already employing AI to improve customer experience, reduce operational costs, and enhance risk management through services like chatbots and personalised banking.

However, AI adoption comes with vulnerabilities, including increased exposure to cyber attacks and data breaches. Das also raised concerns about the ‘opacity’ of AI algorithms, which makes them difficult to audit and could lead to unpredictable market consequences.

Das further emphasised the risks posed by the rapid growth of private credit markets, which operate with limited regulation. He warned that these markets have not been tested under economic downturns, presenting potential challenges to financial stability.

Privacy concerns rise as UK plans digital currency pilot

The UK is set to launch a Central Bank Digital Currency (CBDC) pilot in 2025, but critics are sounding alarms over privacy concerns. While the Bank of England promises to modernise the financial system, experts, including Big Brother Watch, question whether enough has been done to protect citizens’ freedoms.

Susanna Copson, Legal and Policy Officer at Big Brother Watch, argues that the case for a CBDC remains unclear, especially given the risks to privacy and equality. She warns that a digital pound without anonymity could lead to government overreach, turning the currency into what she describes as a ‘digital spy coin.’

As awareness remains low, organisations like Big Brother Watch push for public participation in government consultations. They urge citizens to contact their MPs and engage in discussions to protect their freedoms in the face of this looming digital shift.

Hacker demands ransom from India’s largest health insurer after data leak

Star Health, India’s largest health insurer, has revealed it received a $68,000 ransom demand following a data breach that exposed customer details, including medical records. The hacker used Telegram chatbots and a website to leak sensitive information, causing significant reputational damage and a drop in the company’s stock value.

The hacker, who made the ransom demand in August, sent the request to Star Health’s managing director and CEO. While the company has launched an internal investigation, it also faces allegations that its chief security officer was involved in the data leak, although no evidence of wrongdoing has been found so far.

Star Health has taken legal action against both the hacker and Telegram, which has not permanently banned the accounts linked to the hacker. The company has sought help from Indian cybersecurity authorities to identify the individual behind the attack.

Telegram has not responded to requests for comment but previously removed the chatbots linked to the hack after Reuters brought them to its attention. The investigation continues as Star Health works to contain the damage from the breach.