Meta proposes EU standards for teen safety online

Meta has proposed a unified system for age verification and safety standards across the EU to better protect teenagers online. The plan would require parental approval for app downloads by users under 16, with app stores notifying parents so they can grant or deny consent. Meta also advocates consistent age-appropriate content guidelines and parent-managed supervision tools for teens.

The proposal follows calls from incoming EU technology commissioner Henna Virkkunen, who emphasised protecting minors as a priority. Meta’s global head of safety, Antigone Davis, highlighted the fragmented nature of current European regulations, urging the adoption of uniform rules to ensure better protections for teens.

Although some EU frameworks like the Digital Services Act and Audiovisual Media Services Directive touch on youth safety, the lack of EU-wide standards leaves much to member states. Meta’s proposal aligns with ongoing discussions around the Child Sexual Abuse Material regulation, which aims to enhance online protections for minors.

Australia pushes age limits on social platforms

Australia plans to enforce a ban on social media use for anyone under 16, requiring platforms to verify user ages through methods such as biometrics or government IDs. Prime Minister Anthony Albanese emphasised strict privacy protections, mandating the destruction of personal data once age verification is complete.

The proposed laws, among the toughest globally, would impact platforms like Instagram, TikTok, X, and Snapchat. They include no exemptions for parental consent or existing accounts, with non-compliance penalties of up to $32 million. Critics, including Elon Musk, argue the measures could restrict internet access for Australians.

The government aims to fast-track the legislation and pass it by Thursday, marking a significant step in global efforts to regulate social media and protect minors.

Elon Musk criticises Australia’s plan to ban social media for kids

Elon Musk has spoken out against Australia’s proposed law to ban social media use for children under 16, calling it a “backdoor way to control access to the Internet by all Australians.” The legislation, introduced by Australia’s centre-left government, includes fines of up to A$49.5 million ($32 million) for systemic breaches by platforms and aims to enforce an age-verification system.

Australia’s plan is among the world’s strictest, banning underage access without exceptions for parental consent or existing accounts. By contrast, countries like France and the US allow limited access for minors with parental approval or data protections for children. Critics argue Australia’s proposal could set a precedent for tougher global controls.

Musk, who has previously clashed with Prime Minister Anthony Albanese’s government, is a vocal advocate for free speech. His platform, X, has faced tensions with Australia, including a legal challenge to content regulation orders earlier this year. Albanese has called Musk an “arrogant billionaire,” underscoring their rocky relationship.

Snap challenges New Mexico lawsuit alleging child exploitation risks

Snap Inc., the parent company of Snapchat, has filed a motion to dismiss a New Mexico lawsuit accusing it of enabling child sexual exploitation on its platform. The lawsuit, brought by Attorney General Raul Torrez in September, claims Snapchat exposed minors to abuse and failed to warn parents about sextortion risks. Snap refuted the allegations, calling them “patently false,” and argued that the state’s decoy investigation misrepresented key facts.

The lawsuit stems from a broader push by US lawmakers to hold tech firms accountable for harm to minors. Investigators claimed a decoy account for a 14-year-old girl received explicit friend suggestions despite no user activity. Snap countered that the account actively sent friend requests, disputing the state’s findings.

Snap further argued that the lawsuit violates Section 230 of the 1996 Communications Decency Act, which shields platforms from liability for user-generated content. It also invoked the First Amendment, stating the company cannot be forced to provide warnings about subjective risks without clear guidelines.

Defending its safety efforts, Snap highlighted its increased investment in trust and safety teams and collaboration with law enforcement. The company said it remains committed to protecting users while contesting what it views as an unjustified legal challenge.

Australia introduces groundbreaking bill to ban social media for children under 16

Australia’s government introduced a bill to parliament aiming to ban social media use for children under 16, with potential fines of up to A$49.5 million ($32 million) for platforms that fail to comply. The law would enforce age verification, possibly using biometrics or government IDs, setting the highest global age limit for social media use without exemptions for parental consent or existing accounts.

Prime Minister Anthony Albanese described the reforms as a response to the physical and mental health risks social media poses, particularly for young users. Harmful content, such as body image material targeting girls and misogynistic content aimed at boys, has fuelled the government’s push for strict measures. Messaging services, gaming, and educational platforms like Google Classroom and Headspace would remain accessible under the proposal.

While opposition parties support the bill, independents and the Greens are calling for more details. Communications Minister Michelle Rowland emphasised that the law places responsibility on platforms, not parents or children, to implement robust age-verification systems. Privacy safeguards, including mandatory destruction of collected data, are also part of the proposed legislation. Australia’s policy would be among the world’s strictest, surpassing similar efforts in France and the US.

Tighter messaging controls for under-13 players on Roblox

Roblox has announced new measures to protect users under 13, permanently removing their ability to send messages outside of games. In-game messaging will remain available, but only with parental consent. Parents can now remotely manage accounts, oversee friend lists, set spending controls, and enforce screen time limits.

The gaming platform, which boasts 89 million users, has faced scrutiny over claims of child abuse on its service. In August, Turkish authorities blocked Roblox, citing concerns over user-generated content. A lawsuit filed in 2022 accused the company of facilitating exploitation, including sexual and financial abuse of a young girl in California.

New rules also limit communication for younger players, allowing under-13 users to receive public broadcast messages only within specific games. Roblox will implement updated content descriptors such as “Minimal” and “Restricted” to classify games, restricting access for users under nine to appropriate experiences.

Access to restricted content will now require users to be at least 17 years old and verify their age. These changes aim to enhance child safety amid growing concerns and highlight Roblox’s efforts to address ongoing challenges in its community.
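The age-gating rules above can be illustrated with a minimal sketch. This is an assumption-laden illustration, not Roblox’s actual implementation: the descriptor names and the 17+ verification rule come from the article, but the function, the under-nine fallback tier, and any other descriptors are hypothetical.

```python
# Illustrative sketch of age-based content gating, loosely modelled on the
# rules described above. Only "Minimal", "Restricted", and the 17+ verified-age
# requirement come from the article; everything else is an assumption.

def can_access(age: int, descriptor: str, age_verified: bool = False) -> bool:
    """Return True if a user of the given age may access content
    carrying the given maturity descriptor."""
    if descriptor == "Restricted":
        # Restricted content requires users to be at least 17 with a verified age.
        return age >= 17 and age_verified
    if descriptor == "Minimal":
        # Minimal content is open to all users, including those under nine.
        return True
    # Any other descriptor: assumed here to require users to be at least nine.
    return age >= 9

print(can_access(8, "Minimal"))            # True
print(can_access(8, "Moderate"))           # False
print(can_access(17, "Restricted"))        # False (age not verified)
print(can_access(17, "Restricted", True))  # True
```

The key design point the sketch captures is that age alone is not sufficient for the strictest tier; a verification step is a separate requirement.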

Google launches Imagen 3 and Gemini on iPhones

Google has rolled out Imagen 3, its advanced text-to-image generation model, directly within Google Docs. The tool allows users to create realistic or stylised images by simply typing prompts. Workspace customers with specific Gemini add-ons will be the first to access the feature, which is gradually being made available. The addition aims to help users enhance communication by generating customised images without tedious searches.

Imagen 3 initially faced setbacks due to historical inaccuracies in generated images, causing Google to delay its release. Following improvements, the feature launched quietly earlier this year and is now integrated into the Gemini platform. The company emphasises the tool’s ability to streamline creativity and simplify the visual content creation process.

Google has also introduced its Gemini app for iPhone users, following its February release on Android. The app boasts advanced features like Gemini Live in multiple languages and seamless integration of popular Google services such as Gmail, Calendar, and YouTube. Users can also access the powerful Imagen 3 tool within the app.

The Gemini app is designed as an AI-powered personal assistant, bringing innovation and convenience to mobile users globally. Google’s Brian Marquardt highlights the app’s capability to transform everyday tasks, offering users an intuitive and versatile digital companion.

Turkey sanctions Twitch for user data breach

Turkey’s Personal Data Protection Board (KVKK) has fined Amazon-owned streaming platform Twitch 2 million lira ($58,000) following a significant data breach, the Anadolu Agency reported. The breach, involving a leak of 125 GB of data, affected 35,274 individuals in Türkiye.

KVKK’s investigation revealed that Twitch failed to implement adequate security measures before the breach and conducted insufficient risk and threat assessments. The platform only addressed vulnerabilities after the incident occurred. As a result, KVKK imposed a 1.75 million lira fine for inadequate security protocols and an additional 250,000 lira for failing to report the breach promptly.

This penalty underscores the increasing scrutiny and regulatory actions against companies handling personal data in Türkiye, highlighting the importance of robust cybersecurity measures to protect user information.

FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.

Google launches AI scam detector for Pixel phones

Google has started rolling out its AI-powered Scam Detection feature for the Pixel Phone app, initially available only in the beta version for US users. First announced during Google I/O 2024, the feature uses onboard AI to help users identify potential scam calls. Currently, the update is accessible to Pixel 6 and newer models, with plans to expand to other Android devices in the future.

Scam Detection analyses the audio from incoming calls directly on the device, issuing alerts if suspicious activity is detected. For example, if a caller claims to be from a bank and pressures the recipient to transfer funds urgently, the app provides visual and audio warnings. The processing occurs locally on the phone, using the Gemini Nano model on the Pixel 9 or similar on-device machine learning models on earlier Pixel versions, ensuring no data is sent to the cloud.
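The on-device flow described above can be sketched in a few lines. To be clear about assumptions: the real feature runs an on-device machine learning model over call audio, whereas this hypothetical sketch substitutes a crude keyword heuristic over a transcript; the function names, phrase list, and threshold are all invented. What the sketch does illustrate is the design choice that matters, namely that scoring and alerting happen entirely locally.

```python
# Hypothetical sketch of on-device scam-call flagging. Google's feature uses
# an on-device ML model (e.g. Gemini Nano); this keyword heuristic merely
# illustrates the local-processing design: the transcript never leaves the device.

SCAM_PHRASES = ("transfer funds", "urgent", "verify your account", "gift card")

def score_transcript(transcript: str) -> float:
    """Return a naive scam likelihood in [0, 1] based on flagged phrases."""
    text = transcript.lower()
    hits = sum(phrase in text for phrase in SCAM_PHRASES)
    return min(1.0, hits / 2)  # two or more hits -> maximum score

def should_alert(transcript: str, threshold: float = 0.5) -> bool:
    # All processing happens locally; nothing is uploaded for analysis.
    return score_transcript(transcript) >= threshold

call = "This is your bank. It is urgent that you transfer funds now."
print(should_alert(call))  # True: two flagged phrases
```

Keeping the scoring on-device is what allows the feature to inspect sensitive call content without transmitting it, which is the privacy property Google emphasises.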

This feature is part of Google’s ongoing efforts to tackle digital fraud, as the rise in generative AI has made scam calls more sophisticated. It joins the suite of security tools on the Pixel Phone app, including Call Screen, which uses a bot to screen calls before involving the user. Google’s localised approach aims to keep users’ information secure while enhancing their safety.

Currently, Scam Detection requires manual activation through the app’s settings, as it isn’t enabled by default. Google is seeking feedback from early adopters to refine the feature further before a wider release to other Android devices.