Google has started rolling out its AI-powered Scam Detection feature for the Pixel Phone app, initially available only in the beta version for US users. First announced during Google I/O 2024, the feature uses onboard AI to help users identify potential scam calls. Currently, the update is accessible to Pixel 6 and newer models, with plans to expand to other Android devices in the future.
Scam Detection analyses the audio from incoming calls directly on the device, issuing alerts if suspicious activity is detected. For example, if a caller claims to be from a bank and pressures the recipient to transfer funds urgently, the app provides visual and audio warnings. The processing occurs locally on the phone, using the Gemini Nano model on the Pixel 9 series or similar on-device machine learning models on earlier Pixels, so no call audio is sent to the cloud.
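To make the on-device idea concrete, here is a minimal, purely illustrative sketch of flagging a locally transcribed call for scam-style pressure language. It is not Google's implementation (the real feature relies on on-device machine learning rather than keyword matching), and the phrase list, function names, and threshold are all hypothetical.

```python
# Conceptual sketch only, not Google's Scam Detection implementation.
# Assumes the call has already been transcribed on-device; the phrase list
# and threshold below are hypothetical placeholders.
SCAM_PATTERNS = [
    "transfer funds immediately",
    "your account has been compromised",
    "do not tell anyone",
    "pay with gift cards",
]

def scam_score(transcript: str) -> float:
    """Return the fraction of known pressure phrases found in the transcript."""
    text = transcript.lower()
    hits = sum(1 for phrase in SCAM_PATTERNS if phrase in text)
    return hits / len(SCAM_PATTERNS)

def should_alert(transcript: str, threshold: float = 0.25) -> bool:
    """Decide whether to surface a visual/audio warning to the user."""
    return scam_score(transcript) >= threshold

if __name__ == "__main__":
    call_text = ("This is your bank. Your account has been compromised, "
                 "transfer funds immediately.")
    print(should_alert(call_text))  # True
```

The point of keeping such logic entirely on the handset, as the article notes, is that nothing about the call needs to leave the device for the warning to be raised.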
This feature is part of Google’s ongoing efforts to tackle digital fraud, as the rise of generative AI has made scam calls more sophisticated. It joins the suite of security tools in the Pixel Phone app, including Call Screen, which uses a bot to screen calls before involving the user. Google’s on-device approach aims to keep users’ information private while enhancing their safety.
Currently, Scam Detection requires manual activation through the app’s settings, as it isn’t enabled by default. Google is seeking feedback from early adopters to refine the feature further before a wider release to other Android devices.
The UK government is considering fines of up to £10,000 for social media executives who fail to remove illegal knife advertisements from their platforms. This proposal is part of Labour’s effort to halve knife crime in the next decade by addressing the ‘unacceptable use’ of online spaces to market illegal weapons and promote violence.
Under the plans, police would have the power to issue warnings to online companies and require the removal of specific content, with further penalties imposed on senior officials if action is not taken swiftly. The government also aims to tighten laws around the sale of ninja swords, following the tragic case of 16-year-old Ronan Kanda, who was killed with a weapon bought online.
Home Secretary Yvette Cooper stated that these new sanctions are part of a broader mission to reduce knife crime, which has devastated many communities. The proposals, backed by a coalition including actor Idris Elba, aim to ensure that online marketplaces take greater responsibility in preventing the sale of dangerous weapons.
Australian Prime Minister Anthony Albanese announced a groundbreaking proposal on Thursday to implement a social media ban for children under 16. The proposed legislation would require social media platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms that fail to comply would face substantial fines, while users or their parents would not face penalties for violating the law. Albanese emphasised that this initiative aims to protect children from the harmful effects of social media, stressing that parents and families could count on the government’s support.
The bill would not allow exemptions for children whose parents consent to their use of social media, and it would not ‘grandfather’ existing users who are underage. Social media platforms such as Instagram, TikTok, Facebook, X, and YouTube would be directly affected by the legislation. Minister for Communications, Michelle Rowland, mentioned that these platforms had been consulted on how the law could be practically enforced, but no exemptions would be granted.
While some experts have voiced concerns about the blanket nature of the proposed ban, suggesting that it might not be the most effective solution, social media companies, including Meta (the parent company of Facebook and Instagram), have expressed support for age verification and parental consent tools. Last month, over 140 international experts signed an open letter urging the government to reconsider the approach. This debate echoes similar discussions in the US, where there have been efforts to restrict children’s access to social media for mental health reasons.
The Australian government has announced plans to introduce a ban on social media access for children under 16, with legislation expected to pass by late next year. Prime Minister Anthony Albanese described the move as part of a world-leading initiative to combat the harms social media inflicts on children, particularly the negative impact on their mental and physical health. He highlighted concerns over the influence of harmful body image content for girls and misogynistic material directed at boys.
Australia is also testing age-verification systems, such as biometrics and government ID, to ensure that children cannot access social media platforms. The new legislation will not allow exemptions, including for children with parental consent or those with pre-existing accounts. Social media platforms will be held responsible for preventing access to minors, rather than placing the burden on parents or children.
The proposed ban includes major platforms such as Meta’s Instagram and Facebook, TikTok, YouTube, and X (formerly Twitter). While some digital industry representatives, like the Digital Industry Group, have criticised the plan, arguing it could push young people toward unregulated parts of the internet, Australian officials stand by the measure, emphasising the need for strong protections against online harm.
This move positions Australia as a leader in regulating children’s access to social media, with no other country having implemented such stringent age-verification requirements. The new rules will be introduced into parliament this year and are set to take effect 12 months after the legislation is passed.
Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds. Filed at the Créteil judicial court, this grouped case seeks to hold TikTok accountable for what the families describe as dangerous content promoting self-harm, eating disorders, and suicide.
The families’ lawyer, Laure Boutron-Marmion, argues that TikTok, as a company offering its services to minors, must address its platform’s risks and shortcomings. She emphasised the need for TikTok’s legal liability to be recognised, especially given that its algorithm is often blamed for pushing disturbing content. TikTok, like Meta’s Facebook and Instagram, faces multiple lawsuits worldwide accusing these platforms of targeting minors in ways that harm their mental health.
TikTok has previously stated it is committed to protecting young users’ mental well-being and has invested in safety measures, according to CEO Shou Zi Chew’s remarks to US lawmakers earlier this year.
The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics denouncing the platform’s moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described these chatbots as a ‘reprehensible’ failure of moderation.
Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai after claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. However, after being informed of the Russell and Ghey chatbots, it removed them from the platform, saying it strives to ensure user protection but acknowledges the challenges in regulating AI.
Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.
The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.
Clacton County High School in Essex, UK, has issued a warning to parents about a WhatsApp group called ‘Add Everyone,’ which reportedly exposes children to explicit and inappropriate material. In a Facebook post, the school advised parents to ensure their children avoid joining the group, urging them to block and report it if necessary. The warning comes amid rising concern about online safety for young people, though the school noted it had no reports of its students joining the group.
Parents have reacted strongly to the warning, with many sharing experiences of their children being added to groups containing inappropriate content. One parent described it as ‘absolutely disgusting’ and ‘scary’ that young users could be added so easily, while others expressed relief that their children left the group immediately. A similar alert was issued by Clacton Coastal Academy, which posted on social media about explicit content circulating in WhatsApp groups, though it clarified that no students at their academy had reported it.
Essex Police are also investigating reports from the region about unsolicited and potentially illegal content being shared via WhatsApp. Police emphasised that, while WhatsApp can be useful for staying connected, it can also be a channel for unsolicited and abusive material. The force has encouraged parents and students to use online reporting tools to flag harmful content and reminded parents to discuss online safety measures with their children.
Sierra, a young AI software startup co-founded by former Salesforce co-CEO Bret Taylor, has secured $175 million in new funding led by Greenoaks Capital. This latest round gives the company a valuation of $4.5 billion, a significant jump from its earlier valuation of nearly $1 billion. Investors such as Thrive Capital, Iconiq, Sequoia, and Benchmark have also backed the firm.
Founded just a year ago, Sierra has already crossed $20 million in annualised revenue, focusing on selling AI-powered customer service chatbots to enterprises. It works with major clients, including WeightWatchers and Sirius XM. The company claims its technology reduces ‘hallucinations’ in large language models, ensuring reliable AI interactions for businesses.
The rising valuation reflects investor enthusiasm for AI applications that generate steady revenue, with attention shifting from expensive foundation models to enterprise solutions. Sierra operates in a competitive space, facing rivals such as Salesforce and Forethought, but aims to stand out through more dependable AI performance.
Bret Taylor, who also chairs OpenAI’s board, co-founded Sierra alongside former Google executive Clay Bavor. Taylor previously held leadership roles at Salesforce and oversaw Twitter’s board during its takeover by Elon Musk. Bavor, who joined Google in 2005, played key roles managing Gmail and Google Drive.
In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.
Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (roughly $6,494) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $103) per image.
Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a minor to engage in sexual acts, and distributing illegal images. The case is significant in highlighting the dark side of AI misuse and underscoring the growing need for regulation around technology-enabled abuse.
A Florida mother is suing the AI chatbot startup Character.AI, alleging it played a role in her 14-year-old son’s suicide by fostering an unhealthy attachment to a chatbot. Megan Garcia claims her son Sewell became ‘addicted’ to Character.AI and formed an emotional dependency on a chatbot, which allegedly represented itself as a psychotherapist and a romantic partner, contributing to his mental distress.
According to the lawsuit filed in Orlando, Florida, Sewell shared suicidal thoughts with the chatbot, which reportedly reintroduced these themes in later conversations. Garcia argues the platform’s realistic nature and hyper-personalised interactions led her son to isolate himself, suffer from low self-esteem, and ultimately feel unable to live outside of the world the chatbot created.
Character.AI offered condolences and noted it has since implemented additional safety features, such as prompts for users expressing self-harm thoughts, to improve protection for younger users. Garcia’s lawsuit also names Google, alleging it extensively contributed to Character.AI’s development, although Google denies involvement in the product’s creation.
The lawsuit is part of a wider trend of legal claims against tech companies by parents concerned about the impact of online services on teenage mental health. While Character.AI, with an estimated 20 million users, faces unique claims regarding its AI-powered chatbot, other platforms such as TikTok, Instagram, and Facebook are also under scrutiny.