Google has rolled out Imagen 3, its advanced text-to-image generation model, directly within Google Docs. The tool allows users to create realistic or stylised images by simply typing prompts. Workspace customers with specific Gemini add-ons will be the first to access the feature, which is gradually being made available. The addition aims to help users enhance communication by generating customised images without tedious searches.
Imagen 3 initially faced setbacks due to historical inaccuracies in generated images, causing Google to delay its release. Following improvements, the feature launched quietly earlier this year and is now integrated into the Gemini platform. The company emphasises the tool’s ability to streamline creativity and simplify the visual content creation process.
Google has also introduced its Gemini app for iPhone users, following its February release on Android. The app boasts advanced features like Gemini Live in multiple languages and seamless integration of popular Google services such as Gmail, Calendar, and YouTube. Users can also access the powerful Imagen 3 tool within the app.
The Gemini app is designed as an AI-powered personal assistant, bringing innovation and convenience to mobile users globally. Google’s Brian Marquardt highlights the app’s capability to transform everyday tasks, offering users an intuitive and versatile digital companion.
Türkiye's Personal Data Protection Board (KVKK) has fined Amazon's gaming platform Twitch 2 million lira ($58,000) following a significant data breach, the Anadolu Agency reported. The breach, involving a leak of 125 GB of data, affected 35,274 individuals in Türkiye.
KVKK’s investigation revealed that Twitch failed to implement adequate security measures before the breach and conducted insufficient risk and threat assessments. The platform only addressed vulnerabilities after the incident occurred. As a result, KVKK imposed a 1.75 million lira fine for inadequate security protocols and an additional 250,000 lira for failing to report the breach promptly.
This penalty underscores the increasing scrutiny and regulatory actions against companies handling personal data in Türkiye, highlighting the importance of robust cybersecurity measures to protect user information.
Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.
The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.
Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.
Google has started rolling out its AI-powered Scam Detection feature for the Pixel Phone app, initially available only in the beta version for US users. First announced during Google I/O 2024, the feature uses onboard AI to help users identify potential scam calls. Currently, the update is accessible to Pixel 6 and newer models, with plans to expand to other Android devices in the future.
Scam Detection analyses the audio from incoming calls directly on the device, issuing alerts if suspicious activity is detected. For example, if a caller claims to be from a bank and pressures the recipient to transfer funds urgently, the app provides visual and audio warnings. The processing occurs locally on the phone, using the Gemini Nano model on the Pixel 9 or similar on-device machine learning models on earlier Pixel versions, ensuring no data is sent to the cloud.
This feature is part of Google’s ongoing efforts to tackle digital fraud, as the rise in generative AI has made scam calls more sophisticated. It joins the suite of security tools on the Pixel Phone app, including Call Screen, which uses a bot to screen calls before involving the user. Google’s localised approach aims to keep users’ information secure while enhancing their safety.
Currently, Scam Detection requires manual activation through the app’s settings, as it isn’t enabled by default. Google is seeking feedback from early adopters to refine the feature further before a wider release to other Android devices.
The UK government is considering fines of up to £10,000 for social media executives who fail to remove illegal knife advertisements from their platforms. This proposal is part of Labour’s effort to halve knife crime in the next decade by addressing the ‘unacceptable use’ of online spaces to market illegal weapons and promote violence.
Under the plans, police would have the power to issue warnings to online companies and require the removal of specific content, with further penalties imposed on senior officials if action is not taken swiftly. The government also aims to tighten laws around the sale of ninja swords, following the tragic case of 16-year-old Ronan Kanda, who was killed with a weapon bought online.
Home Secretary Yvette Cooper stated that these new sanctions are part of a broader mission to reduce knife crime, which has devastated many communities. The proposals, backed by a coalition including actor Idris Elba, aim to ensure that online marketplaces take greater responsibility in preventing the sale of dangerous weapons.
Australian Prime Minister Anthony Albanese announced a groundbreaking proposal on Thursday to implement a social media ban for children under 16. The proposed legislation would require social media platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms that fail to comply would face substantial fines, while users or their parents would not face penalties for violating the law. Albanese emphasised that this initiative aims to protect children from the harmful effects of social media, stressing that parents and families could count on the government’s support.
The bill would not allow exemptions for children whose parents consent to their use of social media, and it would not 'grandfather' existing users who are underage. Social media platforms such as Instagram, TikTok, Facebook, X, and YouTube would be directly affected by the legislation. Communications Minister Michelle Rowland said these platforms had been consulted on how the law could be practically enforced, but no exemptions would be granted.
While some experts have voiced concerns about the blanket nature of the proposed ban, suggesting that it might not be the most effective solution, social media companies, including Meta (the parent company of Facebook and Instagram), have expressed support for age verification and parental consent tools. Last month, over 140 international experts signed an open letter urging the government to reconsider the approach. This debate echoes similar discussions in the US, where there have been efforts to restrict children’s access to social media for mental health reasons.
The Australian government has announced plans to introduce a ban on social media access for children under 16, with legislation expected to pass by late next year. Prime Minister Anthony Albanese described the move as part of a world-leading initiative to combat the harms social media inflicts on children, particularly the negative impact on their mental and physical health. He highlighted concerns over the influence of harmful body image content for girls and misogynistic material directed at boys.
Australia is also testing age-verification systems, such as biometrics and government ID, to ensure that children cannot access social media platforms. The new legislation will not allow exemptions, including for children with parental consent or those with pre-existing accounts. Social media platforms will be held responsible for preventing access to minors, rather than placing the burden on parents or children.
The proposed ban includes major platforms such as Meta’s Instagram and Facebook, TikTok, YouTube, and X (formerly Twitter). While some digital industry representatives, like the Digital Industry Group, have criticised the plan, arguing it could push young people toward unregulated parts of the internet, Australian officials stand by the measure, emphasising the need for strong protections against online harm.
This move positions Australia as a leader in regulating children's access to social media, with no other country implementing such stringent age-verification methods. The new rules will be introduced into parliament this year and are set to take effect 12 months after the legislation is passed.
Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds. Filed at the Créteil judicial court, this grouped case seeks to hold TikTok accountable for what the families describe as dangerous content promoting self-harm, eating disorders, and suicide.
The families’ lawyer, Laure Boutron-Marmion, argues that TikTok, as a company offering its services to minors, must address its platform’s risks and shortcomings. She emphasised the need for TikTok’s legal liability to be recognised, especially given that its algorithm is often blamed for pushing disturbing content. TikTok, like Meta’s Facebook and Instagram, faces multiple lawsuits worldwide accusing these platforms of targeting minors in ways that harm their mental health.
TikTok has previously stated it is committed to protecting young users’ mental well-being and has invested in safety measures, according to CEO Shou Zi Chew’s remarks to US lawmakers earlier this year.
The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics denouncing the platform’s moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described these chatbots as a ‘reprehensible’ failure of moderation.
Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai after claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. However, after being informed of the Russell and Ghey chatbots, it removed them from the platform, saying it strives to ensure user protection but acknowledges the challenges in regulating AI.
Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.
The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.
Clacton County High School in Essex, UK, has issued a warning to parents about a WhatsApp group called ‘Add Everyone,’ which reportedly exposes children to explicit and inappropriate material. In a Facebook post, the school advised parents to ensure their children avoid joining the group, urging them to block and report it if necessary. The warning comes amid rising concern about online safety for young people, though the school noted it had no reports of its students joining the group.
Parents have reacted strongly to the warning, with many sharing experiences of their children being added to groups containing inappropriate content. One parent described it as ‘absolutely disgusting’ and ‘scary’ that young users could be added so easily, while others expressed relief that their children left the group immediately. A similar alert was issued by Clacton Coastal Academy, which posted on social media about explicit content circulating in WhatsApp groups, though it clarified that no students at their academy had reported it.
Essex Police in the UK are also investigating reports from the region about unsolicited and potentially illegal content being shared via WhatsApp. Police emphasised that, while WhatsApp can be useful for staying connected, it can also be a channel for unsolicited and abusive material. The police have encouraged parents and students to use online reporting tools to report harmful content and reminded parents to discuss online safety measures with their children.