Pinterest projected first-quarter revenue above market expectations, driven by AI-powered advertising tools that have boosted ad spending on the platform. Shares surged 19% in extended trading following the announcement. The platform also benefited from a strong holiday shopping season, setting fourth-quarter records for both monthly active users and revenue.
AI-driven ad solutions, including the Performance+ suite, have attracted advertisers by automating and improving targeting. Increased engagement from Gen Z users and the introduction of more shoppable content have also made the platform more appealing to marketers. Expanding partnerships with Google and Amazon further diversified revenue streams, although most ad revenue remains concentrated in North America.
E-commerce merchants using Shopify and Adobe Commerce can now integrate their products into Pinterest more easily. Analysts suggest that while global engagement is high, expanding third-party ad integrations will be crucial for long-term growth.
The company forecasts revenue between $837 million and $852 million, surpassing analyst expectations. Adjusted core earnings are expected to range from $155 million to $170 million, also exceeding estimates. Monthly active users reached a record 553 million, reflecting an 11% year-on-year increase.
Ofcom has ended its investigation into whether under-18s are accessing OnlyFans but will continue to examine whether the platform provided complete and accurate information during the inquiry. The media regulator stated that it would remain engaged with OnlyFans to ensure the platform implements appropriate measures to prevent children from accessing restricted content.
The investigation, launched in May, sought to determine whether OnlyFans was doing enough to protect minors from pornography. Ofcom said that while the case was closed without any findings, it reserves the right to reopen it if new evidence emerges.
OnlyFans maintains that its age assurance measures, which use facial age-estimation technology set to flag anyone who appears younger than 20, are sufficient to prevent underage access. A company spokesperson reaffirmed its commitment to compliance and child protection, emphasising that its policies have always met regulatory standards.
Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.
Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.
Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.
The United Kingdom is set to become the first country to criminalise the use of AI to create child sexual abuse images. New offences will target AI-generated explicit content, including tools that ‘nudeify’ real-life images of children. The move follows a sharp rise in AI-generated abuse material, with reports increasing nearly five-fold in 2024, according to the Internet Watch Foundation.
The government warns that predators are using AI to disguise their identities and blackmail children into further exploitation. New laws will criminalise the possession, creation, or distribution of AI tools designed for child abuse material, as well as so-called ‘paedophile manuals’ that provide instructions on using such technology. Websites hosting AI-generated child abuse content will also be targeted, and authorities will gain powers to unlock digital devices for inspection.
The measures will be included in the upcoming Crime and Policing Bill. Earlier this month, Britain also announced plans to outlaw AI-generated ‘deepfake’ pornography, making it illegal to create or share sexually explicit deepfakes. Officials say the new laws will help protect children from emerging online threats.
Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application’. That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe it undermines the goal of protecting children from harmful online content.
Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite being exempted from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Studies conducted by academics have shown that the platform delivers problematic content within minutes of search queries, including harmful videos on topics like sex, COVID-19, and European history.
To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.
South Korea’s privacy watchdog plans to investigate how DeepSeek manages users’ personal data. The Personal Information Protection Commission intends to send a written request for details to the Chinese AI model’s operators.
An official from South Korea’s privacy commission confirmed that the request for information could be submitted as early as Friday. No further details were provided on the scope of the inquiry.
AI-powered study rooms are revolutionising online education in China by offering personalised, tech-driven learning experiences. These spaces cater to students aged 8 to 18, using advanced software to provide interactive lessons and real-time feedback. The AI systems analyse mistakes, adjust course materials, and generate detailed progress reports for parents, who can track their child’s improvement remotely. By leveraging technology, these study rooms aim to make education more engaging and tailored to individual learning needs.
These AI rooms are marketed as self-study spaces rather than traditional tutoring centres, framing their services as facility rentals or membership plans. This positioning lets them operate in a regulatory grey area, sidestepping China’s strict restrictions on off-campus tutoring for students in grades one through nine. Membership fees range from 1,000 to 3,000 yuan per month, making them a more affordable long-term alternative to expensive one-on-one tutoring sessions.
Despite their growing popularity, education experts remain sceptical of their value. Critics argue that many of these systems lack genuine AI functionality, relying instead on preloaded prompts and automated responses. There are also concerns that a heavy emphasis on drilling questions to raise test scores may come at the expense of critical thinking and deeper comprehension. Proponents, however, see these AI-powered study rooms as an essential step toward integrating technology into education and expanding access to personalised learning.
Lina Khan, a prominent advocate of strong antitrust enforcement, has announced her resignation as chair of the US Federal Trade Commission (FTC) in a memo to staff. Her departure, set to occur in the coming weeks, marks the end of a tenure that challenged numerous corporate mergers and pushed for greater accountability among powerful companies.
During her leadership, Khan spearheaded high-profile lawsuits against Amazon, launched investigations into Microsoft, and blocked major deals, including Kroger’s planned $25 billion acquisition of Albertsons. Her efforts often focused on protecting consumers and workers from potential harms posed by dominant corporations.
Khan, the youngest person to lead the FTC, first gained recognition in 2017 for her work criticising Amazon’s market practices. She argued that tech giants exploited outdated antitrust laws, allowing them to sidestep scrutiny. Her aggressive approach divided opinion, with courts striking down some of her policies, including a proposed ban on noncompete clauses.
Following Khan’s exit, the FTC faces a temporary deadlock with two Republican and two Democratic commissioners. Republican Andrew Ferguson has assumed the role of chair, and a Republican majority is expected once the Senate approves Mark Meador, a pro-enforcement nominee, to complete the five-member commission.
Younger members of Generation Z are turning to ChatGPT for schoolwork, with a new Pew Research Center survey revealing that 26% of US teens aged 13 to 17 have used the AI-powered chatbot for homework, double the share recorded in 2023. The figure highlights the growing reliance on AI tools in education. The survey also showed mixed views among teens about its use: 54% found it acceptable for research, while smaller proportions endorsed it for solving maths problems (29%) or writing essays (18%).
Experts have raised concerns about the limitations of ChatGPT in academic contexts. Studies indicate the chatbot struggles with accuracy in maths and certain subject areas, such as social mobility and African geopolitics. Research also shows varying impacts on learning outcomes, with Turkish students who used ChatGPT performing worse on a maths test than peers who didn’t. German students, while finding research materials more easily, synthesised information less effectively when using the tool.
Educators remain cautious about integrating AI into classrooms. A quarter of public K-12 teachers surveyed by Pew believed AI tools like ChatGPT cause more harm than good in education, and a separate RAND Corporation study found that only 18% of K-12 teachers actively use AI in their teaching. These disparities in effectiveness, together with the tool’s limitations, underscore the need for careful consideration of its role in learning environments.
The United States Federal Trade Commission (FTC) has referred a complaint about Snap Inc’s AI-powered chatbot, My AI, to the Department of Justice (DOJ) for further investigation. The FTC alleges the chatbot caused harm to young users, though specific details about the alleged harm remain undisclosed.
Snap Inc defended its chatbot, asserting that My AI operates under rigorous safety and privacy measures, and criticised the FTC for lacking concrete evidence to support its claims. Despite the company’s reassurances, the FTC stated it had uncovered indications of potential legal violations.
The announcement weighed on Snap’s stock, with shares dropping 5.2% to close at $11.22 on Thursday. The FTC noted that publicising the complaint’s referral to the DOJ was in the public interest, underscoring the gravity of the allegations.