UK teachers embrace AI for future education

Teachers in Stoke-on-Trent gathered for a full-day event to discuss the role of AI in education. Organised by the Good Future Foundation, the session brought together more than 40 educators, joined by Stoke-on-Trent South MP Allison Gardner, to explore how AI can enhance teaching and learning. Gardner emphasised the government’s belief that AI represents a ‘generational opportunity’ for education in the UK.

The event highlighted both the promise and the challenges of integrating AI into UK schools. Attendees shared ideas on using AI to improve communication, particularly with families who speak English as an additional language, and to streamline access to school resources through automated chatbots. While the potential benefits are clear, many teachers expressed concerns about the risks associated with new technology.

Daniel Emmerson, executive director of the Good Future Foundation, stressed the importance of supporting educators in understanding and implementing AI, explaining that the technology can help prepare students for a future shaped by it. Schools such as Belgrave St Bartholomew’s Academy are already leading the way, using AI to improve lessons and ready pupils for the opportunities it will bring.


Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

The regulator has previously fined platforms such as X (formerly Twitter) and Telegram for failing to meet reporting requirements; both companies plan to appeal.


Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.


UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.


UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from users aged 13 to 17 to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.


Apple unveils age verification tech amid legal debates

Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents can share only a general ‘age range’ with app developers, retaining control over what data is shared.
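
Conceptually, the data-minimisation pattern behind the feature can be sketched in a few lines: the exact birth date stays within the family account, and a requesting app receives only a coarse bucket. The snippet below is a hypothetical, platform-neutral illustration of that idea in Python, not Apple’s actual API; the function names and bucket boundaries are assumptions made for the example.

```python
from datetime import date

# Hypothetical illustration of the 'age range' idea: the precise birth date
# never leaves the family account, and an app only ever receives a coarse
# bucket. The bucket boundaries below are invented for this example.

AGE_RANGES = [
    (0, 12, "child"),
    (13, 15, "young teen"),
    (16, 17, "older teen"),
    (18, 150, "adult"),
]

def age_in_years(birth_date: date, today: date) -> int:
    years = today.year - birth_date.year
    # Subtract one if this year's birthday has not happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def shared_age_range(birth_date: date, today: date) -> str:
    """Return only the coarse range that an app developer would receive."""
    age = age_in_years(birth_date, today)
    for low, high, label in AGE_RANGES:
        if low <= age <= high:
            return label
    return "unknown"

# The app sees 'young teen'; the birth date itself is never shared.
print(shared_age_range(date(2011, 6, 1), today=date(2025, 3, 10)))
```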

This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concerns about collecting sensitive personal data for such verifications, arguing that it would force users to hand over unnecessary details to apps that do not need them.

The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.


Europol busts criminal group distributing AI-generated child abuse content

Europol announced on Friday that two dozen people have been arrested for their involvement in a criminal network distributing AI-generated images of child sexual abuse. This operation marks one of the first of its kind, highlighting concerns over the use of AI in creating illegal content. Europol noted that there is currently a lack of national legislation addressing AI-generated child abuse material.

The primary suspect, a Danish national, operated an online platform where he distributed the AI-generated content he created. Users from around the world paid a ‘symbolic online payment’ to access the material. The case has raised significant concerns about the potential misuse of AI tools for such criminal purposes.

The ongoing operation, which involves authorities from 19 countries, resulted in 25 arrests, with most occurring simultaneously on Wednesday under the leadership of Danish authorities. Europol indicated that more arrests are expected in the coming weeks as the investigation continues.


Bluesky teams up with IWF to tackle harmful content

Bluesky, the rapidly growing decentralised social media platform, has partnered with the UK-based Internet Watch Foundation (IWF) to combat the spread of child sexual abuse material (CSAM). As part of the collaboration, Bluesky will gain access to the IWF’s tools, which include a list of websites containing CSAM and a catalogue of digital fingerprints, or ‘hashes,’ that identify abusive images. This partnership aims to reduce the risk of users encountering illegal content while helping to keep the platform safe from such material.
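
The hash-list mechanism behind tools like these can be pictured with a short, hypothetical sketch: an uploaded image is fingerprinted and checked against a set of known hashes before it reaches other users. The Python below is a minimal illustration only, with invented names and a plain cryptographic digest standing in for the perceptual hashing that real CSAM hash lists rely on; it is not Bluesky’s or the IWF’s actual tooling.

```python
import hashlib

# Hypothetical sketch of hash-list matching. Real deployments such as the
# IWF hash list use perceptual fingerprints so that re-encoded or slightly
# altered images still match; a plain SHA-256 digest, used here for
# simplicity, only catches byte-identical files.

KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder entry
}

def fingerprint(image_bytes: bytes) -> str:
    """Return the hex digest used as this image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def handle_upload(image_bytes: bytes) -> str:
    """Block and report known material; pass everything else to normal moderation."""
    if fingerprint(image_bytes) in KNOWN_HASHES:
        return "blocked_and_reported"
    return "accepted"

print(handle_upload(b"example image bytes"))  # 'accepted'
```

One advantage of this design is that a platform only needs to hold fingerprints, never the abusive images themselves, in order to recognise previously identified material.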

Bluesky’s head of trust and safety, Aaron Rodericks, welcomed the partnership as a significant step in protecting users from harmful content. With the platform’s rapid growth—reaching over 30 million users by the end of last month—the move comes at a crucial time. In November, Bluesky announced plans to expand its moderation team to address the rise in harmful material following the influx of new users.

The partnership also highlights the growing concern over online child sexual abuse material. The IWF reported record levels of harmful content last year, with over 291,000 web pages removed from the internet. The foundation’s CEO, Derek Ray-Hill, stressed the urgency of tackling the crisis, calling for a collective effort from governments, tech companies, and society.


Australia slaps A$1 million fine on Telegram

Australia’s eSafety Commission has fined messaging platform Telegram A$1 million ($640,000) for failing to respond promptly to questions regarding measures it took to prevent child abuse and extremist content. The Commission had asked social media platforms, including Telegram, to provide details on their efforts to combat harmful content. Telegram missed the May 2024 deadline, submitting its response in October, which led to the fine.

eSafety Commissioner Julie Inman Grant emphasised the importance of timely transparency and adherence to Australian law. Telegram, however, disagreed with the penalty, stating that it had fully responded to the questions, and plans to appeal the fine, which it claims was solely due to the delay in response time.

The fine comes amid increasing global scrutiny of Telegram, with growing concerns over its use by extremists. Australia’s spy agency recently noted that a significant portion of counter-terrorism cases involved youth, highlighting the increasing risk posed by online extremist content. If Telegram does not comply with the penalty, the eSafety Commission could pursue further legal action.


Australian kids overlook social media age checks

A recent report by Australia’s eSafety regulator reveals that children in the country are finding it easy to bypass age restrictions on social media platforms. The findings come ahead of a government ban, set to take effect at the end of 2025, that will prevent children under the age of 16 from using these platforms. The report highlights data from a national survey on social media use among 8 to 15-year-olds and feedback from eight major services, including YouTube, Facebook, and TikTok.

The report shows that 80% of Australian children aged 8 to 12 were using social media in 2024, with YouTube, TikTok, Instagram, and Snapchat being the most popular platforms. While most platforms, except Reddit, require users to enter their date of birth during sign-up, the report indicates that these systems rely on self-declaration, which can be easily manipulated. Against that backdrop, 95% of teens under 16 were found to be active on at least one of the platforms surveyed.
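
Self-declaration of the kind described in the report amounts to trusting whatever date a user types in, which is why it is so easily manipulated. The Python sketch below is a deliberately simplified, hypothetical sign-up check, with invented names, included only to show that the ‘verification’ reduces to arithmetic on an unverified value.

```python
from datetime import date

MINIMUM_AGE = 13  # typical self-declared minimum on most of the surveyed platforms

def self_declared_age_check(claimed_birth_date: date, today: date) -> bool:
    """Return True if the *claimed* date of birth meets the minimum age.

    Nothing here verifies the claim: a child who simply types an earlier
    birth year passes immediately, which is the weakness the report describes.
    """
    age = today.year - claimed_birth_date.year
    if (today.month, today.day) < (claimed_birth_date.month, claimed_birth_date.day):
        age -= 1
    return age >= MINIMUM_AGE

# Any date far enough in the past passes, regardless of the user's real age.
print(self_declared_age_check(date(2005, 1, 1), today=date(2025, 3, 10)))  # True
```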

While some platforms, such as TikTok, Twitch, and YouTube, have introduced tools to proactively detect underage users, others have not fully implemented age verification technologies. YouTube remains exempt from the upcoming ban, allowing children under 13 to use the platform with parental supervision. However, eSafety Commissioner Julie Inman Grant stressed that there is still significant work needed to enforce the government’s minimum age legislation effectively.

The report also noted that most of the services surveyed had conducted research to improve their age verification processes. However, as the law approaches, there are increasing calls for app stores to take greater responsibility for enforcing age restrictions.
