Meta cracks down on misinformation in Australia

Meta Platforms has announced new measures to combat misinformation and deepfakes in Australia ahead of the country’s upcoming national election.

The company’s independent fact-checking programme, run in partnership with Agence France-Presse and the Australian Associated Press, will detect and limit the spread of misleading content, while Meta will also remove any material that could incite violence or interfere with voting.

Deepfakes, AI-generated media designed to appear real, will also face stricter scrutiny. Meta stated that any content violating its policies would be removed or labelled as ‘altered’ to reduce its visibility.

Users sharing AI-generated content will be encouraged to disclose its origin, a measure intended to improve transparency.

Meta’s Australian policy follows similar strategies used in elections across India, the UK and the US.

The company is also navigating regulatory challenges in the country, including a proposed levy on big tech firms profiting from local news content and new requirements to enforce a ban on social media users under 16 by the end of the year.

For more information on these topics, visit diplomacy.edu.

Security Checkup arrives on TikTok to boost user account safety

TikTok has launched a new Security Checkup tool, offering users a simplified way to manage their account safety.

The dashboard provides an easy-to-navigate hub where users can review and update security settings such as login methods, two-step verification, and device access.

Designed to be user-friendly, it aims to encourage proactive security habits without overwhelming people with technical details.

The security portal functions much like the tools offered by major tech companies such as Google and Meta, bringing TikTok in line with wider industry practice on account safety.

Features include passkey authentication for password-free logins, alerts for suspicious activity, and the ability to check which devices are logged into an account.
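
Passkeys replace passwords with a cryptographic challenge that the user’s device signs after a biometric or PIN check. As a rough illustration only, since TikTok has not published its implementation, a browser-based passkey login built on the standard WebAuthn API might look like the sketch below; the /auth/* endpoints and payload shape are hypothetical placeholders.

```typescript
// Minimal sketch of a passkey (WebAuthn) login flow in a browser.
// It illustrates the general mechanism only; the /auth/* endpoints are
// hypothetical and this is not TikTok's actual implementation.

async function signInWithPasskey(): Promise<void> {
  // 1. Fetch a one-time challenge from the server (hypothetical endpoint).
  const res = await fetch("/auth/passkey/challenge", { method: "POST" });
  const { challenge } = await res.json(); // assume base64-encoded bytes

  // 2. Ask the platform authenticator (biometrics, PIN, security key)
  //    to sign the challenge. navigator.credentials.get is the standard
  //    WebAuthn browser API; no password is ever typed or transmitted.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      userVerification: "required", // force a biometric or PIN check
      timeout: 60_000,
    },
  })) as PublicKeyCredential | null;
  if (!assertion) throw new Error("Passkey sign-in was cancelled");

  // 3. Return the signed assertion for server-side verification
  //    (hypothetical endpoint; real code would serialise the full
  //    authenticator response, which is elided here).
  await fetch("/auth/passkey/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ credentialId: assertion.id }),
  });
}
```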

TikTok hopes the tool will make it easier for users to secure their profiles and prevent unauthorised access.

While the Security Checkup is a practical addition, it also arrives amid TikTok’s ongoing struggles in the US, where concerns over data privacy persist.

The company’s head of global security, Kim Albarella, described the feature as a ‘powerful new tool’ that allows users to ‘take control’ of their account safety with confidence.

Accessing the tool is straightforward: users can find it in the app’s ‘Settings and privacy’ menu under ‘Security & permissions’.

For more information on these topics, visit diplomacy.edu.

California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by requiring businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to continue defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

For more information on these topics, visit diplomacy.edu.

UK watchdog launches enforcement programme targeting file-sharing services

The UK’s internet watchdog, Ofcom, has launched a new enforcement programme under the Online Safety Act (OSA), targeting storage and file-sharing services due to concerns over the sharing of child sexual abuse material (CSAM).

The regulator has identified these services as particularly vulnerable to misuse for distributing CSAM and will assess the safety measures in place to prevent such activities.

As part of the enforcement programme, Ofcom has contacted a number of file-storage and sharing services, warning them that formal information requests will be issued soon.

These requests will require the services to submit details on the measures they have implemented or plan to introduce to combat CSAM, along with risk assessments related to illegal content.

Failure to comply with the requirements of the OSA could result in substantial penalties for these companies, with fines reaching up to 10% of their global annual turnover.

Ofcom’s crackdown highlights the growing responsibility of online services to prevent illegal content from being shared on their platforms.

For more information on these topics, visit diplomacy.edu.

UK teachers embrace AI for future education

Teachers in Stoke-on-Trent gathered for a full-day event to discuss the role of AI in education. Organised by the Good Future Foundation, the session saw more than 40 educators, including Stoke-on-Trent South MP Allison Gardner, explore how AI can enhance teaching and learning. Gardner emphasised the government’s belief that AI represents a ‘generational opportunity’ for education in the UK.

The event highlighted both the promise and the challenges of integrating AI into UK schools. Attendees shared ideas on using AI to improve communication, particularly with families who speak English as an additional language, and to streamline access to school resources through automated chatbots. While the potential benefits are clear, many teachers expressed concerns about the risks associated with new technology.

Daniel Emmerson, executive director of the Good Future Foundation, stressed the importance of supporting educators in understanding and implementing AI. He explained that AI can help prepare students for a future dominated by this technology. Meanwhile, schools like Belgrave St Bartholomew’s Academy are already leading the way in using AI to improve lessons and prepare students for the opportunities AI will bring.

For more information on these topics, visit diplomacy.edu.

Google acknowledges its AI is being used to create harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commissioner.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commissioner described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

The regulator has previously fined platforms such as X (formerly Twitter) and Telegram for failing to meet reporting requirements; both companies plan to appeal.

For more information on these topics, visit diplomacy.edu.

Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.

For more information on these topics, visit diplomacy.edu.

UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.

For more information on these topics, visit diplomacy.edu.

UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023, when TikTok was fined £12.7 million for mishandling the data of children under 13.

Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

For more information on these topics, visit diplomacy.edu.

Apple unveils age verification tech amid legal debates

Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents can share a general ‘age range’ with app developers, putting them in control of what data is shared.
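
The underlying idea is data minimisation: an app receives a coarse age bracket rather than an exact birthday. The sketch below illustrates how an app might gate features on such a signal; the type and function names are hypothetical and do not represent Apple’s actual API.

```typescript
// Hypothetical sketch of the data-minimisation idea behind 'age assurance':
// the app sees only a coarse, parent-declared age range, never a birthday
// or government ID. All names here are illustrative, not Apple's API.

type AgeRange = "under13" | "13to15" | "16to17" | "18plus";

interface AgeSignal {
  range: AgeRange;             // coarse bucket declared by a parent
  declaredByGuardian: boolean; // true when a parent set up the account
}

// Feature gating uses only the range; no exact age is ever stored.
function canShowSocialFeatures(signal: AgeSignal): boolean {
  return signal.range === "16to17" || signal.range === "18plus";
}

// Example: a parent-declared 13-15 account gets the restricted experience.
const child: AgeSignal = { range: "13to15", declaredByGuardian: true };
console.log(canShowSocialFeatures(child)); // false
```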

This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concern about collecting sensitive personal data for such verification, arguing it would force users to hand over unnecessary details to apps that do not need them.

The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.

For more information on these topics, visit diplomacy.edu.