OpenAI expands AI tools with text-to-video feature

OpenAI has launched its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos up to 20 seconds long in various resolutions and aspect ratios.

The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.

The company emphasised safeguards against misuse, such as blocking harmful content like child exploitation and deepfake abuse. It also plans to gradually expand features, including uploads of people, as it enhances protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.

FTC targets data brokers over privacy concerns

Data brokers Mobilewalla and Gravy Analytics have agreed to stop using sensitive location data following a settlement with the US Federal Trade Commission (FTC). The agreement addresses concerns about tracking individuals’ religious beliefs, political leanings, and pregnancy status through mobile device data.

The settlement represents the first instance of banning the collection of location data through online advertising auctions. The FTC accused the companies of unfair practices, stating that Mobilewalla gathered information without consent from ad auction platforms. Such platforms allow advertisers to bid on specific audiences but can inadvertently expose consumers to privacy risks.

Gravy Analytics, owned by Unacast, sold location data to government contractors, prompting constitutional concerns from FTC commissioners. Mobilewalla disputed the allegations but stated the agreement allows it to continue offering insights while respecting privacy. Both companies committed to halting sensitive data usage and introducing opt-out options for consumers.

FTC Chair Lina Khan highlighted the broader risks of targeted advertising, warning that Americans’ sensitive data is at risk of misuse. The settlement is part of the Biden administration’s effort to regulate data brokers and strengthen privacy protections, as outlined by proposed rules from the US Consumer Financial Protection Bureau.

Australia pushes for new rules on AI in search engines

Australia's competition watchdog has called for a review of efforts to ensure more choice for internet users, citing Google's dominance in the search engine market and the failure of its competitors to capitalise on the rise of AI. A report by the Australian Competition and Consumer Commission (ACCC) highlighted concerns about the growing influence of Big Tech, particularly Google and Microsoft, as they integrate generative AI into their search services. This raises questions about the accuracy and reliability of AI-generated search results.

While the use of AI in search engines is still in its early stages, the ACCC warns that large tech companies' financial strength and market presence give them a significant advantage. The commission expressed concerns that AI-driven search could lead to misinformation, as consumers may perceive AI-generated responses as more useful even when they are less accurate. In response, Australia is pushing for new regulations, including laws to prevent anti-competitive behaviour and improve consumer choice.

The Australian government has already introduced several measures targeting tech giants, such as requiring social media platforms to pay for news content and restricting access for children under 16. A proposed new law could impose hefty fines on companies that suppress competition. The ACCC has called for service-specific codes to address data advantages and ensure consumers have more freedom to switch between services. The inquiry is expected to close by March next year.

Australia begins trial of teen social media ban

Australia's government is conducting a world-first trial to enforce its national social media ban for children under 16, focusing on age-checking technology. The trial, set to begin in January and run through March, will involve around 1,200 randomly selected Australians. It will help guide the development of effective age verification methods, as platforms like Meta, X (formerly Twitter), TikTok, and Snapchat must prove they are taking 'reasonable steps' to keep minors off their services or face fines of up to A$49.5 million ($32 million).

The trial is overseen by the Age Check Certification Scheme and will test several age-checking techniques, such as video selfies, document uploads for verification, and email cross-checking. Although platforms like YouTube are exempt, the trial is seen as a crucial step for setting a global precedent for online age restrictions, which many countries are now considering due to concerns about youth mental health and privacy.

The trial’s outcomes could influence how other nations approach enforcing age restrictions, despite concerns from some lawmakers and tech companies about privacy violations and free speech. The government has responded by ensuring that no personal data will be required without alternatives. The age-check process could significantly shape global efforts to regulate social media access for children in the coming years.

Transparency issues plague UK mobile games

A recent investigation revealed that most top-selling mobile games in the UK fail to disclose the presence of loot boxes in their advertisements, despite regulations mandating transparency. Loot boxes, which provide randomised in-game items often obtained through payments, have drawn criticism for fostering addictive behaviours and targeting vulnerable groups, including children. Of the top 45 highest-grossing games analysed on Google Play, only two clearly mentioned loot boxes in their advertisements.

The UK Advertising Standards Authority, which oversees compliance, acknowledges the issue and promises further action but has faced criticism for its slow and limited enforcement. Critics argue that lax self-regulation within the gaming industry enables companies to prioritise profits over player well-being, particularly as loot boxes reportedly generate $15 billion annually.

Advocacy groups and researchers have voiced alarm over these findings, warning of long-term consequences. Zoë Osmond of GambleAware emphasised the risks of exposing children to gambling-like features in games, which could lead to harmful habits later in life. The gaming industry has so far resisted stricter government intervention, despite mounting evidence of non-compliance and harm.

Australian social media ban sparked by politician’s wife’s call to action

Australia has passed a landmark law banning children under 16 from using social media, following a fast-moving push led by South Australian Premier Peter Malinauskas. The law, which takes effect in November 2025, aims to protect young people from the harmful effects of social media, including mental health issues linked to cyberbullying and body image problems. The bill enjoyed widespread support, with a government survey showing 77% of Australians backing the measure. However, it has sparked significant opposition from tech companies and privacy advocates, who argue that the law is rushed and could push young users to more dangerous parts of the internet.

The push for the national ban gained momentum after Malinauskas's state-level initiative to restrict social media access for children under 14 in September. This led to a broader federal response, with Prime Minister Anthony Albanese's government introducing a nationwide version of the policy. The legislation eliminates parental discretion: no child under 16 will be able to use social media, and platforms that fail to enforce the rules will face fines. This move contrasts with policies in countries like France and Florida, where minors can access social media with parental permission.

While the law has garnered support from most of Australia’s political leaders, it has faced strong criticism from social media companies like Meta and TikTok. These platforms warn that the law could drive teens to hidden corners of the internet and that the rushed process leaves many questions unanswered. Despite the backlash, the law passed with bipartisan support, and a trial of age-verification technology will begin in January to prepare for its full implementation.

The debate over the law highlights growing concerns worldwide about the impact of social media on young people. Although some critics argue that the law is an overreach, others believe it is a necessary step to protect children from online harm. With the law now in place, Australia has set a precedent that could inspire other countries grappling with similar issues.

Australia’s new social media ban faces backlash from Big Tech

Australia’s new law banning children under 16 from using social media has sparked strong criticism from major tech companies. The law, passed late on Thursday, targets platforms like Meta’s Instagram and Facebook, as well as TikTok, imposing fines of up to A$49.5 million for allowing minors to log in. Tech giants, including TikTok and Meta, argue that the legislation was rushed through parliament without adequate consultation and could have harmful unintended consequences, such as driving young users to less visible, more dangerous parts of the internet.

The law was introduced after a parliamentary inquiry into the harmful effects of social media on young people, with testimony from parents of children who had been bullied online. While the Australian government had warned tech companies about the impending legislation for months, the bill was fast-tracked in a chaotic final session of parliament. Critics, including Meta, have raised concerns about the lack of clear evidence linking social media to mental health issues and question the rushed process.

Despite the backlash, the law has strong political backing, and the government is set to begin a trial of enforcement methods in January, with the full ban expected to take effect by November 2025. Australia’s long-standing tensions with major US-based tech companies, including previous legislation requiring platforms to pay for news content, are also fueling the controversy. As the law moves forward, both industry representatives and lawmakers face challenges in determining how it will be practically implemented.

Mixed reactions as Australia bans social media for minors

Australia's recent approval of a social media ban for children under 16 has sparked mixed reactions nationwide. While the government argues that the law sets a global benchmark for protecting youth from harmful online content, critics, including tech giants like TikTok, warn that it could push minors to darker corners of the internet. The law, which will fine platforms like Meta's Facebook and Instagram, as well as TikTok, up to A$49.5 million if they fail to enforce it, takes effect one year after a trial period begins in January.

Prime Minister Anthony Albanese emphasised the importance of protecting children's physical and mental health, citing the harmful impact of social media on body image and exposure to misogynistic content. Although a government survey found 77% of Australians back the measure, public opinion remains divided. Some, like Sydney resident Francesca Sambas, approve of the ban, citing concerns over inappropriate content, while others, like Shon Klose, view it as an overreach that undermines democracy. Young people, however, expressed their intent to bypass the restrictions, with 11-year-old Emma Wakefield saying she would find ways to access social media secretly.

This ban positions Australia as the first country to impose such a strict regulation, ahead of other countries like France and several US states that have restrictions based on parental consent. The swift passage of the law, which was fast-tracked through parliament, has drawn criticism from social media companies, which argue the law was rushed and lacked proper scrutiny. TikTok, in particular, warned that the law could worsen risks to children rather than protect them.

The move has also raised concerns about Australia’s relationship with the United States, as figures like Elon Musk have criticised the law as a potential overreach. However, Albanese defended the law, drawing parallels to age-based restrictions on alcohol, and reassured parents that while enforcement may not be perfect, it’s a necessary step to protect children online.

Australia enacts groundbreaking law banning under-16s from social media

Australia has approved a groundbreaking law banning children under 16 from accessing social media, following a contentious debate. The new regulation targets major tech companies like Meta, TikTok, and Snapchat, which will face fines of up to A$49.5 million if they allow minors to log in. Starting with a trial period in January, the law is set to take full effect in 2025. The move comes amid growing global concerns about the mental health impact of social media on young people, with several countries considering similar restrictions.

The law, which marks a significant political win for Prime Minister Anthony Albanese, has received widespread public support, with 77% of Australians backing the ban. However, it has faced opposition from privacy advocates, child rights groups, and social media companies, which argue the law was rushed through without adequate consultation. Critics also warn that it could inadvertently harm vulnerable groups, such as LGBTQIA or migrant teens, by cutting them off from supportive online communities.

Despite the backlash, many parents and mental health advocates support the ban, citing concerns about social media’s role in exacerbating youth mental health issues. High-profile campaigns and testimonies from parents of children affected by cyberbullying have helped drive public sentiment in favour of the law. However, some experts warn the ban could have unintended consequences, pushing young people toward more dangerous corners of the internet where they can avoid detection.

The law also has the potential to strain relations between Australia and the United States, as tech companies with major US ties, including Meta and X, have voiced concerns about its implications for internet freedom. While these companies have pledged to comply, there remain significant questions about how the law will be enforced and whether it can achieve its intended goals without infringing on privacy or digital rights.

UK social media platforms criticised over safety failures

Nearly a quarter of children aged 8-17 in the UK lie about their age to access adult social media platforms, according to a new Ofcom report. The media regulator criticised current verification processes as insufficient and warned tech companies they face heavy fines if they fail to improve safety measures under the Online Safety Act, which takes effect in 2025.

The law will require platforms to implement ‘highly effective’ age assurance to prevent underage users from accessing adult content. Ofcom’s findings highlight the risks children face from harmful material online, sparking concerns from advocates like the Molly Rose Foundation, which warns that tech companies are not enforcing their own rules.

Some social media platforms, including TikTok, claim they are enhancing safety measures with machine learning and other innovations. However, BBC investigations and feedback from teenagers suggest that bypassing current systems remains alarmingly easy, with no ID verification required for account setup. Calls for stricter regulation continue as online safety concerns grow.