YouTube challenges TikTok with AI video feature

YouTube Shorts has rolled out a new capability in its Dream Screen feature, enabling users to create AI-generated video backgrounds. Previously limited to image generation, this update harnesses Google DeepMind’s AI video-generation model, Veo, to produce 1080p cinematic-style video clips. Creators can enter text prompts, such as ‘magical forest’ or ‘candy landscape,’ select an animation style, and receive a selection of dynamic video backdrops.

Once a background is chosen, users can film their Shorts with the AI-generated video playing behind them. This feature offers creators unique storytelling opportunities, such as setting videos in imaginative scenes or crafting engaging animated openings. In future updates, YouTube plans to let users generate stand-alone six-second video clips using Dream Screen.

The feature, available in the US, Canada, Australia, and New Zealand, distinguishes YouTube Shorts from TikTok, which currently only offers AI-generated background images. By providing tools for creating custom video backdrops, YouTube aims to cement its position as a leader in short-form video innovation.

Elon Musk criticises Australia’s plan to ban social media for kids

Elon Musk has spoken out against Australia’s proposed law to ban social media use for children under 16, calling it a “backdoor way to control access to the Internet by all Australians.” The legislation, introduced by Australia’s centre-left government, includes fines of up to A$49.5 million ($32 million) for systemic breaches by platforms and aims to enforce an age-verification system.

Australia’s plan is among the world’s strictest, banning underage access without exceptions for parental consent or existing accounts. By contrast, countries like France and the US allow limited access for minors with parental approval or data protections for children. Critics argue Australia’s proposal could set a precedent for tougher global controls.

Musk, who has previously clashed with Prime Minister Anthony Albanese’s government, is a vocal advocate for free speech. His platform, X, has faced tensions with Australia, including a legal challenge to content regulation orders earlier this year. Albanese has called Musk an “arrogant billionaire,” underscoring their rocky relationship.

Snap challenges New Mexico lawsuit alleging child exploitation risks

Snap Inc., the parent company of Snapchat, has filed a motion to dismiss a New Mexico lawsuit accusing it of enabling child sexual exploitation on its platform. The lawsuit, brought by Attorney General Raul Torrez in September, claims Snapchat exposed minors to abuse and failed to warn parents about sextortion risks. Snap rejected the allegations as ‘patently false’ and argued that the state’s decoy investigation misrepresented key facts.

The lawsuit comes amid a broader push by US lawmakers to hold tech firms accountable for harm to minors. Investigators claimed a decoy account for a 14-year-old girl received explicit friend suggestions despite no user activity. Snap countered that the account actively sent friend requests, disputing the state’s findings.

Snap further argued that the claims are barred by Section 230 of the 1996 Communications Decency Act, which shields platforms from liability for user-generated content. It also invoked the First Amendment, arguing that the company cannot be compelled to issue warnings about subjective risks without clear guidelines.

Defending its safety efforts, Snap highlighted its increased investment in trust and safety teams and collaboration with law enforcement. The company said it remains committed to protecting users while contesting what it views as an unjustified legal challenge.

Amazon faces EU probe over product favouritism, sources report

Amazon is likely to face an EU investigation next year into allegations that it favours its own brand products on its online marketplace, according to sources familiar with the matter. If found in violation of the EU’s Digital Markets Act (DMA), Amazon could face a fine of up to 10% of its global revenue.

The potential investigation will be overseen by Teresa Ribera, the incoming EU antitrust chief, who will take office next month. Amazon has denied any wrongdoing, stating it complies with the DMA and treats all products equally in its ranking algorithms. The company has been in ongoing discussions with the European Commission about its practices.

The DMA, implemented last year, aims to curb the dominance of Big Tech by prohibiting gatekeepers from favouring their own products and services. Alongside Amazon, other tech giants such as Apple, Google, and Meta are also under scrutiny. Amazon shares fell 3% following reports of the possible investigation.

Irish data authority seeks EU guidance on AI privacy under GDPR

The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.

The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.

The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.

Study finds 75% of news posts shared without reading

A new study has revealed that 75% of news-related social media posts are shared without being read, highlighting the rapid spread of unverified information. Researchers from US universities analysed over 35 million Facebook posts from 2017 to 2020, focusing on key moments in American politics. The study found that many users share links based on headlines, summaries, or the number of likes a post has received, without ever clicking to read the full article.
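
The headline figure boils down to a simple ratio: of all share events, how many had no corresponding click on the underlying article? Below is a minimal sketch of that calculation in Python, assuming a hypothetical per-post dataset with `shares` and `clicks` counts; the column names and the shares-minus-clicks proxy are illustrative, not the study’s actual methodology.

```python
import pandas as pd

# Hypothetical per-post engagement data; the schema is illustrative,
# not the one used in the Nature Human Behaviour study.
posts = pd.DataFrame({
    "post_id": [1, 2, 3, 4],
    "shares":  [120, 45, 300, 80],
    "clicks":  [10, 45, 60, 0],
})

# Treat any share beyond the number of recorded clicks as a
# "shared without reading" event (a rough proxy).
shares_without_click = (posts["shares"] - posts["clicks"]).clip(lower=0)
rate = shares_without_click.sum() / posts["shares"].sum()

print(f"Estimated share-without-reading rate: {rate:.0%}")  # ~79% here
```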

The study, published in Nature Human Behaviour, suggests this behaviour may be driven by information overload and the fast-paced nature of social media. Users often feel pressured to share content quickly without fully processing it, fuelling the spread of misinformation. The research also found that political partisans are more likely to share news without reading it, though a few highly active partisan accounts may drive part of this effect.

To mitigate the spread of misinformation, the authors suggest social media platforms implement warnings or alerts to inform users of the risks involved in sharing content without reading it. This would help users make more informed decisions before reposting news articles.

Reddit resolves US platform outage

Reddit has restored access to its platform following a software bug that disrupted services for tens of thousands of US users. The outage, which began at around 3 pm ET, affected many who rely on the platform for social interaction and information.

Reports of issues peaked at around 49,000 users, according to monitoring service Downdetector. By 4:32 pm ET, the number of affected users dropped significantly to just over 14,500 as the platform began recovering.

The company acknowledged the issue stemmed from a recent update. A spokesperson confirmed, ‘A fix is in place, and we’re ramping back up.’ Operations were progressively restored, easing concerns among users.

The incident underscores the difficulty of maintaining seamless service on large social media platforms, and Reddit’s swift recovery highlights the value of quick, efficient response strategies.

Australia introduces groundbreaking bill to ban social media for children under 16

Australia’s government introduced a bill to parliament aiming to ban social media use for children under 16, with potential fines of up to A$49.5 million ($32 million) for platforms that fail to comply. The law would enforce age verification, possibly using biometrics or government IDs, setting the highest global age limit for social media use without exemptions for parental consent or existing accounts.

Prime Minister Anthony Albanese described the reforms as a response to the physical and mental health risks social media poses, particularly for young users. Harmful content, such as material promoting body image issues among girls and misogynistic content aimed at boys, has fuelled the government’s push for strict measures. Messaging services, gaming, and educational platforms like Google Classroom and Headspace would remain accessible under the proposal.

While opposition parties support the bill, independents and the Greens are calling for more details. Communications Minister Michelle Rowland emphasised that the law places responsibility on platforms, not parents or children, to implement robust age-verification systems. Privacy safeguards, including mandatory destruction of collected data, are also part of the proposed legislation. Australia’s policy would be among the world’s strictest, surpassing similar efforts in France and the US.

AI chatbots in healthcare: Balancing potential and privacy concerns amidst regulatory gaps

Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.

Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.

X owner Elon Musk recently encouraged users to upload medical imagery to Grok, his platform’s AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.

OpenAI and Common Sense Media launch AI training for teachers

OpenAI, in partnership with Common Sense Media, has introduced a free training course aimed at helping teachers understand AI and prompt engineering. The course is designed to equip educators with the skills to use ChatGPT effectively in classrooms, including creating lesson content and streamlining administrative tasks.
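
To make the idea concrete, here is a minimal sketch of the kind of prompt-engineering pattern such a course might cover, written with the openai Python SDK; the model name, prompt wording, and lesson topic are illustrative assumptions, not material from the course itself.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A common prompt-engineering pattern: give the model a clear role,
# an explicit task, and concrete constraints on the output.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {
            "role": "system",
            "content": "You are a teaching assistant who drafts lesson plans.",
        },
        {
            "role": "user",
            "content": (
                "Draft a 45-minute Year 8 science lesson on photosynthesis, "
                "with a warm-up question, two activities, and an exit quiz."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```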

The launch comes as OpenAI steps up its efforts to promote positive educational uses of ChatGPT, which became widely popular after its release in November 2022. While the tool’s potential for aiding students has been recognised, its use has also sparked concerns about cheating and plagiarism.

Leah Belsky, formerly of Coursera and now leading OpenAI’s education efforts, emphasised the importance of teaching both students and teachers to use AI responsibly. Belsky noted that student adoption of ChatGPT is high, with many parents viewing AI literacy as crucial for future careers. The training is available on Common Sense Media’s website, marking the first of many initiatives in this partnership.