Study finds 75% of news posts shared without reading

A new study has revealed that 75% of news-related social media posts are shared without being read, highlighting the rapid spread of unverified information. Researchers from US universities analysed over 35 million Facebook posts from 2017 to 2020, focusing on key moments in American politics. The study found that many users share links based on headlines, summaries, or the number of likes a post has received, without ever clicking to read the full article.

The study, published in Nature Human Behaviour, suggests this behaviour may be driven by information overload and the fast-paced nature of social media. Users often feel pressured to share content quickly without fully processing it, leading to the spread of misinformation. The research also pointed out that political partisans are more likely to share news without reading, though this could also be influenced by a few highly active, partisan accounts.

To mitigate the spread of misinformation, the authors suggest social media platforms implement warnings or alerts to inform users of the risks involved in sharing content without reading it. This would help users make more informed decisions before reposting news articles.

Reddit resolves US platform outage

Reddit has restored access to its platform following a software bug that disrupted services for tens of thousands of US users. The outage, which began at around 3 pm ET, affected many users who rely on the platform for social interaction and information.

Reports of issues peaked at around 49,000 users, according to monitoring service Downdetector. By 4:32 pm ET, the number of affected users dropped significantly to just over 14,500 as the platform began recovering.

The company acknowledged the issue stemmed from a recent update. A spokesperson confirmed, ‘A fix is in place, and we’re ramping back up.’ Operations were progressively restored, easing concerns among users.

Reddit’s swift action underscores the challenge of maintaining seamless service on social media platforms; even temporary glitches highlight the importance of quick and efficient response strategies.

Australia introduces groundbreaking bill to ban social media for children under 16

Australia’s government introduced a bill to parliament aiming to ban social media use for children under 16, with potential fines of up to A$49.5 million ($32 million) for platforms that fail to comply. The law would enforce age verification, possibly using biometrics or government IDs, setting the highest global age limit for social media use without exemptions for parental consent or existing accounts.

Prime Minister Anthony Albanese described the reforms as a response to the physical and mental health risks social media poses, particularly for young users. Harmful content, such as material promoting body image issues among girls and misogynistic content aimed at boys, has fuelled the government’s push for strict measures. Messaging services, gaming, and educational platforms like Google Classroom and Headspace would remain accessible under the proposal.

While opposition parties support the bill, independents and the Greens are calling for more details. Communications Minister Michelle Rowland emphasised that the law places responsibility on platforms, not parents or children, to implement robust age-verification systems. Privacy safeguards, including mandatory destruction of collected data, are also part of the proposed legislation. Australia’s policy would be among the world’s strictest, surpassing similar efforts in France and the US.

AI chatbots in healthcare: Balancing potential and privacy concerns amidst regulatory gaps

Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.

Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.

X-owner Elon Musk recently encouraged users to upload medical imagery to Grok, his platform’s AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.

OpenAI and Common Sense Media launch AI training for teachers

OpenAI, in partnership with Common Sense Media, has introduced a free training course aimed at helping teachers understand AI and prompt engineering. The course is designed to equip educators with the skills to use ChatGPT effectively in classrooms, including creating lesson content and streamlining administrative tasks.

The launch comes as OpenAI increases its efforts to promote the positive educational uses of ChatGPT, which became widely popular after its release in November 2022. While the tool’s potential for aiding students has been recognised, its use has also sparked concerns about cheating and plagiarism.

Leah Belsky, formerly of Coursera and now leading OpenAI’s education efforts, emphasised the importance of teaching both students and teachers to use AI responsibly. Belsky noted that student adoption of ChatGPT is high, with many parents viewing AI literacy as crucial for future careers. The training is available on Common Sense Media’s website, marking the first of many initiatives in this partnership.

TikTok faces divestment deadline in the US

Senator Richard Blumenthal has reaffirmed that ByteDance must divest TikTok’s US operations by January 19 or risk a ban. The measure, driven by security concerns over potential Chinese surveillance, was signed into law in April. A one-time extension of 90 days is available if significant progress is made, but Blumenthal emphasised that laws cannot be disregarded.

Blumenthal also raised alarms over China’s influence on US technology companies. Tesla’s production in China and the US military’s reliance on SpaceX were flagged as security risks. He pointed to Elon Musk’s economic ties with China as a potential vulnerability, warning that such dependencies could compromise national interests.

Apple faced criticism for complying with Chinese censorship and surveillance demands while generating significant revenue from the country. Concerns were voiced that major tech companies might prioritise profits over US security. Neither Apple nor Tesla has commented on these claims.

TikTok and ByteDance are challenging the divestment law in court. A decision is expected soon, but restrictions will tighten for app stores and hosting services if compliance is not achieved. The Biden administration has clarified that it supports ending Chinese ownership of TikTok rather than an outright ban.

OpenAI faces lawsuit from Indian news agency

Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.

The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.

OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.

German court rules Facebook users can seek compensation for data breach

Germany’s Federal Court of Justice (BGH) has ruled that Facebook users affected by data breaches in 2018 and 2019 are entitled to compensation, even without proving financial losses. The court determined that the loss of control over personal data is sufficient grounds for damages, marking a significant step in data protection law.

The case stems from a breach, which came to light in 2021, involving Facebook’s friend search feature, where third parties accessed user accounts by exploiting phone number guesses. Lower courts in Cologne previously dismissed compensation claims, but the BGH ordered a re-examination, suggesting around €100 in damages could be awarded per user without proof of financial harm.

Meta, Facebook’s parent company, has resisted compensation, arguing that users did not suffer concrete damages. A spokesperson for Meta described the ruling as inconsistent with recent European Court of Justice decisions and noted that similar claims have been dismissed by German courts in thousands of cases. The breach reportedly impacted around six million users in Germany.

The court also instructed a review of Facebook’s terms of use, questioning whether they were transparent and whether user consent for data handling was voluntary. The decision adds pressure on companies to strengthen data protection measures and could set a precedent for future claims across Europe.

Tighter messaging controls for under-13 players on Roblox

Roblox has announced new measures to protect users under 13, permanently removing their ability to send messages outside of games. In-game messaging will remain available, but only with parental consent. Parents can now remotely manage accounts, oversee friend lists, set spending controls, and enforce screen time limits.

The gaming platform, which boasts 89 million users, has faced scrutiny over claims of child abuse on its service. In August, Turkish authorities blocked Roblox, citing concerns over user-generated content. A lawsuit filed in 2022 accused the company of facilitating exploitation, including sexual and financial abuse of a young girl in California.

New rules also limit communication for younger players, allowing under-13 users to receive public broadcast messages only within specific games. Roblox will implement updated content descriptors such as ‘Minimal’ and ‘Restricted’ to classify games, limiting users under nine to age-appropriate experiences.

Access to restricted content will now require users to be at least 17 years old and verify their age. These changes aim to enhance child safety amid growing concerns and highlight Roblox’s efforts to address ongoing challenges in its community.

Brendan Carr to lead FCC in Trump’s push for deregulation

President-elect Donald Trump has nominated Brendan Carr to lead the US Federal Communications Commission (FCC). Carr, an FCC commissioner since 2017, is a familiar figure within the administration and has aligned his policy views with Trump’s conservative agenda, particularly on free speech and deregulation. He has often criticised tech giants like Alphabet and Meta, accusing them of stifling conservative voices, and has called for revisiting Section 230, which shields platforms from liability over user content. Carr advocates applying anti-discrimination norms to tech firms and supports laws similar to those in Texas and Florida that would require platforms to carry diverse viewpoints. The US Supreme Court, however, has been cautious about potential First Amendment conflicts, preserving platforms’ rights to moderate content.

Carr’s proposals extend to involving tech companies in funding the Universal Service Fund, which supports communication infrastructure, arguing that their heavy use of networks justifies a financial contribution. Tech firms have historically resisted this idea, citing their substantial investments in infrastructure. Additionally, Carr opposes net neutrality, viewing it as restrictive to innovation. He helped repeal net neutrality under previous FCC Chairman Ajit Pai, arguing that the predicted harms, such as increased costs, never materialised.

Carr’s agenda also includes removing Chinese telecom technology from US networks on national security grounds, and he has sought additional funding to replace it. He likewise labels TikTok a national security threat, though Trump has since softened his stance on the app.

One of Carr’s key policy priorities is improving rural internet access through technologies like Starlink’s low-Earth orbit satellites, which he considers cost-effective. His agenda pushes a deregulatory approach, reducing local-government and regulatory barriers to telecom infrastructure in order to encourage growth and innovation.

Carr’s tenure is expected to prioritise free speech and lighter regulation, in line with Trump’s agenda. His policies will likely stir debate, however, particularly over how to balance constitutional rights with industry demands, suggesting a potentially transformative and contentious phase for the FCC under Trump’s forthcoming presidency.