Amazon is likely to face an EU investigation next year into allegations that it favours its own-brand products on its online marketplace, according to sources familiar with the matter. If found in violation of the EU’s Digital Markets Act (DMA), Amazon could face a fine of up to 10% of its global revenue.
Any investigation would be overseen by Teresa Ribera, the EU’s incoming antitrust chief, who takes office next month. Amazon has denied any wrongdoing, stating that it complies with the DMA and treats all products equally in its ranking algorithms. The company has been in ongoing discussions with the European Commission about its practices.
The DMA, implemented last year, aims to curb the dominance of Big Tech by prohibiting designated gatekeepers from giving preferential treatment to their own products and services. Alongside Amazon, other tech giants such as Apple, Google, and Meta are also under scrutiny. Amazon shares fell 3% following reports of the possible investigation.
The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.
The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.
The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.
A new study has revealed that 75% of news-related social media posts are shared without being read, highlighting the rapid spread of unverified information. Researchers from US universities analysed over 35 million Facebook posts from 2017 to 2020, focusing on key moments in American politics. The study found that many users share links based on headlines, summaries, or the number of likes a post has received, without ever clicking to read the full article.
The study, published in Nature Human Behaviour, suggests this behaviour may be driven by information overload and the fast-paced nature of social media. Users often feel pressure to share content quickly without fully processing it, which contributes to the spread of misinformation. The research also pointed out that political partisans are more likely to share news without reading it, though this effect could also be driven by a small number of highly active, partisan accounts.
To curb the spread of misinformation, the authors suggest that social media platforms display warnings alerting users to the risks of sharing content without reading it, helping them make more informed decisions before reposting news articles.
Reddit has restored access to its platform following a software bug that disrupted services for tens of thousands of US users. The outage, which began at around 3 pm ET, affected many who rely on the platform for social interaction and information.
Reports of issues peaked at around 49,000 users, according to monitoring service Downdetector. By 4:32 pm ET, the number of affected users dropped significantly to just over 14,500 as the platform began recovering.
The company acknowledged the issue stemmed from a recent update. A spokesperson confirmed, ‘A fix is in place, and we’re ramping back up.’ Operations were progressively restored, easing concerns among users.
The incident underscores the challenges of maintaining seamless service on social media platforms, while Reddit’s swift action highlights the importance of quick and efficient response strategies.
Australia’s government has introduced a bill to parliament that would ban social media use for children under 16, with fines of up to A$49.5 million ($32 million) for platforms that fail to comply. The law would enforce age verification, possibly using biometrics or government IDs, and would set the world’s highest age limit for social media use, with no exemptions for parental consent or existing accounts.
Prime Minister Anthony Albanese described the reforms as a response to the physical and mental health risks social media poses, particularly for young users. Harmful content, such as material promoting body-image issues among girls and misogynistic content aimed at boys, has fuelled the government’s push for strict measures. Messaging services, gaming, and educational platforms such as Google Classroom and Headspace would remain accessible under the proposal.
While opposition parties support the bill, independents and the Greens are calling for more details. Communications Minister Michelle Rowland emphasised that the law places responsibility on platforms, not parents or children, to implement robust age-verification systems. Privacy safeguards, including mandatory destruction of collected data, are also part of the proposed legislation. Australia’s policy would be among the world’s strictest, surpassing similar efforts in France and the US.
Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.
Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.
X owner Elon Musk recently encouraged users to upload medical imagery to Grok, his platform’s AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.
OpenAI, in partnership with Common Sense Media, has introduced a free training course aimed at helping teachers understand AI and prompt engineering. The course is designed to equip educators with the skills to use ChatGPT effectively in classrooms, including creating lesson content and streamlining administrative tasks.
The launch comes as OpenAI increases its efforts to promote the positive educational uses of ChatGPT, which became widely popular after its release in November 2022. While the tool’s potential for aiding students has been recognised, its use has also sparked concerns about cheating and plagiarism.
Leah Belsky, formerly of Coursera and now leading OpenAI’s education efforts, emphasised the importance of teaching both students and teachers to use AI responsibly. Belsky noted that student adoption of ChatGPT is high, with many parents viewing AI literacy as crucial for future careers. The training is available on Common Sense Media’s website, marking the first of many initiatives in this partnership.
Senator Richard Blumenthal has reaffirmed that ByteDance must divest TikTok’s US operations by January 19 or risk a ban. The measure, driven by security concerns over potential Chinese surveillance, was signed into law in April. A one-time extension of 90 days is available if significant progress is made, but Blumenthal emphasised that laws cannot be disregarded.
Blumenthal also raised alarms over China’s influence on US technology companies. Tesla’s production in China and the US military’s reliance on SpaceX were flagged as security risks. He pointed to Elon Musk’s economic ties with China as a potential vulnerability, warning that such dependencies could compromise national interests.
Apple faced criticism for complying with Chinese censorship and surveillance demands while generating significant revenue from the country. Concerns were voiced that major tech companies might prioritise profits over US security. Neither Apple nor Tesla has commented on these claims.
TikTok and ByteDance are challenging the divestment law in court. A decision is expected soon; if compliance is not achieved, restrictions will tighten for app stores and hosting services. The Biden administration has clarified that it supports ending Chinese ownership of TikTok rather than an outright ban.
Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.
The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.
OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.
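For context, the opt-out OpenAI points to relies on the long-standing robots.txt convention: OpenAI’s documented web crawler, GPTBot, checks a site’s robots.txt file before collecting pages. As a minimal illustration (not drawn from the ANI case itself), a publisher wishing to block the crawler entirely could serve:

    User-agent: GPTBot
    Disallow: /

Compliant crawlers consult this file before fetching content, though it does not retroactively remove material that has already been collected.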
Germany’s Federal Court of Justice (BGH) has ruled that Facebook users affected by data breaches in 2018 and 2019 are entitled to compensation, even without proving financial losses. The court determined that the loss of control over personal data is sufficient grounds for damages, marking a significant step in data protection law.
The case stems from a breach, which came to light in 2021, involving Facebook’s friend search feature, through which third parties linked user accounts to randomly guessed phone numbers. Lower courts in Cologne previously dismissed compensation claims, but the BGH ordered a re-examination, suggesting around €100 in damages could be awarded per user without proof of financial harm.
Meta, Facebook’s parent company, has resisted compensation, arguing that users did not suffer concrete damages. A spokesperson for Meta described the ruling as inconsistent with recent European Court of Justice decisions and noted that similar claims have been dismissed by German courts in thousands of cases. The breach reportedly impacted around six million users in Germany.
The court also instructed a review of Facebook’s terms of use, questioning whether they were transparent and whether user consent for data handling was voluntary. The decision adds pressure on companies to strengthen data protection measures and could set a precedent for future claims across Europe.