India’s Competition Commission has rejected Apple’s request to pause an antitrust investigation, clearing the way for the case to progress. The investigation alleges Apple breached competition laws by exploiting its dominant app store position. Apple disputes these claims, arguing its market share in India is minor compared to Android devices.
The controversy began in 2021 when the non-profit Together We Fight Society (TWFS) accused Apple of anti-competitive practices. In August, the commission ordered investigation reports to be recalled after Apple claimed sensitive information had been leaked to rivals. Revised reports were issued following redaction disputes, but Apple requested a suspension, citing non-compliance by TWFS.
The Indian regulator dismissed Apple’s concerns, calling its request to halt proceedings ‘untenable.’ The commission has now instructed Apple to submit audited financial records for three fiscal years to assess potential penalties. Apple has yet to respond publicly to these developments.
Senior officials at the Competition Commission are reviewing the evidence and will issue a final ruling. The case highlights broader scrutiny of major tech companies’ market behaviour, particularly regarding app store operations and developer relations.
A Massachusetts judge upheld disciplinary measures against a high school senior accused of cheating with an AI tool. The Hingham High School student’s parents sought to erase his record and raise his history grade, but the court sided with the school. Officials determined the student violated academic integrity by copying AI-generated text, including fabricated citations.
The student faced penalties including detention and temporary exclusion from the National Honor Society. He later gained readmission. His parents argued that unclear rules on AI usage led to confusion, claiming the school violated his constitutional rights. However, the court found the plagiarism policy sufficient.
Judge Paul Levenson acknowledged AI’s challenges in education but said the evidence showed misuse. The student and his partner had copied AI-generated content indiscriminately, bypassing proper review. The judge declined to order immediate changes to the student’s record or grade.
The case remains unresolved as the parents plan to pursue further legal action. School representatives praised the decision, describing it as accurate and lawful. The ruling highlights the growing complexities of generative AI in academic settings.
Elon Musk has spoken out against Australia’s proposed law to ban social media use for children under 16, calling it a “backdoor way to control access to the Internet by all Australians.” The legislation, introduced by Australia’s centre-left government, includes fines of up to A$49.5 million ($32 million) for systemic breaches by platforms and aims to enforce an age-verification system.
Australia’s plan is among the world’s strictest, banning underage access without exceptions for parental consent or existing accounts. By contrast, countries like France and the US allow limited access for minors with parental approval or data protections for children. Critics argue Australia’s proposal could set a precedent for tougher global controls.
Musk, who has previously clashed with Prime Minister Anthony Albanese’s government, is a vocal advocate for free speech. His platform, X, has faced tensions with Australia, including a legal challenge to content regulation orders earlier this year. Albanese has called Musk an “arrogant billionaire,” underscoring their rocky relationship.
Snap Inc., the parent company of Snapchat, has filed a motion to dismiss a New Mexico lawsuit accusing it of enabling child sexual exploitation on its platform. The lawsuit, brought by Attorney General Raul Torrez in September, claims Snapchat exposed minors to abuse and failed to warn parents about sextortion risks. Snap rejected the allegations, calling them ‘patently false,’ and argued that the state’s decoy investigation misrepresented key facts.
The lawsuit stems from a broader push by US lawmakers to hold tech firms accountable for harm to minors. Investigators claimed a decoy account for a 14-year-old girl received explicit friend suggestions despite no user activity. Snap countered that the account actively sent friend requests, disputing the state’s findings.
Snap further argued that the lawsuit violates Section 230 of the 1996 Communications Decency Act, which shields platforms from liability for user-generated content. It also invoked the First Amendment, stating the company cannot be forced to provide warnings about subjective risks without clear guidelines.
Defending its safety efforts, Snap highlighted its increased investment in trust and safety teams and collaboration with law enforcement. The company said it remains committed to protecting users while contesting what it views as an unjustified legal challenge.
OpenAI is under scrutiny after engineers accidentally erased key evidence in an ongoing copyright lawsuit filed by The New York Times and Daily News. The publishers accuse OpenAI of using their copyrighted content to train its AI models without authorisation.
The issue arose when OpenAI provided virtual machines for the plaintiffs to search its training datasets for infringing material. On 14 November 2024, OpenAI engineers deleted the search data stored on one of these machines. While most of the data was recovered, the loss of folder structures and file names rendered the information unusable for tracing specific sources in the training process.
Plaintiffs are now forced to restart the time-intensive search, raising concerns over OpenAI’s ability to manage its own datasets. Although the deletion is not suspected to be intentional, lawyers argue that OpenAI is best equipped to perform the searches and verify its use of copyrighted material. OpenAI maintains that training AI on publicly available data falls under fair use, but it has also struck licensing deals with major publishers like the Associated Press and News Corp. The company has neither confirmed nor denied using specific copyrighted works for its AI training.
US prosecutors have urged a federal judge to impose sweeping changes on Google to dismantle its alleged monopoly on online search and advertising. Proposed remedies include forcing Google to sell its Chrome browser, share search data with competitors, and possibly divest its Android operating system. These measures could remain in place for up to a decade, overseen by a court-appointed technical committee.
The Department of Justice (DOJ) and state antitrust enforcers argued that Google’s dominance, with a 90% share of US searches, has stifled competition by controlling critical distribution channels. The DOJ aims to end deals where Google pays companies like Apple billions annually to make its search engine the default on their devices. Prosecutors also want restrictions on Google’s acquisitions in search, AI, and advertising technology, as well as provisions for websites to opt out of training Google’s AI systems.
Google has called the proposals extreme, warning they would harm consumers and the economy. Alphabet’s legal chief, Kent Walker, said the measures represent “unprecedented government overreach.” Google will present alternative proposals in December, while a trial to decide the remedies is scheduled for April.
If implemented, the proposals could reshape the tech landscape, lowering barriers for competitors like DuckDuckGo. The case highlights broader global efforts to curb the power of tech giants and promote fair competition.
The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.
The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.
The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.
Gary Wang, a former FTX executive, has avoided prison after cooperating extensively with prosecutors in the case against cryptocurrency exchange founder Sam Bankman-Fried. Judge Lewis Kaplan acknowledged Wang’s lesser role in the $8 billion fraud and commended his efforts to accept responsibility. Wang had pleaded guilty to fraud and conspiracy charges but argued he was initially unaware of the scale of the misconduct.
Wang, a former chief technology officer at FTX, admitted to altering the platform’s software under Bankman-Fried’s direction, granting Alameda Research special access to customer funds. Even after later recognising the fraud, Wang continued maintaining the system, though he expressed regret in court and vowed to dedicate his life to making amends. Prosecutors highlighted his assistance in uncovering the fraud and his current work on tools to combat market manipulation.
Wang and Bankman-Fried met during a summer math camp in their youth and later studied at MIT before founding FTX. Wang was part of the close-knit group living with Bankman-Fried in a luxury Bahamian penthouse before the exchange’s collapse in 2022. The company’s failure exposed the misappropriation of customer funds, leading to Bankman-Fried’s 25-year prison sentence, which he is currently appealing.
Wang’s sentencing marks the conclusion of legal actions against Bankman-Fried’s inner circle. Others implicated included Nishad Singh, who also avoided jail, and Caroline Ellison, sentenced to two years. Prosecutors emphasised Wang’s unique skill set and role in aiding investigations, describing his cooperation as pivotal in holding the former FTX leadership accountable.
Five individuals, alleged members of the hacking group Scattered Spider, face criminal charges in the US. Prosecutors accuse the group of orchestrating phishing schemes to steal sensitive data and cryptocurrency. Victims include at least 12 companies from industries such as gaming and telecommunications, alongside individual cryptocurrency holders.
The suspects, aged in their teens or 20s during the offences, allegedly deceived employees into sharing login details through fraudulent messages. These actions enabled them to access corporate systems and drain millions from personal accounts. The group’s notoriety grew following high-profile hacks of casino operators in 2023, though connections to those incidents remain unclear.
Officials claim Scattered Spider operates as a loose collective of cybercriminals, often collaborating temporarily for specific crimes. Industry experts have long called for stronger enforcement against such groups. Recent arrests signal intensified efforts, with cybersecurity professionals warning young hackers of severe consequences if caught.
The defendants, including individuals from Scotland, Texas, and North Carolina, face charges of conspiracy, identity theft, and wire fraud. Arrests have taken place in the US and Spain, with extradition proceedings underway. Investigations continue as authorities pursue other suspected members of the group.
Australia’s government introduced a bill to parliament aiming to ban social media use for children under 16, with potential fines of up to A$49.5 million ($32 million) for platforms that fail to comply. The law would enforce age verification, possibly using biometrics or government IDs, setting the highest global age limit for social media use without exemptions for parental consent or existing accounts.
Prime Minister Anthony Albanese described the reforms as a response to the physical and mental health risks social media poses, particularly for young users. Harmful content, such as body image material targeting girls and misogynistic content aimed at boys, has fuelled the government’s push for strict measures. Messaging services, gaming, and educational platforms like Google Classroom and Headspace would remain accessible under the proposal.
While opposition parties support the bill, independents and the Greens are calling for more details. Communications Minister Michelle Rowland emphasised that the law places responsibility on platforms, not parents or children, to implement robust age-verification systems. Privacy safeguards, including mandatory destruction of collected data, are also part of the proposed legislation. Australia’s policy would be among the world’s strictest, surpassing similar efforts in France and the US.