Chinese state-sponsored hackers, identified as the Salt Typhoon group, have breached multiple US telecommunications companies, including AT&T, Verizon, Charter Communications, and T-Mobile. These cyber-espionage operations exploited vulnerabilities in network devices from vendors such as Fortinet and Cisco Systems.
US National Security Adviser Jake Sullivan has stated that the United States has taken steps in response to these intrusions, sending clear messages to China about the consequences of disrupting American critical infrastructure.
The breaches have raised significant concerns about national security and the resilience of US critical infrastructure against sophisticated cyber threats. While companies like AT&T and Verizon have reported that their networks are now secure and are collaborating with law enforcement, the extent and impact of these breaches continue to be scrutinised.
China has denied involvement in these cyber activities, accusing the United States of disseminating disinformation. Nonetheless, the revelations have intensified discussions about the need for enhanced cybersecurity measures to protect sensitive communications and infrastructure from state-sponsored cyber espionage.
Triplegangers was forced offline after a bot from OpenAI scraped its website so relentlessly that the traffic resembled a distributed denial-of-service (DDoS) attack. The AI bot sent tens of thousands of server requests, attempting to download hundreds of thousands of detailed 3D images and descriptions from the company’s extensive database of digital human models.
The sudden spike in traffic crippled the Ukrainian company’s servers and left CEO Oleksandr Tomchuk grappling with an unexpected problem. Triplegangers, which sells digital assets to video game developers and 3D artists, discovered that OpenAI’s bot operated across hundreds of IP addresses to gather its data. Despite having terms of service that forbid such scraping, the company had not configured the necessary robots.txt file to block the bot.
After days of disruption, Tomchuk implemented protective measures by updating the robots.txt file and using Cloudflare to block specific bots. However, he remains frustrated by the lack of transparency from OpenAI and the difficulty in determining exactly what data was taken. With rising costs and increased monitoring now necessary, he warns that other businesses remain vulnerable.
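For sites facing similar scraping, a minimal robots.txt along these lines asks OpenAI’s crawler to stay away. (This is an illustrative sketch, not Triplegangers’ actual configuration; "GPTBot" is the user-agent token OpenAI publishes for its web crawler, and robots.txt is only advisory, so enforcement still requires server-side or CDN-level blocking such as a Cloudflare rule.)

```
# robots.txt — served at the site root, e.g. https://example.com/robots.txt
# Ask OpenAI's crawler not to fetch any page on the site.
User-agent: GPTBot
Disallow: /

# All other crawlers remain unaffected.
User-agent: *
Disallow:
```

Compliant crawlers fetch this file before crawling and skip the disallowed paths; bots that ignore it must be blocked at the firewall or CDN instead.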
Tomchuk criticised AI companies for placing the responsibility on small businesses to block unwanted scraping, comparing it to a digital shakedown. “They should be asking permission, not just scraping data,” he said, urging companies to take greater precautions against AI crawlers that can compromise their sites.
Infosys has filed a counterclaim against Cognizant in a Texas federal court, accusing the US-based technology firm of anti-competitive behaviour. The Indian company alleges that Cognizant included restrictive clauses in client contracts, preventing them from working with rival firms and withholding necessary software training.
The Bengaluru-based software giant also claims Cognizant engaged in targeted poaching of its senior executives. The hiring of former Infosys president S Ravi Kumar as Cognizant’s CEO in 2023 allegedly delayed the development of Infosys’ Helix software product.
Cognizant denied the allegations, stating it supports fair competition but accused Infosys of improperly using its intellectual property. The counterclaim follows a 2023 lawsuit by Cognizant’s subsidiary TriZetto, which accused Infosys of stealing trade secrets related to healthcare insurance software.
Infosys is seeking damages, including legal fees, but did not disclose the amount. The case is being heard in the US District Court for the Northern District of Texas.
OpenSea users are facing increased risks after over 7 million email addresses were exposed in a data breach dating back to 2022. The breach occurred when an employee of Customer.io, OpenSea’s email delivery partner, mishandled user data, sharing email addresses with an unauthorised third party. This data includes the emails of major figures in the crypto world, raising concerns about potential phishing attacks and scams.
Blockchain security expert 23pds highlighted the growing threat, warning that the leaked information had been circulated multiple times before becoming public. OpenSea had previously alerted users about phishing risks following the breach, advising them to be cautious with email links and attachments.
Phishing scams targeting OpenSea users have been a persistent issue, with attackers using fake websites and fraudulent email campaigns to exploit vulnerabilities. One such scam in January 2024 promised exclusive access to an NFT event, only to direct victims to a malicious site designed to steal funds and wallet information.
Experts continue to advise users to stay vigilant, verify email sources, enable two-factor authentication, and never share sensitive wallet details to protect themselves from ongoing phishing threats.
Education technology provider PowerSchool has suffered a major data breach, exposing the personal information of millions of students and teachers. Hackers gained access to its systems by exploiting stolen credentials, using a tool within the company’s PowerSource support portal to export sensitive data.
The stolen records include names, addresses, and potentially more sensitive details such as Social Security numbers and medical information in the US and Canada. PowerSchool, which manages academic records for over 60 million K-12 students, assured customers that not all users were affected. However, the breach has left schools scrambling to assess the damage.
PowerSchool insists the hack wasn’t due to a flaw in its software but was a result of unauthorised access using legitimate credentials. The company has engaged cybersecurity experts to investigate and taken steps to improve security, including deactivating compromised accounts and strengthening password controls.
Critics argue that PowerSchool was slow to inform customers, potentially putting students, parents, and educators at greater risk of identity theft. While PowerSchool is offering affected users credit monitoring and identity protection services, the incident has sparked calls for stricter regulations on data security in the education sector.
The US Supreme Court on Friday appeared inclined to uphold a law requiring a sale or ban of TikTok in the United States by January 19, citing national security risks tied to its Chinese parent company, ByteDance. Justices questioned TikTok’s potential role in enabling the Chinese government to collect data on its 170 million American users and influence public opinion covertly. Chief Justice John Roberts and others expressed concerns about China’s potential to exploit the platform, while also probing implications for free speech protections under the First Amendment.
The law, passed with bipartisan support and signed by outgoing President Joe Biden, has been challenged by TikTok, ByteDance, and app users who argue it infringes on free speech. TikTok’s lawyer, Noel Francisco, warned that without a resolution or extension by President-elect Donald Trump, the platform would likely shut down on January 19. Francisco emphasised TikTok’s role as a key platform for expression and called for at least a temporary halt to the law.
Liberal and conservative justices alike acknowledged the tension between national security and constitutional rights. Justice Elena Kagan raised historical parallels to Cold War-era restrictions, while Justice Brett Kavanaugh highlighted the long-term risks of data collection. Solicitor General Elizabeth Prelogar, representing the Biden administration, argued that TikTok’s foreign ownership poses a grave threat, enabling covert manipulation and espionage. She defended Congress’s right to act in the interest of national security.
With global trade tensions and fears of digital surveillance mounting, the Supreme Court’s decision will have wide-ranging implications for technology, free speech, and US-China relations. The court is now considering whether to grant a temporary stay, providing Trump’s incoming administration an opportunity to address the issue politically.
The Japanese government is considering publicly disclosing the names of developers behind malicious artificial intelligence systems as part of efforts to combat disinformation and cyberattacks. The move, aimed at ensuring accountability, follows a government panel’s recommendation that stricter legal frameworks are necessary to prevent AI misuse.
The proposed bill, expected to be submitted to parliament soon, will focus on gathering information on harmful AI activities and encouraging developers to cooperate with government investigations. However, it will stop short of imposing penalties on offenders, amid concerns that harsh measures might discourage AI innovation.
Japan’s government may also share its findings with the public if harmful AI systems cause significant damage, such as preventing access to vital public services. While the bill aims to balance innovation with public safety, questions remain about how the government will decide what constitutes a “malicious” AI system and the potential impact on freedom of expression.
President Joe Biden is preparing to introduce a new executive order aimed at strengthening cybersecurity standards for federal agencies and contractors. The proposed measures address growing threats from Chinese-linked cyber operations and criminal cyberattacks, which have targeted critical infrastructure, government emails, and major telecom firms. Under the draft order, contractors must adhere to stricter secure software development practices and provide documentation to be verified by the Cybersecurity and Infrastructure Security Agency (CISA).
The order highlights vulnerabilities exposed by recent cyber incidents, including the May 2023 breach of US government email accounts, attributed to Chinese hackers. New guidelines will also focus on securing access tokens and cryptographic keys, which were exploited during the attack. Contractors whose security practices fail to meet standards may face legal consequences, with referrals to the attorney general for further action.
While experts like Tom Kellermann of Contrast Security support the initiative, some criticise the timeline as insufficient given the immediate threats posed by adversaries like China and Russia. Brandon Wales of SentinelOne views the order as a continuation of efforts across the past two administrations, emphasising the need to enhance existing cybersecurity frameworks while addressing a broad range of threats.
The order underscores Biden’s commitment to cybersecurity as a pressing national security issue. It comes amid escalating concerns about foreign cyber operations and aims to solidify protections for critical US systems before the transition to new leadership.
US antitrust regulators provided legal insights on Elon Musk’s lawsuit against OpenAI and Microsoft, alleging anticompetitive practices. While not taking a formal stance, the Federal Trade Commission (FTC) and Department of Justice (DOJ) highlighted key legal doctrines supporting Musk’s claims ahead of a court hearing in Oakland, California. Musk, a co-founder of OpenAI and now leading AI startup xAI, accuses OpenAI of enforcing restrictive agreements and sharing board members with Microsoft to stifle competition.
The lawsuit also claims OpenAI orchestrated an investor boycott against rivals. Regulators noted such boycotts are legally actionable, even if the alleged organiser isn’t directly involved. OpenAI has denied these allegations, labelling them baseless harassment. Meanwhile, the FTC is conducting a broader probe into AI partnerships, including those between Microsoft and OpenAI, to assess potential antitrust violations.
Microsoft declined to comment on the case, while OpenAI pointed to prior court filings refuting Musk’s claims. However, the FTC and DOJ stressed that even former board members, like Reid Hoffman, could retain sensitive competitive information, reinforcing Musk’s concerns about anticompetitive practices.
Musk’s legal team sees the regulators’ involvement as validation of the seriousness of the case, underscoring the heightened scrutiny around AI collaborations and their impact on competition.
A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.
The lawsuit, initially filed in 2023, claims Meta infringed on copyright by using the authors’ works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, which were ultimately overruled. Meta has not yet responded to the latest allegations.
The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.
US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.