US Supreme Court declines Snapchat case

The US Supreme Court has declined to review a case brought by a Texas teenager who sued Snapchat, alleging the platform did not adequately protect him from sexual abuse by a teacher. The minor, known as Doe, accused Snap Inc. of negligence for failing to safeguard young users from sexual predators, including the teacher who exploited him via the app. That teacher, Bonnie Guess-Mazock, was convicted of sexually assaulting the teenager.

Lower courts dismissed the lawsuit, citing Section 230 of the Communications Decency Act, which shields internet companies from liability for content posted by users. With the Supreme Court declining to hear the case, Snapchat retains its protection under this law. Justices Clarence Thomas and Neil Gorsuch expressed concerns about the broad immunity granted to social media platforms under Section 230.

Why does this matter?

The case has sparked wider debate about the responsibilities of tech companies in preventing such abuses and whether laws like Section 230 should be revised to hold them more accountable for content on their platforms. Both US political parties have called for reforms to ensure internet companies can be held liable when their platforms are used for harmful activities.

Tech giants clash over California AI legislation

California lawmakers are poised to vote on groundbreaking legislation aimed at regulating AI to prevent potential catastrophic risks, such as manipulating the state’s electric grid or aiding in the creation of chemical weapons. Spearheaded by Democratic state Sen. Scott Wiener, the bill targets AI systems with immense computing power, setting safety standards that apply only to models costing over $100 million to train.

Tech giants like Meta (Facebook) and Google strongly oppose the bill, arguing that it unfairly targets developers rather than those who misuse AI for harmful purposes. They contend that such regulations could stifle innovation and drive tech companies away from California, potentially fracturing the regulatory landscape.

While highlighting California’s role as a leader in AI adoption, Governor Gavin Newsom has not publicly endorsed the bill. His administration is concurrently exploring rules to combat AI discrimination in employment and housing, underscoring the dual challenges of promoting AI innovation while safeguarding against its misuse.

The proposed legislation has garnered support from prominent AI researchers and would establish a new state agency to oversee AI development practices and enforce compliance. Proponents argue that California must act swiftly to avoid repeating past regulatory oversights in the social media sector, despite concerns over regulatory overreach and its potential economic impact.

Microsoft restructures China retail strategy

Microsoft is restructuring its retail strategy in mainland China, consolidating its retail channels amid reports that it is closing its network of authorised physical retail stores. The tech giant did not confirm the closures or specify the number of stores affected, but emphasised the need to adapt to changing customer needs.

Microsoft gave assurances that its products would remain available in China through retail partners and its website, despite not operating physical stores directly in the region. However, the company did not detail which partners would continue to stock its products.

The shift reflects Microsoft's ongoing efforts to optimise its retail operations in one of the world's largest markets, ensuring accessibility and customer satisfaction through diverse channels despite diplomatic and political challenges and restrictions.

US Department of Justice charges Russian hacker in cyberattack plot against Ukraine

The US Department of Justice has charged a Russian national with conspiring to sabotage Ukrainian government computer systems as part of a broader hacking scheme orchestrated by Russia in anticipation of its unlawful invasion of Ukraine.

In a statement, US prosecutors in Maryland said that Amin Stigal, 22, is accused of helping to set up servers that Russian state-backed hackers used to carry out destructive cyberattacks on Ukrainian government ministries in January 2022, a month before the Kremlin's invasion of Ukraine.

The cyber campaign, dubbed 'WhisperGate,' employed wiper malware posing as ransomware to intentionally and irreversibly corrupt data on infected devices. Prosecutors asserted that the cyberattacks were designed to instil fear across Ukrainian civil society about the security of its government's systems.

The indictment says the hackers stole large volumes of data during the intrusions, including citizens' health records, criminal histories, and motor insurance records from Ukrainian government databases. The hackers then allegedly advertised the stolen data for sale on prominent cybercrime platforms.

Stigal is also charged with assisting hackers affiliated with Russia's military intelligence agency, the GRU, in targeting Ukraine's allies, including the United States. US prosecutors noted that the Russian hackers repeatedly targeted an unspecified US government agency in Maryland between 2021 and 2022, before the invasion, which gives prosecutors in that district jurisdiction to pursue charges against Stigal.

In October 2022, the same servers set up by Stigal were reportedly used by the Russian hackers to target the transportation sector of an undisclosed central European country that allegedly provided civilian and military aid to Ukraine after the invasion. The incident coincides with a cyberattack in Denmark during the same period that caused widespread disruption and delays across the country's railway network.

The US government has announced a $10 million reward for information leading to Stigal's apprehension; he remains at large and is believed to be in Russia. If convicted, Stigal faces a maximum sentence of five years in prison.

Chinese AI companies respond to OpenAI restrictions

Chinese AI companies are moving swiftly in response to reports that OpenAI, the creator of ChatGPT, plans to block access to its API for entities in China and certain other countries. While ChatGPT is not directly available in mainland China, many Chinese startups have built their applications on OpenAI's API platform. Users in China have received emails warning of the restrictions, with measures set to take effect from 9 July.

In light of these developments, Chinese tech giants such as Baidu and Alibaba Cloud are stepping in to attract users affected by OpenAI's restrictions. Baidu announced an 'Inclusive Program' offering free migration to its Ernie platform for new users, along with additional tokens for its flagship Ernie 3.5 model to match their OpenAI usage. Similarly, Alibaba Cloud is offering free tokens and migration services for OpenAI API users through its AI platform, at pricing it says is competitive with GPT-4.

Zhipu AI, another prominent player in China's AI sector, has also announced a 'Special Migration Program' for OpenAI API users. The company positions its GLM model as a benchmark against OpenAI's ecosystem, highlighting that its self-developed technology offers security and controllability. Over the past year, numerous Chinese companies have launched chatbots powered by their own proprietary AI models, pointing to a growing push for domestic AI development and innovation.
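For readers wondering what such a migration involves in practice, switching providers often amounts to repointing an OpenAI-compatible client at a different endpoint, key, and model name, since several platforms expose OpenAI-style interfaces. The sketch below is purely illustrative: the endpoint URL, API key, and model name are hypothetical placeholders, not the actual details of any programme mentioned above.

```python
# Minimal, hypothetical sketch of moving an application off the OpenAI API by
# pointing the OpenAI-compatible client at a different provider's endpoint.
# The base_url, api_key, and model name below are placeholders, not details
# of any specific provider's migration programme.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.example/v1",  # hypothetical alternative endpoint
    api_key="YOUR_PROVIDER_API_KEY",                      # key issued by the new provider
)

response = client.chat.completions.create(
    model="example-model",  # hypothetical model name on the new platform
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```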

ByteDance challenges US TikTok ban in court

ByteDance and its subsidiary TikTok are urging a US court to overturn a law that would ban the popular app in the USA by 19 January. The law, signed by President Biden in April, requires ByteDance to divest TikTok's US assets or face a ban, a demand the company argues is impractical on technological, commercial, and legal grounds.

ByteDance contends that the law, driven by concerns over potential Chinese access to American data, violates free speech rights and unfairly targets TikTok while it 'ignores many applications with substantial operations in China that collect large amounts of US user data, as well as the many US companies that develop software and employ engineers in China.' The company argues that the legislation represents a substantial departure from the US tradition of supporting an open internet and sets a dangerous precedent.

The US Court of Appeals for the District of Columbia will hear oral arguments in the case on 16 September, and its decision could shape the future of TikTok in the US. ByteDance says that during lengthy negotiations with the US government, which ended abruptly in August 2022, it proposed various measures to protect US user data, including a 'kill switch' allowing the government to suspend TikTok if necessary. The company also made public a 100-plus page draft national security agreement intended to protect US TikTok user data and says it has spent more than $2 billion on the effort. However, it believes the administration prefers to shut down the app rather than finalise a feasible agreement.

The Justice Department, defending the law, asserted that it addresses national security concerns appropriately. Moreover, the case follows a similar attempt by former President Trump to ban TikTok, which was blocked by the courts in 2020. This time, the new law would prohibit app stores and internet hosting services from supporting TikTok unless ByteDance divests it.

Meta to face US lawsuit by Australian billionaire over scam crypto ads on Facebook

A US judge has denied Meta Platforms’ attempt to dismiss a lawsuit filed by Australian billionaire Andrew Forrest. The lawsuit accuses Meta of negligence for allowing scam advertisements featuring Forrest’s likeness, promoting fake cryptocurrency and fraudulent investments, to appear on Facebook. Judge Casey Pitts ruled that Forrest could proceed with claims that Meta’s actions breached its duty to operate responsibly and that Meta misappropriated Forrest’s name and likeness for profit.

Meta had argued that it was protected under Section 230 of the Communications Decency Act, which typically shields online platforms from liability for third-party content. However, the judge determined that Forrest’s allegations raised questions about whether Meta’s advertising tools actively contributed to the misleading content rather than simply hosting it neutrally.

Forrest alleges that over 1,000 fraudulent ads featuring him appeared on Facebook in Australia from April to November 2023, resulting in millions of dollars in losses for victims. The lawsuit marks a significant step, challenging the usual immunity social media companies claim under Section 230 for their advertising practices. Forrest is seeking compensatory and punitive damages from Meta.

The decision follows Australian prosecutors' refusal to pursue criminal charges against Meta over similar scam ads. Forrest, the executive chairman of Fortescue Metals Group, considers the judge's ruling a strategic victory in holding social media companies accountable for fraudulent advertising.

International Criminal Court investigates cyberattacks on Ukraine as possible war crimes

The International Criminal Court (ICC) is examining alleged Russian cyberattacks on Ukrainian civilian infrastructure as potential war crimes, marking the first instance of such an investigation by international prosecutors. According to sources, this could lead to arrest warrants if sufficient evidence is collected. The investigation focuses on cyberattacks that have endangered lives by disrupting power and water supplies, hindering emergency response communications, and disabling mobile data services used for air raid warnings.

Ukraine is actively gathering evidence to support the ICC investigation. Although the ICC prosecutor's office has declined to comment on specific details, it has previously stated that it has jurisdiction over cybercrimes and that it does not discuss ongoing cases. Since the invasion began, the ICC has issued four arrest warrants against senior Russian officials, including President Vladimir Putin, for war crimes related to the deportation of Ukrainian children to Russia. Russia, which is not a member of the ICC, has rejected these warrants as illegitimate. Ukraine is also not a member state, but it has granted the ICC jurisdiction over crimes committed within its borders.

In April, the ICC issued arrest warrants for two Russian commanders accused of crimes against humanity for their roles in attacks on civilian infrastructure. The Russian defence ministry did not respond to requests for comment. Sources indicated that at least four major attacks on energy infrastructure are under investigation.

Why does it matter?

The ICC case could set a significant precedent in international law. The Geneva Conventions prohibit attacks on civilian objects, but there is no universally accepted definition of cyber war crimes. The Tallinn Manual, a 2017 handbook on the application of international law to cyberwarfare, addresses this issue, but experts remain divided on whether data can be considered an 'object' under international humanitarian law and whether its destruction can be classified as a war crime. Professor Michael Schmitt of the University of Reading, who leads the Tallinn Manual initiative, emphasised the importance of the ICC's potential ruling on this issue. He argued that the cyberattack on Kyivstar, Ukraine's largest mobile operator, could be considered a war crime due to its foreseeable consequences for human safety.

Clearview AI reaches unusual settlement in privacy lawsuit

Facial recognition company Clearview AI has reached a groundbreaking class action settlement to address allegations of violating the privacy rights of millions of Americans. Filed in Chicago federal court on Wednesday, the agreement is notably unconventional as it does not specify a monetary payout upfront. Instead, it ties compensation to Clearview AI’s future financial outcomes, such as its potential IPO or merger valuation.

The lawsuit, rooted in Clearview AI’s alleged scraping of billions of facial images from the internet without consent, invoked Illinois’ biometric privacy law. Although Clearview denies any wrongdoing, the proposed settlement now awaits approval from US District Judge Sharon Johnson Coleman.

In a related development earlier this year, Clearview AI agreed with the ACLU to restrict access to its facial recognition database for private entities and government agencies in Illinois for five years. The plaintiffs’ attorneys acknowledged that this prior agreement influenced their approach to the class action settlement, adopting a structure that allows class members to share in potential future profits of Clearview AI.

The novel settlement approach, spearheaded by Loevy & Loevy, aims to provide meaningful relief to affected individuals while navigating Clearview AI’s financial constraints. Attorney Jon Loevy highlighted that this solution allows class members to reclaim some ownership over their biometric data, reflecting a unique attempt to compensate for privacy violations in the digital age.

Google settles allegations of digital advertising dominance in US, avoids jury trial

Alphabet's Google will avoid a jury trial over allegations that it dominates digital advertising, after paying $2.3 million to settle the US government's claim for monetary damages. With the damages claim resolved, the remaining non-monetary claims will be heard directly by a judge. The Justice Department and several states had sued Google, accusing it of monopolising digital advertising and overcharging users, and are seeking primarily to break up its advertising business.

US District Judge Leonie Brinkema has scheduled the non-jury trial for 9 September, when she will hear arguments and decide the case herself. Google called the Justice Department's damages claim contrived; it denies any wrongdoing and does not admit liability by making the payment. A Justice Department spokesperson declined to comment on the matter.

The Justice Department initially claimed more than $100 million in damages but later reduced the demand to less than $1 million. Google’s $2.3 million payment covers the interest and potential tripling of damages under US antitrust law. Google accused the government of inflating its damages claim to secure a jury trial, while the government contended that Google has worked to keep its anticompetitive conduct hidden from public scrutiny.