Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns have been raised about the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, GeoGPT's terms of use prohibit generating content that undermines national security or incites subversion, language that aligns with Chinese law and may influence the chatbot's outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.

Cybersecurity measures ramp up for 2024 Olympics

Next month, athletes worldwide will converge on Paris for the eagerly awaited 2024 Summer Olympics. While competitors prepare for their chance to win coveted medals, organisers are focused on defending against cybersecurity threats. Over the past decade, cyberattacks have become more sophisticated due to the misuse of AI. However, the responsible application of AI offers a promising countermeasure.

Sports organisations are increasingly partnering with AI-driven companies like Visual Edge IT, which specializes in risk reduction. Although Visual Edge IT does not directly work with the Olympics, cybersecurity expert Peter Avery shared insights on how Olympic organisers can mitigate risks. Avery emphasised the importance of robust technical, physical, and administrative controls to protect against cyber threats. He highlighted the need for a comprehensive incident response plan and the necessity of preparing for potential disruptions, such as internet overload and infrastructure attacks.

The advent of AI has revolutionised both productivity and cybercrime. Avery noted that AI allows cybercriminals to automate attacks, making them more efficient and widespread. He stressed that a solid incident response plan and regular simulation exercises are crucial for managing cyber threats. As Avery pointed out, the question is not if a cyberattack will happen but when.

The International Olympic Committee (IOC) is also embracing AI responsibly within sports. IOC President Thomas Bach announced an AI plan to identify talent, personalise training, and improve judging fairness. The Paris Summer Olympics, which run from 26 July to 11 August, will be a significant test of these cybersecurity and AI initiatives.

Illinois judge dismisses lawsuit against X over social media photo scanning

A federal judge in Illinois dismissed a class action lawsuit against the social network X, ruling that the photos it collected did not constitute biometric data under the state’s Biometric Information Privacy Act (BIPA). The lawsuit alleged that X violated BIPA by using Microsoft’s PhotoDNA software to scan for offensive images without proper disclosure and consent.

The judge concluded that the plaintiff failed to prove that the PhotoDNA tool involved facial geometry scanning or could identify specific individuals. Instead, the software analysed uploaded photos to detect nudity or pornographic content, which did not qualify as a scan of facial geometry under BIPA.

The ruling mirrors a recent case involving Facebook, where allegations of illegally collecting biometric data were dismissed. Both cases clarified that a digital signature generated from a photograph, known as a ‘hash’ or face signature, did not violate BIPA’s definition of biometric identifiers.
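The distinction the courts drew can be illustrated with a toy perceptual "average hash", which reduces an image to a short fingerprint describing brightness patterns rather than facial geometry. This is a minimal sketch for illustration only, not PhotoDNA's actual algorithm, and it uses a small grayscale pixel grid as a stand-in for a real image:

```python
# Toy "average hash": a content fingerprint that matches duplicate images
# but encodes no facial geometry and cannot identify an individual.
# Illustrative only; PhotoDNA's real algorithm is proprietary.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image average.
    return ''.join('1' if p > avg else '0' for p in flat)

# Two stand-in 4x4 "images": an exact copy produces an identical hash,
# which is how known offensive images can be flagged on re-upload.
img = [[10, 200, 10, 200],
       [10, 200, 10, 200],
       [200, 10, 200, 10],
       [200, 10, 200, 10]]
copy = [row[:] for row in img]

print(average_hash(img) == average_hash(copy))
```

Because the fingerprint captures only coarse pixel statistics, it can match copies of a known image without revealing who, if anyone, appears in it, which is the property the rulings turned on.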

The judge emphasised that BIPA aims to regulate specific biometric identifiers like retina scans or fingerprints, excluding photographs to avoid an overly broad scope. Applying BIPA to any face geometry scan that cannot identify individuals would contradict the law’s purpose of ensuring notice and consent.

BIPA’s private right of action has been a significant deterrent for biometrics companies, allowing users to sue for damages in cases of non-compliance.

EU faces controversy over proposed AI scanning law

The EU is facing significant controversy over a proposed law that would require AI scanning of users’ photos and videos on messaging apps to detect child sexual abuse material (CSAM). Critics, including major tech companies like WhatsApp and Signal, argue that this law threatens privacy and encryption, undermining fundamental rights. They also warn that the AI detection systems could produce numerous false positives, overwhelming law enforcement.
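The false-positive warning is essentially a base-rate problem: even a very accurate classifier applied to billions of benign messages flags large absolute numbers of innocent content. A rough back-of-the-envelope calculation, using purely illustrative numbers rather than figures from the proposal or any real detection system, shows why:

```python
# Illustrative base-rate arithmetic. All numbers below are assumptions
# for the sake of the example, not figures from the EU proposal.
daily_messages = 1_000_000_000   # messages scanned per day
prevalence = 1e-6                # fraction that actually contain CSAM
false_positive_rate = 0.001      # i.e. 99.9% specificity
true_positive_rate = 0.99        # i.e. 99% sensitivity

true_hits = daily_messages * prevalence * true_positive_rate
false_alarms = daily_messages * (1 - prevalence) * false_positive_rate

# Of all flagged messages, what fraction are genuine detections?
precision = true_hits / (true_hits + false_alarms)
print(f"{false_alarms:,.0f} false alarms/day, precision = {precision:.2%}")
```

Under these assumed numbers, false alarms outnumber genuine detections by roughly a thousand to one, which is the flood of spurious reports critics warn would overwhelm law enforcement.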

A recent meeting among the EU member states’ representatives failed to reach a consensus on the proposal, leading to further delays. The Belgian presidency had hoped to finalise a negotiating mandate, but disagreements among member states prevented progress. The ongoing division means that discussions on the proposal will likely continue under Hungary’s upcoming EU Council presidency.

Opponents of the proposal, including Signal President Meredith Whittaker and Proton founder Andy Yen, emphasise the dangers of mass surveillance and the need for more targeted approaches to child protection. Despite the current setback, there’s concern that efforts to push the law forward will persist, necessitating continued vigilance from privacy advocates.

Cyberattack on London hospitals leads to data leak

Cybercriminals claiming responsibility for the recent hack on London hospitals have reportedly released stolen data from the incident. England’s National Health Service (NHS) acknowledged the publication of this data, allegedly belonging to Synnovis, the pathology provider targeted in the 3 June attack. NHS officials are working closely with Synnovis, the National Cyber Security Centre, and other partners to verify the content of these files swiftly. Their focus includes determining if the data originates from Synnovis systems and if it pertains to NHS patients.

According to reports, the hackers have disclosed nearly 400GB of data on their darknet website and Telegram channel. The published information supposedly includes patient names, dates of birth, NHS numbers, and descriptions of blood tests, alongside financial spreadsheets. However, the NHS has not confirmed whether medical test results are part of the exposed data.

The attack has been attributed to the Russian-speaking hacker group Qilin, which has demanded a $50 million ransom to halt further disclosures. Synnovis, a provider jointly operated by Synlab UK & Ireland and NHS trusts, plays a crucial role in delivering lab testing services to healthcare facilities in London and Kent. The breach has severely impacted its blood transfusion and testing capabilities, leading to the postponement of over 1,000 operations and more than 2,000 appointments at affected hospital units.

US DoJ to file lawsuit against TikTok for alleged children’s privacy violations

The US Department of Justice (DoJ) will sue ByteDance's TikTok later this year in a consumer protection lawsuit focused on alleged children's privacy violations. The legal action is being taken on behalf of the Federal Trade Commission (FTC), but the DoJ will not pursue allegations that TikTok misled US consumers about data security, specifically dropping claims that the company failed to inform users that China-based employees could access their personal and financial information.

The decision suggests that the primary focus will now be on how TikTok handles children's privacy. The FTC had referred a complaint against TikTok and its parent, ByteDance, to the DoJ over potential violations of children's privacy, stating that its investigation found evidence suggesting the companies may be breaking the Children's Online Privacy Protection Act. The federal act requires apps and websites aimed at kids to obtain parental consent before collecting personal information from children under 13.

Simultaneously, TikTok and ByteDance are challenging a US law that aims to ban the popular short video app in the United States starting from 19 January next year.

Ukrainian student’s identity misused by AI on Chinese social media platforms

Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.

Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.

Why does it matter?

These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.

In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.

ByteDance challenges US TikTok ban in court

ByteDance and its subsidiary TikTok are urging a US court to overturn a law that would ban the popular app in the USA by 19 January. The law, signed by President Biden in April, demands that ByteDance divest TikTok's US assets or face a ban, which the company argues is impractical on technological, commercial, and legal grounds.

ByteDance contends that the law, driven by concerns over potential Chinese access to American data, violates free speech rights and unfairly targets TikTok while it 'ignores many applications with substantial operations in China that collect large amounts of US user data, as well as the many US companies that develop software and employ engineers in China.' The company argues that the legislation represents a substantial departure from the US tradition of supporting an open internet and sets a dangerous precedent.

The US Court of Appeals for the District of Columbia Circuit will hear oral arguments in the case on 16 September, and its decision could shape the future of TikTok in the US. ByteDance says that during lengthy negotiations with the US government, which ended abruptly in August 2022, it proposed various measures to protect US user data, including a 'kill switch' allowing the government to suspend TikTok if necessary. The company also made public a 100-plus-page draft national security agreement to protect US TikTok user data and says it has spent more than $2 billion on the effort. However, it believes the administration prefers to shut down the app rather than finalise a feasible agreement.

The Justice Department, defending the law, asserted that it addresses national security concerns appropriately. Moreover, the case follows a similar attempt by former President Trump to ban TikTok, which was blocked by the courts in 2020. This time, the new law would prohibit app stores and internet hosting services from supporting TikTok unless ByteDance divests it.

Key player in semiconductor industry targeted in major data breach

The infamous threat actor Intelbroker has purportedly masterminded a data breach targeting Advanced Micro Devices (AMD), a prominent player in the semiconductor industry. The alleged breach of AMD’s systems was disclosed on BreachForums alongside detailed information about the intrusion and various data samples.

In response to these claims, AMD officials have issued a statement acknowledging the reported data breach by a cybercriminal group. The company stated that it is collaborating with law enforcement authorities and a third-party hosting partner to investigate the alleged breach and assess the nature and impact of the compromised data.

Intelbroker asserts that the leaked AMD data includes a wide range of sensitive information stolen from AMD’s databases. The data includes technical specifications, product details, and internal communications allegedly sourced from AMD’s secure servers. These disclosures not only point towards the possible extent of the breach but also raise concerns about potential vulnerabilities within AMD’s cybersecurity infrastructure.

This incident is not the first cybersecurity challenge AMD has faced. In 2022, the company reportedly fell victim to the RansomHouse hacking group. Following both the 2022 breach and the current incident, AMD initiated thorough investigations to evaluate the implications and strengthen its defences against cyber threats. Such disclosures could compromise AMD's competitive edge and raise concerns about intellectual property theft and corporate espionage.

Who is Intelbroker?

Intelbroker, the alleged perpetrator behind the recent AMD data breach, has a track record of targeting critical infrastructure, major tech companies, and government contractors. The hacker operates as a lone wolf and employs sophisticated tactics to exploit vulnerabilities and access sensitive information. Previous breaches include infiltrations at Los Angeles International Airport (LAX) and US federal agencies via Acuity, emphasising the widespread impact of their activities.

The motives driving Intelbroker’s cyber campaigns range from financial gain through the sale of stolen data on dark web platforms to potential geopolitical agendas aimed at disrupting critical infrastructure and corporate operations. 

US Justice Department to investigate TikTok over child privacy complaint

The US Federal Trade Commission (FTC) has referred a complaint against TikTok and its parent company, ByteDance, to the Justice Department over potential violations of children's privacy. The move follows an investigation that suggested the companies might be breaking the law, with the FTC deeming it in the public interest to proceed with the complaint. The investigation stems from allegations that TikTok failed to comply with a 2019 agreement to safeguard children's privacy.

TikTok has been in discussions with the FTC for over a year to address the agency's concerns. The company expressed disappointment over the FTC's decision to pursue litigation rather than continue negotiations, arguing that many of the FTC's allegations are outdated or incorrect. TikTok says it remains committed to resolving the issues and believes it has already addressed many of the concerns.

Separately, TikTok is facing scrutiny from US Congress regarding the potential misuse of data from its 170 million US users by the Chinese government, a claim TikTok denies. Additionally, TikTok is preparing to file a legal brief challenging a recent law that mandates its parent company, ByteDance, to divest TikTok’s US assets by 19 January or face a ban.