Global hackers test online voting platform in Las Vegas

On 9 August, hackers from across the globe convened in Las Vegas to probe a new online voting platform for digital vulnerabilities that could affect future election systems. The Secure Internet Voting (SIV) platform, operated by a US firm, allows voting via phones or computers and is currently used in small pilot programs in the US. Broader adoption for public elections, however, faces security concerns that have led most states to favour the traditional method of auditable paper ballots.

SIV founder David Ernst, noting the general pessimism about the security of internet voting, stated: ‘We believe that there are modern tools and technologies that allow you to make it hyper-secure, with a higher level of security than you can currently achieve with paper’. He pointed to SIV’s use in a 2023 party primary, in which Republican Celeste Maloy was selected through the platform and went on to win Utah’s 2nd congressional district seat in November.

Americans are greatly concerned about voting security, fearing foreign cyberattacks on the upcoming elections. National security officials have already found Russia and Iran engaging in online influence campaigns in the current election cycle. In previous cycles, Russian hackers targeted election offices and voting machine companies, making the resilience of election infrastructure a paramount concern.

OpenAI appoints AI safety expert as director

One of the largest AI research organisations has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.

Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.

The safety and security committee, formed in May following Ilya Sutskever’s departure, includes Kolter alongside CEO Sam Altman and other directors and underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.

In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.

X agrees to pause EU data use for AI amid legal dispute

Elon Musk’s social media platform, X, has agreed to pause using data from European Union users to train its AI systems until further court decisions are made. The agreement comes after Ireland’s Data Protection Commission (DPC) sought to suspend X’s processing of user data for AI development, arguing that the platform had started using this data without user consent.

X, formerly known as Twitter, introduced an option for users to opt out of data usage for AI training. However, this was only available from 16 July, despite data processing beginning on 7 May. This delay led the DPC to take legal action, with a court hearing revealing that X would refrain from using data collected between 7 May and 1 August until the issue is resolved.

X’s legal team is expected to file opposition papers against the DPC’s suspension order by 4 September. The platform defended its actions, calling the regulator’s order unwarranted and unjustified. This case follows similar scrutiny faced by other tech giants like Meta and Google, which have also faced regulatory challenges in the EU over their AI systems.

Critical browser flaw puts Mac and Linux users at risk

A newly identified zero-day flaw linked to the 0.0.0.0 IP address has been exploited by hackers, placing users of major web browsers on macOS and Linux at risk. The vulnerability, observed in popular browsers such as Safari, Chrome, and Firefox, could allow unauthorised access to private networks. Windows users are unaffected, but other Chromium-based browsers such as Microsoft Edge, Brave, and Opera are also vulnerable.

The cybersecurity firm Oligo has reported that this flaw enables hackers to communicate with local software on Mac or Linux systems. By using the 0.0.0.0 address instead of localhost, public websites might execute arbitrary code on a visitor’s device, bypassing long-standing security measures. Oligo researchers have estimated that around 100,000 websites could facilitate this attack, which has already been used in targeted strikes on AI workloads.
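The underlying behaviour is simple: on macOS and Linux, the operating system routes a connection addressed to 0.0.0.0 to the local machine itself, so a request to that address can reach a service that was deliberately bound only to localhost. The following is a minimal sketch of that behaviour, runnable on macOS or Linux; the port and the toy service are arbitrary choices for illustration.

```python
import http.server
import threading
import urllib.request

PORT = 8000  # arbitrary port chosen for the example

# A toy service bound to the loopback interface only, the way many local
# development tools and AI workloads are typically run.
server = http.server.HTTPServer(("127.0.0.1", PORT), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# On macOS and Linux the kernel routes a connection to 0.0.0.0 to the local
# machine, so this request reaches the loopback-only service above. Browsers
# historically did not treat 0.0.0.0 as a local address, so a public web page
# could issue the equivalent request (e.g. via fetch) without being blocked.
response = urllib.request.urlopen(f"http://0.0.0.0:{PORT}/", timeout=2)
print(response.status)  # 200 -- the supposedly local-only service answered

server.shutdown()
```

A malicious page would make the equivalent request from browser JavaScript, which is precisely the path the browser fixes described below aim to close.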

In response to the threat, Apple has promised to address the issue in the upcoming macOS 15 Sequoia beta by blocking the 0.0.0.0 address, and an update to Safari’s WebKit will likewise block connections to this IP. Chrome is considering a similar block so that websites cannot use 0.0.0.0 to sidestep its Private Network Access protection. Mozilla, however, remains cautious: a spokesperson noted that tighter restrictions might cause compatibility issues, and Firefox has not yet implemented the proposed restrictions.

The widespread nature of the vulnerability and the potential for serious security breaches underscore the urgent need for a solution. Users of affected browsers are encouraged to stay updated on patches and fixes as they become available, particularly from browser developers like Apple, Google, and Mozilla.

Musk’s X faces legal action over unauthorised data use in AI training

A consumer group has filed a complaint against Elon Musk’s social media platform X, alleging violations of the General Data Protection Regulation (GDPR) in using user data to train its AI tool, Grok. The complaint, submitted by lawyer Marco Scialdone on behalf of Euroconsumers and Altroconsumo, was lodged with the Irish Data Protection Commission (DPC).

X users recently discovered that their data was being used to train Grok, an AI chatbot that Musk’s company xAI developed, without explicit consent. The complaint accuses X of failing to clearly explain its data usage practices, collecting excessive data, and possibly mishandling sensitive information. Scialdone has called on the DPC to order X to stop using personal data for AI training and to ensure compliance with GDPR. Violations of these regulations can lead to fines as high as 4% of a company’s worldwide annual revenue, making non-compliance potentially very expensive for X.

The complaint also highlights issues with X’s communication regarding its data processing practices. According to Scialdone, X’s privacy policy does not transparently outline the legal basis for using personal data for AI training. The policy mentions using data on a ‘legitimate interest’ basis, which allows data processing if it serves a valid purpose without infringing on users’ rights. However, Scialdone argued that this information is not easily accessible to users. He also stressed that such legal actions would lead to a consistent regulatory approach across different platforms, preventing disparities in user treatment and market inequalities.

Why does this matter?

Musk’s approach to compliance with EU privacy laws has been controversial, raising concerns about X’s adherence to regulatory standards. The DPC’s action signals a potential end to Musk’s relatively unchecked run under GDPR oversight, and X is now the third major tech company to face such allegations, following similar complaints against Meta and LinkedIn. Recently, X has also faced regulatory challenges in the Netherlands and scrutiny under the EU’s Digital Services Act, which could lead to even steeper penalties for non-compliance.

Amazon reveals Mithra to enhance network security

The multinational technology giant has unveiled Mithra, an internal security platform designed to handle the immense scale of the company’s network. Built on a vast graph database, Mithra helps Amazon manage and protect its systems by filtering enormous amounts of data to identify and neutralise malicious domains. Chief Information Security Officer C.J. Moses likens Mithra to a funnel, narrowing down data until human intervention is minimal.

Mithra’s integration with Sonaris, Amazon’s network observation platform, creates a robust defensive net around Amazon’s environments. AI and machine learning are essential for managing the large-scale data, with AI models trained to detect anomalies and potential threats. Generative AI further assists threat analysts by allowing them to interact with data in plain language, enhancing decision-making efficiency.
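Amazon has not published Mithra’s internals, but the funnel Moses describes can be pictured as a chain of filters that discards obviously benign domains and surfaces only a handful of suspicious candidates for human review. The sketch below is purely illustrative; the signals, thresholds, and example domains are all invented for the purpose of the illustration.

```python
from dataclasses import dataclass

@dataclass
class DomainSignal:
    """Hypothetical per-domain features that a graph of observations might yield."""
    name: str
    age_days: int            # how long the domain has been registered
    distinct_sources: int    # how many unrelated hosts were seen contacting it
    anomaly_score: float     # 0..1 score from a (hypothetical) ML anomaly model

def funnel(domains: list[DomainSignal]) -> list[DomainSignal]:
    """Narrow a large candidate set down to a short list for human review."""
    # Stage 1: drop long-established domains contacted from many unrelated sources.
    stage1 = [d for d in domains if d.age_days < 30 or d.distinct_sources < 5]
    # Stage 2: keep only domains the anomaly model scores as suspicious.
    stage2 = [d for d in stage1 if d.anomaly_score > 0.8]
    # Stage 3: rank what remains so analysts see the riskiest candidates first.
    return sorted(stage2, key=lambda d: d.anomaly_score, reverse=True)

candidates = [
    DomainSignal("updates.example-cdn.com", age_days=2000, distinct_sources=900, anomaly_score=0.10),
    DomainSignal("xj9-payload.example", age_days=3, distinct_sources=1, anomaly_score=0.95),
]
for d in funnel(candidates):
    print(d.name, d.anomaly_score)  # only the suspicious newcomer survives the funnel
```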

Amazon’s proactive approach extends beyond technology. The company maintains a strong network of Chief Information Security Officers (CISOs) to facilitate rapid communication and collaboration in times of crisis. The unveiling of Mithra comes as Amazon faces scrutiny over its AI deal with startup Adept and accountability issues for hazardous products in the United States.

TikTok challenges DOJ’s secret evidence request

TikTok and its parent company ByteDance are urging a US appeals court to dismiss the Justice Department’s request to keep parts of its legal case against TikTok confidential. The government aims to file over 15% of its brief and 30% of its evidence in secret, which TikTok argues would hinder its ability to challenge any potentially incorrect factual claims.

The Justice Department, which has not commented publicly, recently filed a classified document outlining security concerns regarding ByteDance’s ownership of TikTok. The document includes declarations from the FBI and other national security agencies.

The government contends that TikTok’s Chinese ownership poses a significant national security threat due to its access to vast amounts of personal data from American users and China’s potential for information manipulation.

In response, TikTok maintains that it has never shared US user data with China and never will, nor does it manipulate video content as alleged. The company suggests appointing a district court judge as a special master to review the classified submissions if the court does not reject the secret evidence.

The Biden administration has asked the court to dismiss lawsuits filed by TikTok, ByteDance, and TikTok creators that aim to block a law requiring ByteDance to divest TikTok’s US assets by 19 January or face a ban. Despite the lack of evidence that the Chinese government has accessed US user data, the Justice Department insists that the potential risk remains too significant to ignore.

US states and lawmakers support TikTok ban

A coalition of 21 states and over 50 US lawmakers has backed the Justice Department’s defence of the law requiring China-based ByteDance to sell TikTok’s US assets by 19 January or face a ban. The collective, led by the attorneys general of Montana and Virginia, argues that TikTok threatens national security and consumer privacy, citing risks of the Chinese Communist Party exploiting user data.

Prominent lawmakers, including US Representative John Moolenaar and Representative Raja Krishnamoorthi, emphasised that the law offers a straightforward solution to mitigate the national security threats posed by TikTok’s ownership structure. The legislative measure, passed by Congress in April, reflects widespread concern over potential data access and surveillance by China.

In response, TikTok, its parent company ByteDance, and a group of TikTok creators have filed lawsuits to block the law. They argue that the ban violates the First Amendment rights of the 170 million Americans who use the app and claim no evidence supports the government’s security concerns.

The US Court of Appeals for the District of Columbia is set to hear oral arguments on the legal challenge on 16 September, amidst the lead-up to the 2024 presidential election. The outcome of this case could significantly impact TikTok’s future operations in the United States.

FTC sues TikTok over child privacy violations

The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.

According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.

FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.

The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The US District Court for the Central District of California will review the case.

OpenAI delays release of anti-cheating tool

OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but it has yet to release the tool. The decision follows an internal debate lasting two years, balancing the company’s commitment to transparency against the risk of deterring users: one survey found nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology.

Concerns have been raised that the tool could disproportionately affect non-native English speakers. OpenAI’s spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees supporting the tool argue that its benefits outweigh the risks, as AI-generated essays can be completed in seconds, posing a significant issue for educators.

The watermarking method would subtly alter token selection in AI-generated text, creating a detectable pattern invisible to human readers. That method is reported to be 99.9% effective, but there are concerns it could be bypassed through translation or text modifications. OpenAI is still determining how to provide access to the detector while preventing misuse.
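OpenAI has not disclosed the details of its scheme, but text watermarking of this kind is commonly described as nudging the model’s token choices toward a keyed, pseudorandom ‘preferred’ subset of the vocabulary, so that watermarked text contains measurably more preferred tokens than ordinary text. The toy sketch below illustrates that general idea only; the vocabulary, keying, bias, and detection logic are invented and do not represent OpenAI’s actual method.

```python
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
SECRET_KEY = "demo-key"  # hypothetical key, known only to the detector

def preferred_tokens(prev_token: str) -> set[str]:
    """Derive a keyed, pseudorandom 'preferred' half of the vocabulary from
    the previous token, so the pattern is invisible without the key."""
    seed = hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest()
    return set(random.Random(seed).sample(VOCAB, k=len(VOCAB) // 2))

def generate(length: int, bias: float = 0.9) -> list[str]:
    """Toy 'model': at each step, pick the next token from the preferred set
    with high probability, which statistically watermarks the output."""
    rng = random.Random(0)
    tokens = ["the"]
    for _ in range(length):
        pool = sorted(preferred_tokens(tokens[-1])) if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def detect(tokens: list[str]) -> float:
    """Fraction of tokens drawn from the preferred set: roughly 0.5 for
    unwatermarked text, noticeably higher for watermarked text."""
    hits = sum(tok in preferred_tokens(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

watermarked = generate(200)
rng = random.Random(1)
unwatermarked = [rng.choice(VOCAB) for _ in range(200)]
print(f"watermarked score:   {detect(watermarked):.2f}")    # well above 0.5
print(f"unwatermarked score: {detect(unwatermarked):.2f}")  # close to 0.5
```

A real deployment would work over a model’s full vocabulary and use a statistical test rather than a raw fraction, but the concern reported in the article applies equally to this sketch: translating or heavily editing the text scrambles the token pattern the detector relies on.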

Despite the effectiveness of watermarking, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared false accusations of AI use. OpenAI is exploring alternative approaches to address these concerns while maintaining AI transparency and credibility.