South Korea fines Meta $15.7 million for privacy violations

South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, 21.62 billion won ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers. The Personal Information Protection Commission found that Meta gathered details on nearly one million South Korean users, including their religion, political views, and sexual orientation, without obtaining the necessary consent. This information was reportedly used by around 4,000 advertisers.

The commission revealed that Meta analysed user interactions, such as pages liked and ads clicked, to create targeted ad themes based on sensitive personal data. Some users were even categorised by highly private attributes, including identifying as North Korean defectors or LGBTQ+. Additionally, Meta allegedly denied users’ requests to access their information and failed to secure data for at least ten users, leading to a data breach.

Meta has not yet issued a statement regarding the fine. This penalty underscores South Korea’s commitment to strict data privacy enforcement as concerns over digital privacy intensify worldwide.

UNDP Bahrain and Derasat partner for digital transformation report

The United Nations Development Programme (UNDP) Bahrain and the Bahrain Center for Strategic, International, and Energy Studies (Derasat) have embarked on a significant partnership to develop the National Human Development Report (NHDR), titled ‘Digital Transformation: A Roadmap for Progress.’ The collaboration aims to harness digital transformation as a strategic tool for fostering inclusive growth in the Kingdom, aligning with Bahrain Vision 2030 and the Sustainable Development Goals (SDGs).

In this context, the NHDR will comprehensively analyse how digital transformation can enhance human development outcomes in Bahrain, addressing critical issues such as the digital divide, privacy concerns, cybersecurity, and the integration of digital technologies into public services. Furthermore, the report will benchmark Bahrain’s digital landscape against regional and international standards, offering actionable insights and recommendations to improve digital inclusion, protect privacy, and secure digital infrastructures.

Moreover, UNDP Bahrain and Derasat highlight the importance of stakeholder engagement in developing the NHDR. By collaborating with government entities, civil society organisations, and the private sector, the partners aim to incorporate diverse perspectives and ensure alignment with Bahrain’s national development goals.

International Red Cross adopts resolution to shield civilians from harmful cyber activities in armed conflicts

The 34th International Conference of the Red Cross and Red Crescent has adopted a new resolution to protect civilians and essential infrastructure from the potential risks posed by information and communication technology (ICT) activities during armed conflict. Recognising the increased likelihood of ICTs being used in future conflicts, the resolution addresses the need to safeguard civilian lives and critical systems from the unintended human costs of these operations.

The resolution highlights concerns over the malicious use of ICT capabilities by parties in conflict, noting that such activities could impact protected persons and objects, including essential infrastructure like power, water, and healthcare systems. It underscores that these civilian objects are crucial for survival and should remain unaffected during hostilities. The resolution further emphasises the importance of preventing these activities from crossing international borders, which could inadvertently impact civilians in other regions.

Acknowledging the limited resources and capacities of some states and humanitarian organisations, the resolution also draws attention to the vulnerability this may create. Without adequate defences, states and components of the Red Cross and Red Crescent Movement could face greater risks from cyber incursions during conflict.

Another focus of the resolution is the potential for civilians to become involved in cyber activities related to conflict, either by conducting or supporting operations. It points to the need for greater awareness of the risks and legal implications, as civilians may not fully understand the consequences of their involvement in ICT-related activities in conflict situations.

The resolution also calls for further study and dialogue on how international humanitarian law (IHL) applies to ICT activities in warfare. It acknowledges that while IHL traditionally protects civilians and critical infrastructure during conflict, the unique characteristics of cyberspace may require additional interpretation and understanding.

By adopting this resolution, the Red Cross aims to ensure that, as the nature of conflict changes, a strong international framework remains to protect civilians and essential infrastructure from the emerging threats posed by cyber activities in armed conflict.

US federal agency investigates how Meta uses consumer financial data for targeted advertising

The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations. This federal investigation was revealed in a recent filing that Meta submitted to the Securities and Exchange Commission (SEC).

The filing indicates that the CFPB notified Meta on 18 September that it was evaluating whether the company’s actions violate the Consumer Financial Protection Act, which is designed to protect consumers from unfair and deceptive financial practices. The status of the investigation remains uncertain, with the filing noting that the CFPB could soon initiate a lawsuit seeking financial penalties and equitable relief.

Meta, the parent company of Instagram and Facebook, is facing increased scrutiny from regulators and state attorneys general regarding various concerns, including its privacy practices.

In the SEC filing, Meta disclosed that the CFPB has formally notified the company of an investigation into its alleged receipt and use, for advertising purposes, of financial information obtained from third parties through specific advertising tools. The inquiry explicitly targets advertising related to ‘financial products and services,’ although it remains to be seen whether the scrutiny pertains to Facebook, Instagram, or both platforms.

While a Meta spokesperson refrained from commenting on the matter, the company stated in the filing that it disputes the allegations and believes any enforcement action would be unjustified. The CFPB also opted not to provide additional comments.

Amid this scrutiny, Meta recently reported $41 billion in revenue for the third quarter, a 19% increase from the previous year. A significant portion of this revenue is generated from its targeted advertising business, which has faced criticism from the Federal Trade Commission (FTC) and European regulators for allegedly mishandling user data and violating privacy rights.

In 2019, Meta settled privacy allegations related to the Cambridge Analytica scandal by paying the FTC $5 billion after it was revealed that the company had improperly shared Facebook user data with the firm for voter profiling. Last year, the European Union fined Meta $1.3 billion for improperly transferring user data from Europe to the United States.

Google researchers discover first vulnerability using AI

Google researchers announced a breakthrough in cybersecurity, revealing that they have discovered a vulnerability using a large language model. The flaw is an exploitable memory-safety issue in SQLite, a widely used open-source database engine, and it marks a significant milestone: it is believed to be the first public instance of an AI tool uncovering a previously unknown flaw in real-world software.

The vulnerability was reported to SQLite’s developers in early October, and they addressed it on the same day it was identified. Notably, the bug was caught before it appeared in an official release, so SQLite users were unaffected. Google emphasised this development as a demonstration of AI’s significant potential for enhancing cybersecurity defences.

The initiative is part of a collaborative project called Big Sleep, which involves Google Project Zero and Google DeepMind, stemming from previous efforts focused on AI-assisted vulnerability research.

Many companies, including Google, typically employ a technique known as ‘fuzzing,’ in which software is tested by feeding it random or invalid data to uncover vulnerabilities. However, Google noted that fuzzing often falls short at identifying hard-to-find bugs. The researchers expressed optimism that AI could help bridge this gap. ‘We see this as a promising avenue to achieve a defensive advantage,’ they stated.
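The mutate-and-execute loop at the heart of fuzzing is simple enough to sketch. Below is a minimal, illustrative mutation fuzzer in Python; the parse_record target named in the usage comment is a hypothetical stand-in, not Google’s or SQLite’s actual tooling:

```python
import random

def mutate(seed: bytes) -> bytes:
    """Return a copy of a known-good input with a few bytes randomly corrupted."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)  # overwrite one byte with a random value
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 100_000) -> None:
    """Repeatedly feed mutated inputs to the target and report any crash."""
    for i in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception as exc:
            # A real fuzzer would save the crashing input for later triage.
            print(f"iteration {i}: {type(exc).__name__} on input {sample!r}")

# Hypothetical usage: fuzz(parse_record, seed=b"id=1;name=alice")
```

Production fuzzers, such as those behind OSS-Fuzz, layer coverage feedback, corpus management, and sanitisers on top of this basic loop; the hard-to-find bugs the researchers mention are the ones that escape even these refinements.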

The identified vulnerability was particularly intriguing because it was missed by existing testing frameworks, including OSS-Fuzz and SQLite’s internal systems. One of the key motivations behind the Big Sleep project is the ongoing challenge of vulnerability variants, with more than 40% of zero-day vulnerabilities identified in 2022 being variants of previously reported issues.

TikTok faces lawsuit in France after teen suicides linked to platform

Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds. Filed at the Créteil judicial court, the joint case seeks to hold TikTok accountable for what the families describe as dangerous content promoting self-harm, eating disorders, and suicide.

The families’ lawyer, Laure Boutron-Marmion, argues that TikTok, as a company offering its services to minors, must address its platform’s risks and shortcomings. She emphasised the need for TikTok’s legal liability to be recognised, especially given that its algorithm is often blamed for pushing disturbing content. TikTok, like Meta’s Facebook and Instagram, faces multiple lawsuits worldwide accusing these platforms of targeting minors in ways that harm their mental health.

TikTok has previously stated it is committed to protecting young users’ mental well-being and has invested in safety measures, according to CEO Shou Zi Chew’s remarks to US lawmakers earlier this year.

Crypto firm Gotbit’s founder faces fraud charges

Aleksei Andriunin, the founder of cryptocurrency firm Gotbit, has been indicted in the US for alleged involvement in a conspiracy to manipulate cryptocurrency markets. The Justice Department claims that Andriunin and his firm provided market manipulation services that artificially inflated trading volumes for various cryptocurrency companies from 2018 to 2024.

The superseding indictment also names Gotbit’s directors, Fedor Kedrov and Qawi Jalili, who were already charged earlier in October. Prosecutors allege that these actions aimed to distort the cryptocurrency markets, with several companies, including some in the United States, reportedly benefitting from these tactics.

If convicted, Andriunin faces significant penalties, with wire fraud charges carrying a potential 20-year prison sentence. He could also face an additional five years for conspiracy charges. The allegations form part of a larger crackdown on crypto market manipulation, which has already led to several arrests and asset seizures worth $25 million.

Recent moves by federal prosecutors highlight a more aggressive stance on crypto-related fraud. They have targeted multiple firms, including Gotbit, and several leaders have already agreed to plead guilty. The crackdown aims to strengthen transparency and curb malpractice in the cryptocurrency market.

EU moves to formalise disinformation code under DSA

The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA). According to Paul Gordon, assistant director at Ireland’s media regulator Coimisiún na Meán, efforts are underway to finalise the transition by January. He emphasised that the new regulations should lead to more meaningful engagement from platforms, moving beyond mere compliance.

Originally established in 2022 and signed by 44 companies, including Google, Meta, and TikTok, the code outlines commitments to combat online disinformation, such as increasing transparency in political advertising and enhancing cooperation during elections. A spokesperson for the European Commission confirmed that the aim is for the code to be recognised as a ‘Code of Conduct’ under the DSA, which already mandates content moderation measures for online platforms.

The DSA, which has applied to all platforms since February, imposes strict rules on the largest online services, requiring them to mitigate risks associated with disinformation. The new code will help these platforms demonstrate compliance with the DSA’s obligations, as assessed by the Commission and the European Board for Digital Services. However, no specific timeline has been provided for the code’s formal implementation.

Chinese military adapts Meta’s Llama for AI tool

China’s People’s Liberation Army (PLA) has adapted Meta’s open-source AI model, Llama, to create a military-focused tool named ChatBIT. Developed by researchers from PLA-linked institutions, including the Academy of Military Science, ChatBIT leverages an earlier version of Llama, fine-tuned for military decision-making and intelligence-processing tasks. The tool reportedly performs better than some alternative AI models, though it falls short of OpenAI’s GPT-4.

Meta, which supports open innovation, has restrictions against military uses of its models. However, the open-source nature of Llama limits Meta’s ability to prevent unauthorised adaptations, such as ChatBIT. In response, Meta affirmed its commitment to ethical AI use and noted the need for US innovation to stay competitive as China intensifies its AI research investments.

China’s approach reflects a broader trend, as its institutions reportedly employ Western AI technologies for areas like airborne warfare and domestic security. With increasing US scrutiny over the national security implications of open-source AI, the Biden administration has moved to regulate AI’s development, balancing its potential benefits with growing risks of misuse.

Musk’s platform under fire for inadequate fact-checking

Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims its crowd-sourced fact-checking feature, Community Notes, is struggling to curb misinformation about the upcoming US election. According to a CCDH report, of 283 analysed posts containing misleading information, only 26% displayed corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts gained over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.

Community Notes was launched to empower users to flag inaccurate content. However, critics argue this system alone may be insufficient to handle misinformation during critical events like elections. Calls for X to strengthen its safety measures follow the platform’s recent legal loss to CCDH, whose research had faulted it for an increase in hate speech. The report also highlights Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk himself has been accused of spreading misinformation.

In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation on X’s AI chatbot, which has reportedly circulated false claims related to the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation effectively remains under close watch as the election approaches.