Meta is expanding the reach of its AI models, making its Llama AI series available to US government agencies and private sector partners involved in national security projects. Partnering with firms like Lockheed Martin, Oracle, and Scale AI, Meta aims to assist government teams and contractors with applications such as intelligence gathering and computer code generation for defence needs.
Although Meta’s policies generally restrict using Llama for military purposes, the company is making an exception for these government partners. This decision follows concerns over foreign misuse of the technology, particularly after reports revealed that researchers affiliated with China’s military had used an earlier Llama model without authorisation for intelligence-related applications.
The choice to integrate open AI models like Llama into defence remains controversial. Critics argue that AI’s data security risks and its tendency to generate incorrect outputs make it unreliable in military contexts. Recent findings from the AI Now Institute caution that AI tools could be misused by adversaries due to data vulnerabilities, potentially putting sensitive information at risk.
Meta maintains that open AI can accelerate research and enhance security, though US military adoption remains limited. While some big tech employees oppose military-linked projects, Meta emphasises its commitment to strengthening national security while safeguarding its technology from unauthorised foreign use.
A federal judge has dismissed a proposed class-action lawsuit claiming Google illegally profited from scams involving Google Play gift cards. The plaintiff, Judy May, alleged she lost $1,000 after a scammer posed as a government official, instructing her to purchase Google Play gift cards to claim grant money. She argued that Google should have warned consumers about such scams on the card packaging.
However, Judge Beth Labson Freeman ruled that Google was not responsible for May’s losses, as the tech giant neither caused her financial harm nor knowingly benefited from the stolen funds. Freeman also dismissed claims that Google’s 15% to 30% commission on purchases using the gift cards was linked to the initial fraud.
The Federal Trade Commission reported that Americans lost $217 million to gift card fraud in 2023, with Google Play cards implicated in roughly 20% of reported cases. Though May’s case was dismissed, the judge allowed her the option to refile.
Meta has announced an extended ban on new political ads following the United States election, aiming to counter misinformation in the tense post-election period. In a blog post on Monday, the Facebook parent company explained that the suspension will remain in place until later in the week, preventing any new political ads from being introduced immediately after the election. Ads that were served at least once before the restriction will still be displayed, but editing options will be limited.
Meta’s decision to extend its ad restriction is part of its ongoing policy of preventing last-minute claims that could be difficult to verify. The social media giant implemented a similar measure in the last election cycle, underscoring the need for extra caution as elections unfold.
Last year, Meta also barred political advertisers and regulated industries from using its generative AI-based ad products, reflecting a continued focus on reducing potential misinformation through stricter ad controls and ad content regulations.
South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, 21.62 billion won ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers. The Personal Information Protection Commission found that Meta gathered details on nearly one million South Korean users, including their religion, political views, and sexual orientation, without obtaining the necessary consent. This information was reportedly used by around 4,000 advertisers.
The commission revealed that Meta analysed user interactions, such as pages liked and ads clicked, to create targeted ad themes based on sensitive personal data. Some users were even categorised by highly private attributes, including identifying as North Korean defectors or LGBTQ+. Additionally, Meta allegedly denied users’ requests to access their information and failed to secure data for at least ten users, leading to a data breach.
Meta has not yet issued a statement regarding the fine. This penalty underscores South Korea’s commitment to strict data privacy enforcement as concerns over digital privacy intensify worldwide.
The United Nations Development Programme (UNDP) Bahrain and the Bahrain Center for Strategic, International, and Energy Studies (Derasat) have embarked on a significant partnership to develop the National Human Development Report (NHDR), titled ‘Digital Transformation: A Roadmap for Progress.’ The collaboration aims to harness digital transformation as a strategic tool for fostering inclusive growth in the Kingdom, aligning with Bahrain Vision 2030 and the Sustainable Development Goals (SDGs).
In this context, the NHDR will comprehensively analyse how digital transformation can enhance human development outcomes in Bahrain, addressing critical issues such as the digital divide, privacy concerns, cybersecurity, and integrating digital technologies into public services. Furthermore, the report will benchmark Bahrain’s digital landscape against regional and international standards, offering actionable insights and recommendations to improve digital inclusion, protect privacy, and secure digital infrastructures.
Moreover, UNDP Bahrain and Derasat highlight the importance of stakeholder engagement in developing the NHDR. By collaborating with government entities, civil society organisations, and the private sector, the partners aim to incorporate diverse perspectives and ensure alignment with Bahrain’s national development goals.
The 34th International Conference of the Red Cross and Red Crescent has adopted a new resolution to protect civilians and essential infrastructure from the potential risks posed by ICT activities during armed conflict. Recognising the increased likelihood of information and communication technologies (ICTs) being used in future conflicts, the resolution addresses the need to safeguard civilian lives and critical systems from the unintended human costs of these operations.
The resolution highlights concerns over the malicious use of ICT capabilities by parties in conflict, noting that such activities could impact protected persons and objects, including essential infrastructure like power, water, and healthcare systems. It underscores that these civilian objects are crucial for survival and should remain unaffected during hostilities. The resolution further emphasises the importance of preventing these activities from crossing international borders, which could inadvertently impact civilians in other regions.
Acknowledging the limited resources and capacities of some states and humanitarian organisations, the resolution also draws attention to the vulnerability this may create. Without adequate defences, states and components of the Red Cross and Red Crescent Movement could face greater risks from cyber incursions during conflict.
Another focus of the resolution is the potential for civilians to become involved in cyber activities related to conflict, either by conducting or supporting operations. It points to the need for greater awareness of the risks and legal implications, as civilians may not fully understand the consequences of their involvement in ICT-related activities in conflict situations.
The resolution also calls for further study and dialogue on how international humanitarian law (IHL) applies to ICT activities in warfare. It acknowledges that while IHL traditionally protects civilians and critical infrastructure during conflict, the unique characteristics of cyberspace may require additional interpretation and understanding.
By adopting this resolution, the Red Cross aims to ensure that, as the nature of conflict changes, a strong international framework remains to protect civilians and essential infrastructure from the emerging threats posed by cyber activities in armed conflict.
The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations. This federal investigation was revealed in a recent filing that Meta submitted to the Securities and Exchange Commission (SEC).
The filing indicates that the CFPB notified Meta on 18 September that it was evaluating whether the company’s actions violated the Consumer Financial Protection Act, which is designed to protect consumers from unfair and deceptive financial practices. The status of the investigation remains uncertain, with the filing noting that the CFPB could initiate a lawsuit soon, seeking financial penalties and equitable relief.
Meta, the parent company of Instagram and Facebook, is facing increased scrutiny from regulators and state attorneys general regarding various concerns, including its privacy practices.
In the SEC filing, Meta disclosed that the CFPB has formally notified the company of an investigation into the alleged receipt of financial information from third parties through specific advertising tools, and its use for advertising. The inquiry explicitly targets advertising related to ‘financial products and services,’ although it remains to be seen whether the scrutiny pertains to Facebook, Instagram, or both platforms.
While a Meta spokesperson refrained from commenting on the matter, the company stated in the filing that it disputes the allegations and believes any enforcement action would be unjustified. The CFPB also opted not to provide additional comments.
Amid this scrutiny, Meta recently reported $41 billion in revenue for the third quarter, a 19 percent increase from the previous year. A significant portion of this revenue is generated from its targeted advertising business, which has faced criticism from the Federal Trade Commission (FTC) and European regulators for allegedly mishandling user data and violating privacy rights.
In 2019, Meta settled privacy allegations related to the Cambridge Analytica scandal by paying the FTC $5 billion after it was revealed that the company had improperly shared Facebook user data with the firm for voter profiling. Last year, the European Union fined Meta $1.3 billion for improperly transferring user data from Europe to the United States.
Google researchers have announced a cybersecurity breakthrough: the discovery of a software vulnerability using a large language model. The flaw, an exploitable memory-safety issue in SQLite, a widely used open-source database engine, is believed to be the first public instance of an AI tool uncovering a previously unknown, exploitable bug in real-world software.
Google reported the vulnerability to SQLite’s developers in early October, and they fixed it the same day. Notably, the bug was caught before it appeared in an official release, so SQLite users were never exposed. Google highlighted the discovery as a demonstration of AI’s significant potential for enhancing cybersecurity defences.
The initiative is part of a collaborative project called Big Sleep, which involves Google Project Zero and Google DeepMind, stemming from previous efforts focused on AI-assisted vulnerability research.
Many companies, including Google, typically employ a technique known as ‘fuzzing,’ in which software is tested by feeding it random or invalid data to uncover vulnerabilities. However, Google noted that fuzzing often falls short at identifying hard-to-find bugs, and the researchers expressed optimism that AI could help bridge this gap. ‘We see this as a promising avenue to achieve a defensive advantage,’ they stated.
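For readers unfamiliar with the technique, fuzzing amounts to throwing large volumes of random input at a program and flagging any failure it is not documented to produce. The sketch below is a toy illustration only, not Google’s tooling: `parse_flag` is a hypothetical target seeded with a deliberate bug (it assumes its input is non-empty), and a naive fuzzer surfaces it by recording unexpected exception types.

```python
import random
import string

def parse_flag(text: str) -> bool:
    """Hypothetical target under test. Documented behaviour: accept
    strings starting with '+' or '-', raise ValueError otherwise.
    Hidden bug: an empty string raises IndexError instead."""
    if text[0] == "+":
        return True
    if text[0] == "-":
        return False
    raise ValueError("expected input starting with '+' or '-'")

def fuzz(target, trials: int = 1000, seed: int = 0) -> list:
    """Feed random short strings to `target`, ignoring its documented
    ValueError and recording any other exception as a potential bug."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        length = rng.randint(0, 8)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except ValueError:
            pass  # expected, documented failure mode
        except Exception as exc:
            crashes.append((data, exc))  # unexpected: flag for triage
    return crashes
```

Real-world fuzzers such as those behind OSS-Fuzz are coverage-guided and far more sophisticated, but the workflow is the same: generate inputs, run the target, and triage any unexpected failure.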
The identified vulnerability was particularly intriguing because it was missed by existing testing frameworks, including OSS-Fuzz and SQLite’s internal systems. One of the key motivations behind the Big Sleep project is the ongoing challenge of vulnerability variants, with more than 40% of zero-day vulnerabilities identified in 2022 being variants of previously reported issues.
Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds. Filed at the Créteil judicial court, the joint case seeks to hold TikTok accountable for what the families describe as dangerous content promoting self-harm, eating disorders, and suicide.
The families’ lawyer, Laure Boutron-Marmion, argues that TikTok, as a company offering its services to minors, must address its platform’s risks and shortcomings. She emphasised the need for TikTok’s legal liability to be recognised, especially given that its algorithm is often blamed for pushing disturbing content. TikTok, like Meta’s Facebook and Instagram, faces multiple lawsuits worldwide accusing these platforms of targeting minors in ways that harm their mental health.
TikTok has previously stated it is committed to protecting young users’ mental well-being and has invested in safety measures, according to CEO Shou Zi Chew’s remarks to US lawmakers earlier this year.
Aleksei Andriunin, the founder of cryptocurrency firm Gotbit, has been indicted in the US for alleged involvement in a conspiracy to manipulate cryptocurrency markets. The Justice Department claims that Andriunin and his firm provided market manipulation services to increase artificial trading volumes for various cryptocurrency companies from 2018 to 2024.
The superseding indictment also names Gotbit’s directors, Fedor Kedrov and Qawi Jalili, who were already charged earlier in October. Prosecutors allege that these actions aimed to distort the cryptocurrency markets, with several companies, including some in the United States, reportedly benefitting from these tactics.
If convicted, Andriunin faces significant penalties, with wire fraud charges carrying a potential 20-year prison sentence. He could also face an additional five years for conspiracy charges. The allegations form part of a larger crackdown on crypto market manipulation, which has already led to several arrests and asset seizures worth $25 million.
Recent moves by federal prosecutors highlight a more aggressive stance on crypto-related fraud. They have targeted multiple firms, including Gotbit, and several leaders have already agreed to plead guilty. The crackdown aims to strengthen transparency and curb malpractice in the cryptocurrency market.