Chinese military adapts Meta’s Llama for AI tool

China’s People’s Liberation Army (PLA) has adapted Meta’s open-source AI model, Llama, to create a military-focused tool named ChatBIT. Developed by researchers from PLA-linked institutions, including the Academy of Military Science, ChatBIT builds on an earlier version of Llama, fine-tuned for military decision-making and intelligence-processing tasks. The tool reportedly outperforms some alternative AI models, though it falls short of OpenAI’s GPT-4.

Meta, which champions open innovation, prohibits military uses of its models. However, Llama’s open-source nature limits Meta’s ability to prevent unauthorised adaptations such as ChatBIT. In response, Meta reaffirmed its commitment to ethical AI use and argued that US innovation must stay competitive as China intensifies its investment in AI research.

China’s approach reflects a broader trend, as its institutions reportedly employ Western AI technologies for areas like airborne warfare and domestic security. With increasing US scrutiny over the national security implications of open-source AI, the Biden administration has moved to regulate AI’s development, balancing its potential benefits with growing risks of misuse.

Musk’s platform under fire for inadequate fact-checking

Elon Musk’s social media platform, X, is facing criticism from the Center for Countering Digital Hate (CCDH), which claims the platform’s crowd-sourced fact-checking feature, Community Notes, is failing to curb misinformation about the upcoming US election. According to a CCDH report, of 283 analysed posts containing misleading information, only 26% (74 posts) displayed corrective notes visible to all users, allowing false narratives to reach massive audiences. The 209 uncorrected posts amassed over 2.2 billion views, raising concerns over the platform’s commitment to truth and transparency.

Community Notes was launched to empower users to flag inaccurate content, but critics argue the system alone may be insufficient to handle misinformation during critical events like elections. Calls for X to strengthen its safety measures follow the platform’s recent courtroom loss to CCDH, which has faulted X for a rise in hate speech. The report also cites Musk’s endorsement of Republican candidate Donald Trump as a potential complicating factor, since Musk himself has been accused of spreading misinformation.

In response to the ongoing scrutiny, five US state officials urged Musk in August to address misinformation generated by X’s AI chatbot, Grok, which has reportedly circulated false claims about the November election. X has yet to respond to these calls for stricter safeguards, and its ability to manage misinformation remains under close watch as the election approaches.

AI chatbots mimicking deceased teens spark outrage

The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics denouncing the platform’s moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described these chatbots as a ‘reprehensible’ failure of moderation.

Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai, claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. After being informed of the Russell and Ghey chatbots, it removed them from the platform, saying it strives to protect users while acknowledging the challenges of moderating AI-generated content.

Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.

The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.

Democratic senators urge Biden administration to address human rights in UN Cybercrime Convention

Six Democratic senators have urged the Biden administration to address critical concerns about human rights and cybersecurity in the upcoming United Nations Cybercrime Convention, which is set for a vote at the UN General Assembly. In a letter to top officials, including Secretary of State Antony Blinken and National Security Adviser Jake Sullivan, the senators—Tim Kaine, Jeff Merkley, Ed Markey, Chris Van Hollen, Ron Wyden, and Cory Booker—expressed alarm over the convention’s handling of privacy rights, freedom of expression, and cybersecurity.

The letter warns that the current version of the treaty, supported by US lead negotiator Ambassador Deborah McCarthy, risks aligning the US with repressive regimes under the pretence of cybersecurity. The senators voiced concerns that the treaty, which originated as a Russian proposal in 2017, could enable authoritarian states to legitimise surveillance, suppress dissent, and infringe on human rights globally.

While the Biden administration has sought revisions to the text, the senators argued the changes do not go far enough. The treaty’s provisions would require countries to enact laws granting local law enforcement access to electronic data, which the senators say threatens privacy rights and could enable surveillance without judicial oversight. The administration’s lead negotiator has reportedly warned of serious fallout if the US fails to back the treaty.

The letter also criticises the treaty for lacking clear protections for journalists and security researchers, whose work often involves uncovering vulnerabilities that malicious actors could exploit. The senators warn that, without explicit safeguards, this omission could weaken cybersecurity and leave sensitive systems more vulnerable to attack.

China-linked hackers allegedly breach US telecoms, targeting high-profile figures

China-linked hackers have reportedly breached telecommunications systems, targeting members of former President Donald Trump’s family as well as Biden administration officials, according to the New York Times. Those affected reportedly include Trump’s son Eric Trump, his son-in-law Jared Kushner, and Senate Majority Leader Chuck Schumer.

Concerns surrounding this hacking group, known as ‘Salt Typhoon’, have intensified following media reports of its activities. Earlier this month, the Wall Street Journal reported that the group accessed broadband providers’ networks and gathered data from systems used by the federal government for court-authorised wiretapping.

The State Department and Trump family representatives did not respond to Reuters’ requests for comment. The White House, National Security Agency, and Cybersecurity and Infrastructure Security Agency also did not immediately reply. The Chinese Embassy in Washington likewise did not respond, though Beijing routinely denies involvement in cyberespionage.

CTGT helps firms deploy AI with safety and transparency

CTGT, a startup founded by Cyril Gorlla and Trevor Tuttle, aims to improve the safety and transparency of AI models. Operating in a field known as ‘explainable AI’, CTGT’s platform identifies biased outputs and hallucinations in AI models, with a particular focus on applications in healthcare, finance, and other high-stakes industries. Rather than training additional models to oversee the AI, CTGT employs mathematically guaranteed interpretability techniques, allowing companies to identify errors more efficiently and accurately.

CEO Gorlla highlighted the dangers of relying on inaccurate or biased AI decisions, emphasising that models are increasingly deployed in critical areas where errors can have serious consequences. CTGT’s clients include three unnamed Fortune 10 companies, one of which used the platform to correct biases in a facial recognition system. By offering both managed and on-premises solutions, CTGT also addresses data privacy concerns, giving companies control over their information without compromising security.

CTGT has gained support from major investors, including Mark Cuban and the co-founder of Zapier, and is a graduate of the Character Labs accelerator. As the startup expands, it plans to build out its engineering team and enhance its platform to meet rising demand for AI interpretability. Analytics firm MarketsandMarkets estimates that the explainable AI sector could reach $16.2 billion by 2028, a promising outlook for companies focused on AI safety and transparency.

ForceField offers new solution to combat deepfakes and AI deception

ForceField is unveiling its new technology at TechCrunch Disrupt 2024, introducing tools aimed at fighting deepfakes and manipulated content. Unlike platforms that merely flag AI-generated media, ForceField authenticates content directly from the capturing device, ensuring the integrity of digital evidence. Using its HashMarq API, the startup verifies the authenticity of data streams by generating a secure digital signature in real time.

The company uses blockchain-based smart contracts to safeguard content, without relying on cryptocurrencies or web3 products. The system authenticates data collected across various sources, from mobile apps to surveillance cameras. By recording metadata such as time, location, and surrounding signals, ForceField helps journalists, law enforcement, and organisations verify the accuracy of submitted media.
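ForceField has not published the internals of HashMarq, but the general technique the company describes, hashing a data stream at capture time and binding in metadata such as time and location under a device-held key, can be sketched in a few lines. The Python sketch below is purely illustrative: it substitutes a standard-library HMAC for a real public-key digital signature, and every name in it (DEVICE_KEY, authenticate_stream, verify) is hypothetical rather than part of ForceField’s API.

    import hashlib
    import hmac
    import json
    import time
    from typing import Iterable

    # Hypothetical per-device secret; a real system would keep an asymmetric
    # key in the device's secure hardware rather than a shared secret.
    DEVICE_KEY = b"per-device secret provisioned at manufacture"

    def authenticate_stream(chunks: Iterable[bytes], location: str) -> dict:
        """Hash a media stream chunk by chunk and tag it with signed metadata."""
        digest = hashlib.sha256()
        for chunk in chunks:  # incremental hashing also works on live streams
            digest.update(chunk)
        record = {
            "content_sha256": digest.hexdigest(),
            "captured_at": int(time.time()),  # bind capture time to the record
            "location": location,             # bind location to the record
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["tag"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify(record: dict, chunks: Iterable[bytes]) -> bool:
        """Recompute the hash and tag; any edit to content or metadata fails."""
        digest = hashlib.sha256()
        for chunk in chunks:
            digest.update(chunk)
        if digest.hexdigest() != record["content_sha256"]:
            return False
        claimed = {k: v for k, v in record.items() if k != "tag"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["tag"])

In a production design the tag would be a signature verifiable with a public key, and the resulting record could be anchored on a ledger, in line with the smart-contract approach described above, so that verifiers never need access to the device’s secret.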

ForceField was inspired by CEO MC Spano’s personal experience in 2018, when she struggled to submit video evidence following an assault. Her frustration with the justice system sparked the creation of technology that could simplify evidence submission and ensure its acceptance. Now the startup is working with clients such as Erie Insurance and plans to launch commercially by early 2025, focusing initially on the insurance sector but with applications in media and law enforcement.

The company, which is entirely woman-led, has gained financial backing from several angel investors and strategic partnerships. Spano aims to raise a seed round by year’s end, highlighting the importance of diversity in tech leadership. As AI-generated content continues to flood the internet, ForceField’s tools offer a new way to validate authenticity and restore trust in digital information.

TikTok ‘money glitch’ prompts JP Morgan fraud lawsuits

JP Morgan Chase has initiated lawsuits against customers accused of exploiting a glitch to withdraw large sums from its ATMs. The viral ‘infinite money glitch’ trend on TikTok involved users writing large cheques to themselves, depositing them, and withdrawing the money before the cheques were returned as invalid.

The lawsuits target two individuals and two businesses, demanding the return of funds with interest, reimbursement of overdraft fees, and coverage of legal expenses. In a court filing, JP Morgan revealed that one incident involved a $335,000 cheque deposited on 29 August, with over $290,000 still owed after the cheque was deemed counterfeit.

Bank officials stressed their commitment to fraud prevention, describing bank fraud as a serious crime in court documents. The total amount linked to the defendants in the lawsuits exceeds $660,000. Typically, banks permit customers to withdraw only part of a cheque’s value until it clears.

The Wall Street Journal recently reported that the bank closed the loophole shortly after the glitch went viral. An ongoing investigation by JP Morgan is reviewing thousands of potential fraud cases tied to the incident.

AI startup Sierra hits $4.5 billion valuation

Sierra, a young AI software startup co-founded by former Salesforce co-CEO Bret Taylor, has secured $175 million in new funding led by Greenoaks Capital. This latest round gives the company a valuation of $4.5 billion, a significant jump from its earlier valuation of nearly $1 billion. Investors such as Thrive Capital, Iconiq, Sequoia, and Benchmark have also backed the firm.

Founded just a year ago, Sierra has already crossed $20 million in annualised revenue, focusing on selling AI-powered customer service chatbots to enterprises. It works with major clients, including WeightWatchers and Sirius XM. The company claims its technology reduces ‘hallucinations’ in large language models, ensuring reliable AI interactions for businesses.

The rising valuation reflects investor enthusiasm for AI applications that generate steady revenue, a shift in focus from expensive foundation models to enterprise solutions. Sierra operates in a competitive space, facing rivals such as Salesforce and Forethought, but aims to stand out through more dependable AI performance.

Bret Taylor, who also chairs OpenAI’s board, co-founded Sierra alongside former Google executive Clay Bavor. Taylor previously held leadership roles at Salesforce and oversaw Twitter’s board during its takeover by Elon Musk. Bavor, who joined Google in 2005, played key roles managing Gmail and Google Drive.

Luxottica founder’s son faces probe over alleged illegal data access

Italian authorities have placed Leonardo Maria Del Vecchio, son of the late billionaire founder of Luxottica, and three others under house arrest as part of a probe into suspected illegal access to state databases. Del Vecchio, whose father created the Ray-Ban eyewear empire, is accused of employing a private intelligence agency, allegedly managed by a former police officer, to gather confidential data. The alleged access was reportedly linked to a family dispute over inheritance.

Del Vecchio’s lawyer, Maria Emanuela Mascalchi, said her client is ‘eagerly awaiting’ the investigation’s conclusion, maintaining he has ‘nothing to do’ with the allegations and is, if anything, a victim of the situation. Prosecutors allege that the intelligence agency illegally accessed data from state systems, including tax, police, and financial databases, and that the information was reportedly used to blackmail business figures or sold to third parties.

The probe, which covers activity dating back to at least 2019 and continuing until March 2024, highlights concerns about a lucrative market for sensitive information in Italy. Italy’s national anti-mafia prosecutor, Giovanni Melillo, remarked that the case has raised alarm over the existence of an underground market for confidential data, now operating on an industrial scale.

This case follows a recent investigation into a significant data breach at Italy’s largest bank, Intesa Sanpaolo, suggesting a wider issue of data misuse in the country.