Brazilian President Luiz Inácio Lula da Silva has condemned Meta’s decision to discontinue its fact-checking program in the United States, describing the move as extremely serious. Speaking in Brasília on Thursday, Lula insisted that digital communication demands the same accountability as traditional media, and he announced plans to meet with government officials to discuss the matter.
Meta’s recent decision has prompted Brazilian prosecutors to seek clarification on whether the changes will affect the country. The company has been given 30 days to respond as part of an ongoing investigation into how social media platforms address misinformation and online violence in Brazil.
Justice Alexandre de Moraes of Brazil’s Supreme Court, known for his strict oversight of tech companies, reiterated that social media firms must adhere to Brazilian laws to continue operating in the country. Last year, he temporarily suspended X (formerly Twitter) over non-compliance with local regulations.
Meta has so far declined to comment on the matter in Brazil, fueling concerns over its commitment to tackling misinformation globally. The outcome of Brazil’s inquiry could have broader implications for how tech firms balance local laws with global policy changes.
Business email compromise (BEC) scams are on the rise, targeting companies through highly deceptive tactics. These scams involve cybercriminals hacking into legitimate email accounts and tricking victims into transferring large sums of money. Recently, a small business narrowly avoided a major financial loss when a scammer posed as its owner, sending fraudulent wiring instructions to the company’s bank. Quick action by the business owner and a vigilant banker prevented the funds from being transferred.
Experts warn that BEC scams rely less on technical vulnerabilities and more on exploiting trust between businesses and their partners. Hackers often gain access through phishing attacks, installing malicious software, or guessing weak passwords. Once inside an email account, they may create hidden rules to intercept or forward messages, concealing their activities until it’s too late.
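The hidden-rule tactic described above can be audited programmatically. The sketch below is a simplified illustration, not code for any specific mail provider’s API: it assumes inbox rules have already been fetched and normalised into plain dictionaries (the `forward_to`, `delete`, and `subject_contains` fields are hypothetical names chosen for this example). It flags rules that forward mail outside the company domain or silently delete payment-related messages.

```python
def find_suspicious_rules(rules, company_domain):
    """Flag inbox rules typical of BEC account takeovers.

    A rule is suspicious if it auto-forwards mail to an external
    address, or deletes messages whose subject filter mentions
    payment-related terms (hiding the attacker's correspondence).
    """
    keywords = {"invoice", "wire", "payment", "bank"}
    flagged = []
    for rule in rules:
        external_forwards = [
            addr for addr in rule.get("forward_to", [])
            if not addr.lower().endswith("@" + company_domain)
        ]
        deletes_payment_mail = rule.get("delete", False) and (
            keywords & {w.lower() for w in rule.get("subject_contains", [])}
        )
        if external_forwards or deletes_payment_mail:
            flagged.append(rule["name"])
    return flagged
```

In practice, the rules would come from the mail provider’s admin API, and a review like this would run periodically across all mailboxes rather than once.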
To counter these threats, cybersecurity professionals recommend measures such as enabling two-factor authentication, regularly updating passwords, and monitoring email account activity for unusual changes. Businesses are also advised to verify financial transactions using secondary methods, such as phone calls, to confirm the legitimacy of requests.
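Of the measures above, two-factor authentication is the most mechanical, and the time-based one-time-password scheme most authenticator apps use is small enough to sketch. The following is a minimal RFC 6238 TOTP implementation using only the Python standard library; real deployments should rely on an audited library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    The Unix time is divided into 30-second steps; the step counter is
    HMAC'd with the shared secret and dynamically truncated to a short
    numeric code, as in RFC 4226/6238.
    """
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code depends on the current time step, an attacker who steals a password alone cannot reuse a captured code for more than about 30 seconds.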
With global losses from BEC scams amounting to billions, the stakes are high. By taking proactive steps to enhance security, businesses can protect themselves from falling victim to these sophisticated schemes.
Dragos and Singapore’s Digital and Intelligence Service (DIS) are collaborating to enhance cybersecurity capabilities through a strategic partnership focusing on planning, training, and exchanging information about cyber threats. The agreement, announced during the Critical Infrastructure Defence Exercise (CIDeX) 2024, aims to fortify the defence of Singapore’s critical infrastructure and increase its resilience to cyber attacks.
The partnership builds on Dragos’s long-standing collaboration with Singapore, including a previous agreement in August 2023 with the Cyber Security Agency (CSA) to improve operational technology (OT) cybersecurity. DIS emphasised the importance of expanding cybersecurity partnerships across sectors, while Dragos commended Singapore’s proactive approach to cybersecurity as an example for other nations to follow.
The partnership underscores both parties’ shared commitment to securing critical infrastructure amid an evolving cyber threat landscape. By pooling their expertise, Dragos and DIS aim to give Singapore the tools and knowledge needed to navigate emerging challenges, ensuring the protection of its infrastructure and citizens.
Hong Kong is advancing its digital economy and smart city initiatives, striving to become a global leader in digital transformation. To support this vision, the Hong Kong Institute of Information Technology (HKIIT) and the Office of the Government Chief Information Officer (OGCIO) have partnered to enhance digital literacy, strengthen cybersecurity, and promote digital transformation in public and government sectors.
The collaboration focuses on specialised training programs covering emerging technologies, cybersecurity, and data analytics to equip public sector employees and industry professionals with critical skills. Practical exercises like real-world cybersecurity simulations aim to improve awareness and resilience against cyber threats. Additionally, data literacy training is prioritised to help public employees utilise data for decision-making and service improvement, aligning with Hong Kong’s goals of innovation and efficiency.
Beyond training, community events like competitions and seminars promote digital awareness, fostering a culture of innovation and collaboration. The initiative builds on prior efforts, such as the ‘Cyber Security Drill 2024’ and certification programs, while future plans aim to expand its reach across more government departments and organisations.
The Vocational Training Council (VTC), Hong Kong’s largest provider of vocational and professional education, plays a key role in these efforts by supporting the city’s innovation agenda and equipping individuals with the skills needed to succeed in a rapidly evolving digital landscape. Through partnerships like the one with OGCIO, VTC institutions such as HKIIT contribute to strengthening the city’s workforce and ensuring its readiness for the challenges of digital transformation.
British universities are increasingly distancing themselves from Elon Musk’s X platform, citing its role in spreading misinformation and inciting racial unrest. A Reuters survey found that several institutions have stopped posting or significantly reduced their activity, joining a broader exodus of academics and public bodies. Concerns over falling engagement, violent content, and the platform’s perceived toxicity have driven the shift.
The University of Cambridge has seen at least seven of its colleges stop posting, while Oxford’s Merton College has deleted its account entirely. Institutions such as the University of East Anglia and London Metropolitan University report dwindling engagement, while arts conservatoires like Trinity Laban and the Royal Northern College of Music are focusing their communication efforts elsewhere. Some universities, including Buckinghamshire New University, have publicly stated that X is no longer a suitable space for meaningful discussion.
The retreat from X follows similar moves by British police forces, reflecting growing unease among public institutions. Despite the trend, some universities continue to maintain a presence on the platform, though many are actively exploring alternatives. X did not respond to requests for comment on the issue.
Do Kwon, the founder of Terraform Labs, is facing a criminal trial in the US, currently anticipated for early 2026. Prosecutors are dealing with six terabytes of data, encrypted devices, and the need to translate messages from Korean to English, creating significant delays in evidence gathering. District Judge Paul Engelmayer described the extended schedule as unprecedented in his 15 years on the bench.
Kwon denies the nine charges against him, which include securities fraud and money laundering conspiracies related to the $60 billion collapse of the Terra/Luna ecosystem in 2022. The incident impacted over 1 million investors. In a separate civil fraud lawsuit, a New York jury ordered Terraform Labs to cease operations and pay $4.5 billion in fines.
Extradited from Montenegro after 22 months in custody, Kwon has financed his legal defence with $200 million. His lawyers have until next week to request an earlier trial date, with the next hearing scheduled for 6 March.
A hacker claims to have breached US location tracking company Gravy Analytics, leaking around 1.4 gigabytes of data. The allegation, shared on a Russian-language cybercriminal forum, included screenshots suggesting data had been stolen. Verifying the claim proved difficult, as Gravy’s website remained offline and the company did not respond to messages.
Cybersecurity experts reviewing the leaked data found the breach credible. Marley Smith from RedSense and John Hammond from Huntress both confirmed the data appeared legitimate, though the hacker’s identity remains unclear.
The FTC expressed concerns that such data could be misused for stalking, blackmail, and espionage but declined to comment on the breach. FTC Chair Lina Khan recently warned that targeted advertising practices leave sensitive data highly vulnerable.
A new app designed to help children aged seven to twelve manage anxiety through gaming is being launched in Lincolnshire, UK. The app, called Lumi Nova, combines cognitive behavioural therapy (CBT) techniques with personalised quests to gently expose children to their fears in a safe and interactive way.
The digital game has been created by BFB Labs, a social enterprise focused on digital therapy, in collaboration with children, parents, and mental health experts. The app aims to make mental health support more accessible, particularly in rural areas, where traditional services may be harder to reach.
Families in Lincolnshire can download the app for free without needing a prescription or referral. Councillor Patricia Bradwell from Lincolnshire County Council highlighted the importance of flexible mental health services, saying: ‘We want to ensure children and young people have easy access to support that suits their needs.’
By using immersive videos and creative tasks, Lumi Nova allows children to confront their worries at their own pace from the comfort of home, making mental health care more engaging and approachable. The year-long pilot aims to assess the app’s impact on childhood anxiety in the region.
US safety regulators are investigating Tesla’s ‘Actually Smart Summon’ feature, which allows drivers to move their cars remotely without being inside the vehicle. The probe follows reports of crashes involving the technology, including at least four confirmed incidents.
The US National Highway Traffic Safety Administration (NHTSA) is examining nearly 2.6 million Tesla cars equipped with the feature since 2016. The agency noted issues with the cars failing to detect obstacles, such as posts and parked vehicles, while using the technology.
Tesla has not commented on the investigation. Company founder Elon Musk has been a vocal supporter of self-driving innovations, insisting they are safer than human drivers. However, this probe, along with other ongoing investigations into Tesla’s autopilot features, could result in recalls and increased scrutiny of the firm’s driverless systems.
The NHTSA will assess how fast cars can move in Smart Summon mode and what safeguards prevent its use on public roads. Tesla’s manual advises drivers to operate the feature only in private areas with a clear line of sight, but concerns remain over how safely it performs in real-world conditions.
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.