Tesla’s driverless tech under investigation

US safety regulators are investigating Tesla’s ‘Actually Smart Summon’ feature, which lets drivers move their cars remotely from outside the vehicle. The probe follows reports of crashes involving the technology, with at least four incidents confirmed.

The US National Highway Traffic Safety Administration (NHTSA) is examining nearly 2.6 million Tesla cars equipped with the feature since 2016. The agency noted issues with the cars failing to detect obstacles, such as posts and parked vehicles, while using the technology.

Tesla has not commented on the investigation. Chief executive Elon Musk has been a vocal supporter of self-driving technology, insisting it is safer than human drivers. However, this probe, along with other ongoing investigations into Tesla’s Autopilot features, could result in recalls and increased scrutiny of the firm’s driverless systems.

The NHTSA will assess how fast cars can travel in Actually Smart Summon mode and what safeguards are in place to prevent use on public roads. Tesla’s manual advises drivers to operate the feature only in private areas with a clear line of sight, but concerns remain over its real-world safety.

FBI warns of AI-driven fraud

The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.

Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone so far as to use deepfakes in real-time video calls to enhance their deception.

To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.

Faculty AI develops AI for military drones

Faculty AI, a consultancy with deep expertise in artificial intelligence, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the safety of advanced AI models.

While Faculty has worked extensively on AI in non-lethal areas, its military work raises concerns because of the potential for autonomous weapons systems, including armed drones. Faculty has not disclosed whether its AI work extends to lethal drones, and it continues to face scrutiny over its dual role of advising the government on AI safety while working with defence clients.

The company has also generated some controversy because of its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest due to Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.

Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

White House introduces Cyber Trust Mark for smart devices

The White House unveiled a new label, the Cyber Trust Mark, for internet-connected devices like smart thermostats, baby monitors, and app-controlled lights. This new shield logo aims to help consumers evaluate the cybersecurity of these products, similar to how Energy Star labels indicate energy efficiency in appliances. Devices that display the Cyber Trust Mark will have met cybersecurity standards set by the US National Institute of Standards and Technology (NIST).

As more household items, from fitness trackers to smart ovens, become internet-connected, they offer convenience but also present new digital security risks. Anne Neuberger, US Deputy National Security Advisor for Cyber, explained that each connected device could potentially be targeted by cyber attackers. While the label is voluntary, officials hope consumers will prioritise security and demand the Cyber Trust Mark when making purchases.

The initiative will begin with consumer devices like cameras, with plans to expand to routers and smart meters. Products bearing the Cyber Trust Mark are expected to appear on store shelves later this year. Additionally, the Biden administration plans to issue an executive order by the end of the president’s term requiring the US government to purchase only products bearing the label, starting in 2027. The program has garnered bipartisan support, officials said.

UN’s ICAO targeted in alleged cyberattack

The International Civil Aviation Organization (ICAO) is investigating a potential cybersecurity breach following claims that a hacker accessed thousands of its documents. The United Nations agency, which sets global aviation standards, confirmed it is reviewing reports of an incident allegedly linked to a known cybercriminal group.

A post on a popular hacking forum dated 5 January suggested that 42,000 ICAO documents had been compromised, including sensitive personal data. Samples of the leaked information reportedly contain names, dates of birth, home addresses, email contacts, phone numbers, and employment details, with some records appearing to belong to ICAO staff.

ICAO has not confirmed whether the alleged breach is genuine or the extent of any possible data exposure. In response to media inquiries, the agency declined to provide further details beyond its official statement acknowledging the ongoing investigation.

Taiwan sees sharp rise in cyberattacks linked to China

Cyberattacks on Taiwan’s government departments doubled in 2024, reaching an average of 2.4 million attacks per day, according to the island’s National Security Bureau. Most of the attacks were attributed to Chinese cyber forces, with key targets including telecommunications, transportation, and defence. The report highlighted the increasing severity of China’s hacking activities, noting that many of the attacks were timed to coincide with Chinese military drills around Taiwan.

Taiwan has long accused Beijing of using cyberwarfare as part of broader ‘grey-zone harassment’ efforts, which also include military exercises and surveillance balloons. The latest report detailed how China’s cyber forces employed advanced hacking techniques, such as distributed denial-of-service (DDoS) attacks and social engineering, in an attempt to steal confidential government data. These attacks were aimed at disrupting Taiwan’s infrastructure, including highways and ports, and gaining strategic advantages in politics, military affairs, and technology.

China has not responded to the allegations, though it routinely denies involvement in hacking operations. However, Taiwan’s findings come amid growing international concerns over Chinese cyber activities, with the United States recently accusing Chinese hackers of stealing sensitive documents from the US Treasury Department. Taiwan’s government has warned that Beijing’s cyber threats are intensifying and pose a growing risk to national security.

Chinese hackers breach multiple US telecom firms

Recent reports reveal that Chinese hackers have compromised a broader range of US telecommunications companies than previously known. In addition to earlier breaches involving AT&T and Verizon, the cyberattacks have now been found to affect Charter Communications, Consolidated Communications, Windstream, Lumen Technologies, and T-Mobile. The hacking group, identified as Salt Typhoon and linked to Chinese intelligence, exploited vulnerabilities in network devices from vendors such as Fortinet and Cisco Systems.

The Wall Street Journal reports that US National Security Adviser Jake Sullivan informed telecommunications and technology executives in a confidential meeting in late 2023 that these hackers had developed the capability to disrupt critical US infrastructure, including ports and power grids. While companies like AT&T and Verizon have stated that their networks are now secure and that they are collaborating with law enforcement, concerns persist about the extent and impact of these breaches.

China has denied involvement in these cyber activities, accusing the United States of disseminating disinformation. Nonetheless, the revelations have intensified discussions about national security and the resilience of US critical infrastructure against sophisticated cyber threats. The situation underscores the ongoing challenges in safeguarding sensitive communications and infrastructure from state-sponsored cyber espionage.

OpenAI confident in AGI but faces safety concerns

OpenAI CEO Sam Altman has stated that the company believes it knows how to build AGI and is now turning its focus towards developing superintelligence. He argues that advanced AI could significantly boost scientific discovery and economic growth. While AGI is often defined as AI that outperforms humans in most tasks, OpenAI and Microsoft also use a financial benchmark—$100 billion in profits—as a key measure.

Despite Altman’s optimism, today’s AI systems still struggle with accuracy and reliability. OpenAI has previously acknowledged that transitioning to a world with superintelligence is far from certain, and controlling such systems remains an unsolved challenge. The company has, however, recently disbanded key safety teams, leading to concerns about its priorities as it seeks further investment.

Altman remains confident that AI will soon make a significant impact on businesses, suggesting that AI agents could enter the workforce and reshape industries in the near future. He insists that OpenAI continues to balance innovation with safety, despite growing scepticism from former staff and industry critics.

Windows 10 users face security risks as support ends

Security concerns are mounting as Windows 10 sees a rise in market share while Windows 11 adoption declines. Microsoft will officially end free security updates and support for Windows 10 on 14 October 2025, leaving millions of users vulnerable unless they upgrade or pay for extended security updates.

Experts warn that continuing to use Windows 10 beyond its support period poses risks of cyberattacks, data breaches, and ransomware. Microsoft strongly recommends switching to Windows 11, which is designed to meet modern security demands, or choosing an alternative operating system.

Cybersecurity professionals urge users not to delay, with ESET’s Thorsten Urbanski stressing the urgency of upgrading before the deadline to avoid a security crisis. The window for transition is closing quickly, making early action essential for those still relying on Windows 10.