EU Court orders damages for data breach by Commission

In a landmark decision, the EU General Court ruled on Wednesday that the European Commission must pay €400 ($412) in damages to a German citizen for violating data protection laws. The case marks the first time the Commission has been held liable for failing to comply with the EU’s own data protection rules.

The court found that the Commission improperly transferred the citizen’s personal data, including an IP address, to Meta Platforms in the United States without adequate safeguards. The breach occurred when the individual used the ‘Sign in with Facebook’ option on the EU login webpage to register for a conference.

The Commission acknowledged the ruling, stating it would review the judgment and its implications. The decision underscores the EU’s robust enforcement of its data protection rules, including the General Data Protection Regulation (GDPR), under which major firms like Meta, LinkedIn, and Klarna have faced significant penalties for non-compliance.

Apple denies misusing Siri data following $95 million settlement

Apple has clarified that it has never sold data collected by its Siri voice assistant or used it to create marketing profiles. The statement, issued Wednesday, follows a $95 million settlement last week to resolve a class action lawsuit alleging that Siri had inadvertently recorded private conversations and shared them with third parties, including advertisers. Apple denied the claims and admitted no wrongdoing as part of the settlement, which could result in payouts of up to $20 per Siri-enabled device for millions of affected customers.

The controversy stemmed from claims that Siri sometimes activated unintentionally, recording sensitive interactions. Apple emphasised in its statement that Siri data is used minimally and only for real-time server input when necessary, with no retention of audio recordings unless users explicitly opt in. Even in such cases, the recordings are used solely to improve Siri’s functionality. Apple reaffirmed its commitment to privacy, stating, ‘Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone.’

This case has drawn attention alongside a similar lawsuit targeting Google Assistant, currently pending in federal court in San Jose, California. Both lawsuits are spearheaded by the same legal teams, highlighting growing scrutiny over how tech companies handle voice assistant data.

Brazil warns tech firms to follow laws or face expulsion

Brazilian Supreme Court Judge Alexandre de Moraes reiterated on Wednesday that technology companies must comply with national laws to continue operating in the country. His statement followed Meta’s recent announcement that it would scale back its US fact-checking program, raising concerns about the impact on Brazil.

Speaking at an event marking the anniversary of anti-institution riots, Moraes emphasised that the court would not tolerate the use of hate speech for profit. Last year, he ordered the suspension of social media platform X for over a month due to its failure to moderate hate speech, a decision later upheld by the court. X owner Elon Musk criticised the move as censorship but ultimately complied with court demands to restore the platform’s services in Brazil.

Brazilian prosecutors have also asked Meta to clarify whether its US fact-checking changes will apply in Brazil, citing an ongoing investigation into social media platforms’ efforts to combat misinformation and violence. Meta has been given 30 days to respond but declined to comment through its local office.

Gaming app offers mental health support for kids

A new app designed to help children aged seven to twelve manage anxiety through gaming is being launched in Lincolnshire, UK. The app, called Lumi Nova, combines cognitive behavioural therapy (CBT) techniques with personalised quests to gently expose children to their fears in a safe and interactive way.

The digital game has been created by BFB Labs, a social enterprise focused on digital therapy, in collaboration with children, parents, and mental health experts. The app aims to make mental health support more accessible, particularly in rural areas, where traditional services may be harder to reach.

Families in Lincolnshire can download the app for free without needing a prescription or referral. Councillor Patricia Bradwell from Lincolnshire County Council highlighted the importance of flexible mental health services, saying: ‘We want to ensure children and young people have easy access to support that suits their needs.’

By using immersive videos and creative tasks, Lumi Nova allows children to confront their worries at their own pace from the comfort of home, making mental health care more engaging and approachable. The year-long pilot aims to assess the app’s impact on childhood anxiety in the region.

Tesla’s driverless tech under investigation

US safety regulators are investigating Tesla’s ‘Actually Smart Summon’ feature, which allows drivers to move their cars remotely without being inside the vehicle. The probe follows reports of crashes involving the technology, including at least four confirmed incidents.

The US National Highway Traffic Safety Administration (NHTSA) is examining nearly 2.6 million Tesla cars equipped with the feature since 2016. The agency noted issues with the cars failing to detect obstacles, such as posts and parked vehicles, while using the technology.

Tesla has not commented on the investigation. Tesla chief executive Elon Musk has been a vocal supporter of self-driving innovations, insisting they are safer than human drivers. However, this probe, along with other ongoing investigations into Tesla’s autopilot features, could result in recalls and increased scrutiny of the firm’s driverless systems.

The NHTSA will assess how fast cars can move in Smart Summon mode and the safeguards in place to prevent use on public roads. Tesla’s manual advises drivers to operate the feature only in private areas with a clear line of sight, but concerns remain over its real-world safety.

FBI warns of AI-driven fraud

The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.

Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.

To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.

Faculty AI develops AI for military drones

Faculty AI, a consultancy with significant experience in AI, has been developing AI technologies for both civilian and military applications. Known for working closely with the UK government on AI safety, as well as with the NHS and the education sector, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the safety implications of advanced AI.

While Faculty has worked extensively with AI in non-lethal areas, its work on military applications raises concerns because of the potential for autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual roles in advising the government on AI safety while also working with defence clients.

The company has also generated some controversy because of its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest due to Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.

Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

White House introduces Cyber Trust Mark for smart devices

The White House unveiled a new label, the Cyber Trust Mark, for internet-connected devices like smart thermostats, baby monitors, and app-controlled lights. The shield logo aims to help consumers evaluate the cybersecurity of these products, much as Energy Star labels indicate energy efficiency in appliances. Devices displaying the Cyber Trust Mark will have met cybersecurity standards set by the US National Institute of Standards and Technology (NIST).

As more household items, from fitness trackers to smart ovens, become internet-connected, they offer convenience but also present new digital security risks. Anne Neuberger, US Deputy National Security Advisor for Cyber, explained that each connected device could potentially be targeted by cyber attackers. While the label is voluntary, officials hope consumers will prioritise security and demand the Cyber Trust Mark when making purchases.

The initiative will begin with consumer devices like cameras, with plans to expand to routers and smart meters. Products bearing the Cyber Trust Mark are expected to appear on store shelves later this year. Additionally, the Biden administration plans to issue an executive order by the end of the president’s term, requiring the US government to only purchase products with the label starting in 2027. The program has garnered bipartisan support, officials said.

UN’s ICAO targeted in alleged cyberattack

The International Civil Aviation Organization (ICAO) is investigating a potential cybersecurity breach following claims that a hacker accessed thousands of its documents. The United Nations agency, which sets global aviation standards, confirmed it is reviewing reports of an incident allegedly linked to a known cybercriminal group.

A post on a popular hacking forum dated 5 January suggested that 42,000 ICAO documents had been compromised, including sensitive personal data. Samples of the leaked information reportedly contain names, dates of birth, home addresses, email contacts, phone numbers, and employment details, with some records appearing to belong to ICAO staff.

ICAO has not confirmed whether the alleged breach is genuine or the extent of any possible data exposure. In response to media inquiries, the agency declined to provide further details beyond its official statement acknowledging the ongoing investigation.