Apple explores expressive robotics

Apple is heading into consumer robotics, unveiling research that highlights the importance of expressive movements in human-robot interaction. Drawing inspiration from Pixar’s Luxo Jr., the company’s study explores how non-humanlike objects, such as a lamp, can be designed to convey intention and emotion through motion.

A video accompanying the research showcases a prototype lamp robot, which mimics Pixar’s iconic animated mascot. The study suggests that even small movements, such as turning towards a window before answering a weather query, can create a stronger connection between humans and machines. The lamp, operating with Siri’s voice, behaves as a more dynamic alternative to smart speakers like Apple’s HomePod or Amazon’s Echo.

This research comes amid speculation that Apple is working on a more advanced smart home hub, possibly incorporating robotic features. While details remain scarce, rumours suggest a device resembling a robotic arm with an integrated screen. Though Apple’s consumer robotics project is still in early stages, the findings hint at a future where expressive, intelligent robots become a part of everyday life.

French authorities scrutinise X’s algorithms for potential bias

French prosecutors have launched an investigation into X, formerly known as Twitter, over alleged algorithmic bias. The probe was initiated after a lawmaker raised concerns that biased algorithms on the platform may have distorted automated data processing. The Paris prosecutor’s office confirmed that cybercrime specialists are analysing the issue and conducting technical checks.

The investigation comes just days before a major AI summit in Paris, where global leaders and tech executives from companies like Microsoft and Alphabet will gather. X has not responded to requests for comment. The case highlights growing scrutiny of the platform, which has been criticised for its role in shaping political discourse. Elon Musk’s vocal support for right-wing parties in Europe has raised fears of foreign interference.

France’s J3 cybercrime unit, which is leading the investigation, has previously targeted major tech platforms, including Telegram. Last year, it played a key role in the arrest of Telegram’s founder and pressured the platform to remove illegal content. X has also faced legal challenges in other countries, including Brazil, where it was temporarily blocked for failing to curb misinformation.

EU lawmakers to negotiate next data protection supervisor

Lawmakers are set to negotiate with EU member states to determine the next European Data Protection Supervisor (EDPS), after the mandate of the current EDPS, Wojciech Wiewiórowski, expired in December. The decision on his successor is expected in March at the earliest, with the European Parliament and member states backing different candidates. The Parliament’s Civil Liberties, Justice and Home Affairs Committee (LIBE) voted to appoint Bruno Gencarelli, an Italian Commission official, while member states are supporting Wiewiórowski for another term.

The European Parliament’s group leaders have recently backed the LIBE decision, but a joint committee with the Council of the EU needs to be set up to finalise the appointment. The configuration of the committee is still under discussion. Meanwhile, privacy experts have expressed concern over Gencarelli’s candidacy, arguing that the next EDPS should not come from within the Commission due to potential conflicts of interest, citing past decisions such as the EDPS ruling against Microsoft 365’s use by the EU executive.

The EDPS role, while unable to fine Big Tech companies directly, is significant in shaping EU privacy law, as it publishes opinions on legislative proposals. The new appointee will play a crucial role in overseeing the data protection practices of EU institutions and ensuring that privacy rights are upheld.

Amazon removes diversity references as companies scale back DEI policies

Amazon has removed references to ‘inclusion and diversity’ from its latest annual report, signalling a shift away from diversity, equity and inclusion (DEI) initiatives. The change follows an internal memo from December, in which Amazon announced it was winding down certain DEI programmes by the end of 2024. Instead of maintaining separate initiatives, the company plans to integrate DEI efforts into broader corporate processes.

Tech giants such as Meta and Google have also been scaling back diversity programmes, facing pressure from conservative groups threatening legal action. Disney has similarly adjusted its DEI approach, removing mentions of its ‘Reimagine Tomorrow’ programme while introducing an initiative to hire US military veterans. The trend reflects a broader corporate retreat from diversity-focused policies that gained traction after the 2020 protests against racial injustice.

Political opposition to DEI has grown, with President Donald Trump’s administration vowing to eliminate diversity policies in the private sector. In response, attorneys general from twelve US states, including New York and California, have reaffirmed their commitment to enforcing civil rights protections against workplace discrimination. The debate over DEI’s future remains contentious as businesses and lawmakers continue to clash over its role in corporate America.

UK gambling websites breach data protection laws

Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, without proper consent, breaching data protection laws. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which then uses this information to profile individuals as gamblers. This data is then used to target users with gambling-related ads, violating the legal requirement for explicit consent before sharing such information.

Testing of 150 gambling websites revealed that 52 automatically transmitted user data to Meta, including large brands like Hollywoodbets, Sporting Index, and Bet442. This data sharing occurred without users having the opportunity to consent, resulting in targeted ads for gambling websites shortly after visiting these sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.
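An audit like the one described above can be approximated with a simple heuristic check of each site’s HTML. The sketch below is illustrative only and is not how the testers necessarily worked: it assumes the well-known markers of the Meta Pixel, namely the loader script served from connect.facebook.net and the `fbq(...)` initialisation call, and the function name `page_contains_meta_pixel` is hypothetical.

```python
import re

def page_contains_meta_pixel(html: str) -> bool:
    """Heuristically detect a Meta Pixel in a page's HTML.

    The pixel is typically loaded from connect.facebook.net and
    initialised with an fbq('init', ...) call; either is treated
    as a positive signal here.
    """
    return bool(re.search(r"connect\.facebook\.net", html)) or "fbq(" in html

# A page embedding the pixel loader script is flagged:
print(page_contains_meta_pixel(
    '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
))  # True
print(page_contains_meta_pixel("<p>Welcome to our casino</p>"))  # False
```

In practice an audit would also need to load pages with a real browser, since trackers are often injected by scripts at runtime rather than present in the static HTML.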

The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.

The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.

RBI to introduce secure domain names to combat digital payment fraud

India’s central bank has raised concerns over the increasing fraud in digital payments and announced new measures to improve security. Reserve Bank of India (RBI) Governor Sanjay Malhotra warned that cyber fraud and data breaches are becoming more frequent as banks and consumers adopt new technology. To counter this, the RBI will introduce exclusive website domain names to reduce the risk of deceptive online practices.

Fraudsters often use misleading domain names to trick users into revealing sensitive information or making fraudulent transactions. To enhance online security and credibility, the RBI will launch dedicated domains for financial institutions. Banks will use ‘bank.in’, while non-bank financial entities will operate under ‘fin.in’. These exclusive domains will provide a unique digital identity, making it easier for users to recognise legitimate platforms.
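The value of such exclusive suffixes is that legitimacy becomes mechanically checkable. As a minimal sketch, assuming only the ‘bank.in’ and ‘fin.in’ suffixes named above (the function name and example URLs are hypothetical), client software or a browser extension could verify a link like this:

```python
from urllib.parse import urlparse

def is_rbi_reserved_domain(url: str) -> bool:
    """Check whether a URL's host falls under the RBI's exclusive
    financial domains: 'bank.in' for banks, 'fin.in' for non-bank
    financial entities."""
    host = urlparse(url).hostname or ""
    return (host == "bank.in" or host.endswith(".bank.in")
            or host == "fin.in" or host.endswith(".fin.in"))

print(is_rbi_reserved_domain("https://example.bank.in/login"))   # True
# A lookalike such as 'example-bank.in' does not sit under the
# reserved suffix, so it is rejected:
print(is_rbi_reserved_domain("https://example-bank.in/login"))   # False
```

The suffix check alone is not a full defence; users would still need browsers and messaging apps to surface the hostname honestly, but it removes the guesswork of deciding whether an unfamiliar financial domain is genuine.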

The Institute for Development and Research in Banking Technology (IDRBT) will oversee the registration process for these domains, with actual registrations set to begin in April 2025. The initiative is part of the RBI’s broader effort to strengthen cybersecurity and protect consumers in the rapidly growing digital payments sector.

China looks to build consensus on AI at Global Summit

Chinese Vice Premier Zhang Guoqing will visit France from Sunday until February 12 to attend the AI Action Summit as a special representative of President Xi Jinping. The summit will bring together representatives from nearly 100 countries to discuss the safe development of AI.

A foreign ministry spokesperson, Lin Jian, said China is eager to strengthen communication and collaboration with other nations at the event. China also aims to foster consensus on AI cooperation and contribute to the implementation of the United Nations Global Digital Compact.

Vice President JD Vance is leading the US delegation to the summit, but reports suggest that the US team will not include technical staff from the AI Safety Institute.

ByteDance unveils AI that creates uncannily realistic deepfakes

ByteDance, the company behind TikTok, has introduced OmniHuman-1, an advanced AI system capable of generating highly realistic deepfake videos from just a single image and an audio clip. Unlike previous deepfake technology, which often displayed telltale glitches, OmniHuman-1 produces remarkably smooth and lifelike footage. The AI can also manipulate body movements, allowing for extensive editing of existing videos.

Trained on 19,000 hours of video content from undisclosed sources, the system’s potential applications range from entertainment to more troubling uses, such as misinformation. The rise of deepfake content has already led to cases of political and financial deception worldwide, from election interference to multimillion-dollar fraud schemes. Experts warn that the technology’s increasing sophistication makes it harder to detect AI-generated fakes.

Despite calls for regulation, deepfake laws remain limited. While some governments have introduced measures to combat AI-generated disinformation, enforcement remains a challenge. With deepfake content spreading at an alarming rate, many fear that systems like OmniHuman-1 could further blur the line between reality and fabrication.

India bans use of AI tools in government offices

India’s finance ministry has issued an advisory urging employees to refrain from using AI tools like ChatGPT and DeepSeek for official tasks, citing concerns over the potential risks to the confidentiality of government data. The directive, dated January 29, highlights the dangers of AI apps on office devices, warning that they could jeopardise the security of sensitive documents and information.

This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.

Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.

EU bans AI tracking of workers’ emotions and manipulative online tactics

The European Commission has unveiled new guidelines restricting how AI can be used in workplaces and online services. Employers will be prohibited from using AI to monitor workers’ emotions, while websites will be banned from using AI-driven techniques that manipulate users into spending money. These measures are part of the EU’s Artificial Intelligence Act, which takes full effect in 2026, though some rules, including the ban on certain practices, apply from February 2025.

The AI Act also prohibits social scoring based on unrelated personal data, AI-enabled exploitation of vulnerable users, and predictive policing based solely on biometric data. AI-powered facial recognition CCTV for law enforcement will be heavily restricted, except under strict conditions. The EU has given member states until August to designate authorities responsible for enforcing these rules, with breaches potentially leading to fines of up to 7% of a company’s global revenue.

Europe’s approach to AI regulation is significantly stricter than that of the United States, where compliance is voluntary, and contrasts with China’s model, which prioritises state control. The guidelines aim to provide clarity for businesses and enforcement agencies while ensuring AI is used ethically and responsibly across the region.