Samsung unveils AI smart glasses with Google and Qualcomm

Samsung has teamed up with Google and Qualcomm to develop AI-powered smart glasses, slated for release in Q3 2025. An initial production run of 500,000 units is planned, targeting a competitive edge over rivals such as Meta’s Ray-Ban smart glasses. Equipped with AI and augmented reality (AR) technologies, the glasses promise enhanced interactivity and richer user experiences.

The device pairs Qualcomm’s AR1 chip with an auxiliary NXP processor for additional computing power. A 12MP Sony IMX681 camera handles high-resolution photo and video capture. Weighing just 50 grams, the glasses offer gesture and human recognition, QR-code payments, and extended use powered by a 155mAh battery.

Google’s Gemini large language model will integrate into the software, delivering smarter user interactions and contextual understanding. Samsung disclosed the development during its earnings report, with analysts expecting a possible showcase at the January Galaxy Unpacked event, alongside the Galaxy S25.

Market excitement grows as Samsung enters the competitive smart glasses arena, setting a high standard for innovation and functionality. Observers anticipate a significant shift in wearable technology driven by AI and AR advancements.

Denmark faces backlash over AI welfare surveillance

Concerns are mounting over Denmark’s use of AI in welfare fraud detection, with Amnesty International condemning the system for violating privacy and risking discrimination. Algorithms developed by Udbetaling Danmark (UDK) and ATP flag individuals suspected of benefit fraud, potentially breaching EU laws. Amnesty argues these tools classify citizens unfairly, resembling prohibited social scoring practices.

The AI models process extensive personal data, including residency, citizenship, and sensitive information that may act as proxies for ethnicity or migration status. Critics highlight the disproportionate targeting of marginalised groups, such as migrants and low-income individuals. Amnesty accuses the algorithms of fostering systemic discrimination while exacerbating existing inequalities within Denmark’s social structure.

Experts warn that the system undermines trust, with many recipients reporting stress and depression linked to invasive investigations. Specific algorithms like ‘Really Single’ scrutinise family dynamics and living arrangements, often without clear criteria, leading to arbitrary decisions. Amnesty’s findings suggest these practices compromise human dignity and fail to uphold transparency.

Amnesty is urging Danish authorities to halt the system’s use and calling on the EU to clarify its AI regulations. The organisation emphasises the need for oversight and for bans on discriminatory data use. Danish authorities dispute Amnesty’s findings but have yet to offer transparency on their algorithmic processes.

Hong Kong and Shenzhen boost data exchange with strategic HKPC-SDEC partnership

The Hong Kong Productivity Council (HKPC) and the Shenzhen Data Exchange Centre (SDEC) have partnered to foster data exchange and collaboration between Hong Kong and Shenzhen. The partnership aims to promote data interconnection between the two cities, develop data element markets, and support small and medium-sized enterprises (SMEs) in using data for business growth and digital transformation.

The organisations will also focus on building a data ecosystem that encourages innovation and collaboration around data-driven solutions, and plan to explore initiatives to advance the digital economy in both regions, creating new opportunities and sharpening their competitive edge. The collaboration will involve stakeholders such as government agencies, data service providers, traders, and SMEs, with HKPC and SDEC combining their expertise to drive these efforts forward.

In addition, HKPC and SDEC will organise seminars and briefings to engage stakeholders and share knowledge on leveraging data for growth. These sessions will offer insights into how businesses can capitalise on the growing digital economy and strengthen their data-driven capabilities. Both organisations are committed to advancing regional cooperation in data exchange and innovation, contributing to a stronger, more connected digital ecosystem.

Google launches Imagen 3 and Gemini on iPhones

Google has rolled out Imagen 3, its advanced text-to-image generation model, directly within Google Docs. The tool allows users to create realistic or stylised images by simply typing prompts. Workspace customers with specific Gemini add-ons will be the first to access the feature, which is gradually being made available. The addition aims to help users enhance communication by generating customised images without tedious searches.

Imagen 3 initially faced setbacks due to historical inaccuracies in generated images, causing Google to delay its release. Following improvements, the feature launched quietly earlier this year and is now integrated into the Gemini platform. The company emphasises the tool’s ability to streamline creativity and simplify the visual content creation process.

Google has also introduced its Gemini app for iPhone users, following its February release on Android. The app boasts advanced features like Gemini Live in multiple languages and seamless integration of popular Google services such as Gmail, Calendar, and YouTube. Users can also access the powerful Imagen 3 tool within the app.

The Gemini app is designed as an AI-powered personal assistant, bringing innovation and convenience to mobile users globally. Google’s Brian Marquardt highlights the app’s capability to transform everyday tasks, offering users an intuitive and versatile digital companion.

EU Human Rights Commissioner focuses on Ukraine and AI

Michael O’Flaherty, the Council of Europe’s new Commissioner for Human Rights, warned that failing to defend Ukraine would be an ‘existential loss’ for Europe. Speaking at the Web Summit in Lisbon, O’Flaherty emphasised the critical need for Europe to stand firm in supporting Ukraine amid growing authoritarianism and human rights abuses. He also highlighted the risks posed by emerging technologies, particularly AI, and stressed the importance of human rights safeguards in tech regulation.

O’Flaherty, in his first year as commissioner, underscored the enormous potential of AI to improve lives but also warned of its dangers, such as discrimination and misuse in warfare. He called for stronger regulations to ensure AI advancements align with human rights commitments. His focus on Ukraine comes at a time when the country’s challenges and human rights violations continue to dominate global discussions, with high-profile figures like Yulia Navalnaya and Olena Zelenska also speaking out on human rights issues at the summit.

As technology continues to evolve rapidly, O’Flaherty stressed the need for better communication between the tech sector and human rights advocates, aiming to create a more unified approach to solving global challenges. He also advocated for holding perpetrators of atrocities, like those in Ukraine, criminally accountable, reinforcing the preventive role of justice.

Turkey sanctions Twitch for user data breach

Turkey’s Personal Data Protection Board (KVKK) has fined Amazon’s gaming platform Twitch 2 million lira ($58,000) following a significant data breach, the Anadolu Agency reported. The breach, involving a leak of 125 GB of data, affected 35,274 individuals in Türkiye.

KVKK’s investigation revealed that Twitch failed to implement adequate security measures before the breach and conducted insufficient risk and threat assessments. The platform only addressed vulnerabilities after the incident occurred. As a result, KVKK imposed a 1.75 million lira fine for inadequate security protocols and an additional 250,000 lira for failing to report the breach promptly.

This penalty underscores the increasing scrutiny and regulatory actions against companies handling personal data in Türkiye, highlighting the importance of robust cybersecurity measures to protect user information.

T-Mobile targeted in Chinese cyber-espionage campaign

T-Mobile’s network was among those breached in a prolonged cyber-espionage campaign attributed to Chinese intelligence-linked hackers, according to a Wall Street Journal report. The attackers allegedly targeted multiple US and international telecom companies to monitor cellphone communications of high-value intelligence targets. T-Mobile confirmed it was aware of the industry-wide attack but stated there was no significant impact on its systems or evidence of customer data being compromised.

The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) recently disclosed that China-linked hackers intercepted surveillance data intended for American law enforcement by infiltrating telecom networks. Earlier reports revealed breaches into US broadband providers, including Verizon, AT&T, and Lumen Technologies, where hackers accessed systems used for court-authorised wiretapping.

China has consistently denied allegations of engaging in cyber espionage, rejecting claims by the US and its allies that it orchestrates such operations. The latest revelations highlight persistent vulnerabilities in critical communication networks targeted by state-backed hackers.

OpenAI leads shift in model development

Leading AI companies are rethinking their approach to large language models as scaling existing methods faces diminishing returns. OpenAI’s latest model, o1, represents a pivotal shift towards human-like problem-solving techniques.

The traditional focus on larger datasets and increased computing power is being reconsidered. Key figures, including OpenAI co-founder Ilya Sutskever, highlight the plateauing benefits of scaling and call for more innovative methods. Power shortages, data scarcity, and high costs have also hindered efforts to build models more capable than GPT-4.

New approaches like ‘test-time compute’ are gaining traction, enabling AI systems to evaluate multiple solutions before choosing the most suitable one. This advancement enhances model performance without requiring massive increases in computational resources. OpenAI, Google DeepMind, and others are rapidly adopting these techniques, marking a shift in the competitive AI landscape.

These advancements could significantly alter demand in the hardware market, challenging Nvidia’s dominance in AI chips. As AI evolves, companies are competing not only to improve models but also to redefine the tools and techniques shaping the future of artificial intelligence.

Amnesty International raises alarm over AI-driven discrimination in Danish welfare system

Amnesty International has raised significant concerns about the Danish welfare authority, Udbetaling Danmark (UDK), and its partner, Arbejdsmarkedets Tillægspension (ATP), using AI tools in fraud detection for social benefits.

The organisation warns that these AI systems may disproportionately discriminate against vulnerable groups, including individuals with disabilities, low-income persons, migrants, refugees, and marginalised racial communities. This is detailed in Amnesty’s report, ‘Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State,’ which criticises the risk of entrenching social inequalities instead of supporting at-risk populations.

The report condemns what it describes as mass surveillance practices, highlighting the erosion of privacy due to the extensive collection of sensitive data such as residency, citizenship, and family relationships. Amnesty argues that such practices not only compromise individual dignity but also facilitate algorithmic discrimination, particularly through systems like the ‘Really Single’ and ‘Model Abroad’ algorithms. These tools may unfairly target atypical family setups or those with foreign affiliations, further marginalising already vulnerable communities. The psychological impact is severe, with individuals describing the stress of ongoing investigations as living ‘at the end of a gun’, exacerbating mental distress, particularly among people with disabilities.

Why does it matter?

The report points to issues of transparency and accountability, critiquing UDK and ATP for resisting full disclosure of their AI systems and for dismissing, without robust justification, claims that those systems amount to social scoring. It also links these practices to potential violations of international, EU, and Danish commitments to privacy and non-discrimination. Amnesty has called for an immediate halt to the use of these algorithms and a ban on ‘foreign affiliation’ data in risk assessments, and urged the European Commission to clarify which AI practices qualify as social scoring, ensuring that human rights are safeguarded amid technological advancements.

FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.