Ireland takes legal action against X over data privacy

The Irish Data Protection Commission (DPC) has launched legal action against the social media platform X, formerly Twitter, in a case centred on the processing of user data to train Grok, an AI large language model. The chatbot was developed by xAI, a company founded by Elon Musk, and is used as a search assistant by premium users on the platform.

The DPC is seeking a court order to stop or limit the processing of user data by X for training its AI systems, expressing concerns that this could violate the European Union’s General Data Protection Regulation (GDPR). The case may be referred to the European Data Protection Board for further review.

The legal dispute is part of a broader conflict between Big Tech companies and regulators over using personal data to develop AI technologies. Consumer organisations have accused X of breaching GDPR, a claim the company has vehemently denied, calling the DPC’s actions unwarranted and overly broad.

The Irish DPC plays a key role in overseeing X’s compliance with EU data protection laws, since the platform’s EU operations are managed from Dublin. The current legal proceedings could significantly shift how Ireland enforces GDPR against large tech firms.

The DPC is also concerned about X’s plans to launch a new version of Grok, which is reportedly being trained using data from the EU and European Economic Area users. The privacy watchdog argues that this could worsen existing issues with data processing.

X has implemented some mitigation measures, such as offering users an opt-out option, but these steps were not in place when the data processing began, prompting further scrutiny from the DPC. The company has resisted the DPC’s requests to halt the processing or delay the release of the new Grok version, leading to an ongoing court battle.

The outcome of this case could set a precedent for how AI and data protection issues are handled across Europe.

FTC sues TikTok over child privacy violations

The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.

According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.

FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.

The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be reviewed by the US District Court for the Central District of California.

AI chatbots impersonate OnlyFans creators

OnlyFans, a platform known for offering subscribers ‘authentic relationships’ with content creators, faces scrutiny over the use of AI chatbots impersonating performers. Some management agencies employ AI software to sext with subscribers, bypassing the need for human interaction. NEO Agency, for example, uses a chatbot called FlirtFlow to create what it claims are ‘genuine and meaningful’ connections, although OnlyFans’ terms of service prohibit such use of AI.

Despite these rules, chatbots are prevalent. NEO Agency manages about 70 creators, with half using FlirtFlow. The AI engages subscribers in small talk to gather personal information, aiming to extract more money. While effective for high-traffic accounts, human chatters are still preferred for more personalised interactions, especially in niche erotic categories.

Similarly, Australian company Botly offers software that generates responses for OnlyFans messages, which a human can then send. Botly claims its technology is used in over 100,000 chats per month. Such practices raise concerns about transparency and authenticity on platforms that promise direct interactions with creators.

The issue coincides with broader discussions on online safety. The UK recently amended its Online Safety Bill to combat deepfakes and revenge porn, highlighting the rising threat of deceptive digital practices. Meanwhile, other platforms like X (formerly Twitter) have officially allowed adult content, increasing the complexity of managing online safety and authenticity.

Civil society and industry share concerns about the UN draft Cybercrime Convention

Civil society organisations and more than 150 tech companies within the Cybersecurity Tech Accord have urged the United Nations to revise the final draft of the UN Cybercrime Convention. These non-state stakeholders share concerns that the convention’s current language could lead to human rights abuses and criminalise the work of penetration testers, ethical hackers, security researchers, and journalists.

UN member states are currently in the final round of negotiations for what will become the first global treaty on cybercrime, with talks running from 29 July to 8 August. The current draft, published on 23 May, has seen some positive changes, but the Tech Accord, in particular, calls for further revisions. The Office of the UN High Commissioner for Human Rights also noted that the revised draft includes some welcome improvements; however, significant concerns remain about many provisions that fail to meet international human rights standards. The Electronic Frontier Foundation (EFF) added that the proposed convention mandates intrusive domestic surveillance measures and requires states to cooperate in surveillance and data sharing. It allows the collection, preservation, and sharing of electronic evidence for any crime deemed serious by a country’s domestic law, with minimal human rights safeguards, even with countries that have poor human rights records.

These shortcomings are particularly concerning given the already expansive use of existing cybercrime laws in some jurisdictions, which have been used to unduly restrict freedom of expression, target dissenting voices, and arbitrarily interfere with the privacy and anonymity of communications, according to the office’s analysis. A key concern of the Tech Accord is the lack of transparency in the convention’s current form, while the EFF calls for addressing the highly intrusive secret spying powers, which currently lack robust safeguards, and the insufficient protection for security researchers, among other concerns.

Microsoft reveals VALL-E 2 AI, achieving human-like speech

Microsoft has made a significant leap forward in AI speech generation with its VALL-E 2 text-to-speech (TTS) system. VALL-E 2 achieves human parity, meaning it can produce voices indistinguishable from real people. The system only needs a few seconds of audio to learn and mimic a speaker’s voice.

Tests on speech datasets like LibriSpeech and VCTK showed that VALL-E 2’s voice quality matches or even surpasses human quality. Features like ‘Repetition Aware Sampling’ and ‘Grouped Code Modeling’ allow the system to handle complex sentences and repetitive phrases naturally, ensuring smooth and realistic speech output.
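In broad terms, repetition-aware sampling is a decoding trick: the model draws the next codec token with nucleus (top-p) sampling, but falls back to sampling from the full distribution when that token has been repeating in the recent decoding history, which helps avoid the loops that plague long autoregressive speech generation. The sketch below is only an illustration of that general idea under assumed parameters; the function names, window size, and threshold are placeholders for this example, not Microsoft’s implementation.

```python
import numpy as np

def nucleus_sample(probs: np.ndarray, top_p: float = 0.9) -> int:
    """Standard nucleus (top-p) sampling over a probability vector."""
    order = np.argsort(probs)[::-1]                      # tokens sorted by probability
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    kept = order[:cutoff]                                # smallest set covering top_p mass
    return int(np.random.choice(kept, p=probs[kept] / probs[kept].sum()))

def repetition_aware_sample(probs, history, window=10, threshold=0.3, top_p=0.9):
    """Sample one token; if the nucleus-sampled token already dominates the
    recent decoding history, re-sample from the full distribution instead.
    window and threshold are illustrative values, not published settings."""
    probs = np.asarray(probs, dtype=float)
    token = nucleus_sample(probs, top_p)
    recent = list(history)[-window:]
    if recent and recent.count(token) / len(recent) > threshold:
        token = int(np.random.choice(len(probs), p=probs))  # fall back to random sampling
    return token
```

The point of the fallback is stability: restricting sampling to the top-p nucleus keeps output fluent, while the occasional draw from the full distribution breaks out of repetitive token loops on long or repetitive sentences.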

Despite releasing audio samples, Microsoft considers VALL-E 2 too risky for public release because of potential misuse, such as voice spoofing. This cautious approach aligns with wider industry concerns, as seen with OpenAI’s restrictions on its voice technology.

While VALL-E 2 represents a significant breakthrough, it remains a research project for now. The development of AI continues apace, with companies striving to balance innovation with ethical considerations.

GSMA announces global effort to improve smartphone access

The GSMA has announced the formation of a global coalition to make smartphones more accessible and affordable for some of the world’s poorest populations. The coalition will include mobile operators, vendors, and major institutions such as the World Bank Group, the UN’s International Telecommunication Union (ITU), and the WEF Edison Alliance.

The group aims to reduce the barriers to entering the digital economy for low-income populations, particularly in Sub-Saharan Africa and South Asia. The GSMA highlighted that handset affordability is the most significant obstacle preventing people from going online.

In many low- and middle-income countries, mobile phones are often the only means of accessing the internet. Currently, 38% of the global population cannot use mobile internet due to high costs and a lack of skills. The coalition will work to improve access to affordable internet-enabled devices, aiming to close the ‘usage gap’ that keeps around three billion people from fully participating in the global digital economy.

Healthcare experts demand transparency in AI use

Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.

The survey, involving 3,000 participants across 123 countries, indicates that 87% of respondents think AI will enhance overall work quality, and 85% believe it will free up time for higher-value projects. Despite these positive outlooks, there are significant concerns about AI’s potential misuse. Specifically, 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions like unemployment.

To address these issues, 81% of researchers and clinicians expect to be informed if the tools they use depend on generative AI. Moreover, 71% want assurance that AI-dependent tools are based on high-quality, trusted data sources. Transparency in peer-review processes is also crucial, with 78% of researchers and 80% of clinicians expecting to know if AI influences manuscript recommendations. These insights underscore the importance of transparency and trust in the adoption of AI in healthcare.

French study uncovers Russian disinformation tactics amid legislative campaign

Russian disinformation campaigns are targeting social media to destabilise France’s political scene during its legislative campaign, according to a study by the French National Centre for Scientific Research (CNRS). The study highlights Kremlin strategies such as normalising far-right ideologies and weakening the ‘Republican front’ that opposes the far-right Rassemblement National (RN).

Researchers noted that Russia’s influence tactics, including astroturfing and meme wars, have been used previously during the 2016 US presidential elections and the 2022 French presidential elections to support RN figurehead Marine Le Pen. The Kremlin’s current efforts aim to exploit ongoing global conflicts, such as the Israeli-Palestinian conflict, to influence French political dynamics.

Despite these findings, the actual impact of these disinformation campaigns remains uncertain. Some experts argue that while such interference may sway voter behaviour or amplify tensions, the overall effect is limited. The CNRS study focused on activity on X (formerly Twitter) and acknowledged that further research is needed to understand the broader implications of these digital disruptions.

Microsoft settles California leave discrimination case for $14 million

Microsoft will pay $14.4 million to settle a discrimination case alleging that the company illegally penalised workers for taking medical and family-care leave. The settlement, pending a judge’s approval, will conclude a lengthy investigation by the Civil Rights Department, and the money will go to the affected workers.

The California Civil Rights Department had filed accusations in state court against the tech giant, claiming that since 2017, the company has been unfairly penalising its California employees for taking parental, disability, pregnancy, and family-care leave by withholding raises, promotions, or stock awards. According to the department, many of the affected workers were women and people with disabilities, who received lower performance reviews, thereby impacting their overall career growth.

Microsoft, however, stated that it did nothing wrong and disagreed with the accusations. Nonetheless, alongside the $14.4 million settlement, Microsoft has agreed to bring in an independent consultant to ensure its policies are fair to employees taking leave. The consultant will also ensure that workers can voice their concerns without repercussions. Additionally, Microsoft will train managers and HR staff to prevent future violations of employment rights in the workplace.

Meta responds to photo tagging issues with new AI labels

Meta has announced a significant update to how AI labels are used across its platforms, replacing the ‘Made with AI’ tag with ‘AI info’. The change comes after widespread complaints about photos being incorrectly tagged. For instance, a historical photograph captured on film four decades ago was mistakenly labelled as AI-generated after being uploaded with only basic edits, such as Adobe’s cropping tool.

Kate McLaughlin, a spokesperson for Meta, emphasised that the company is continuously refining its AI products and collaborating closely with industry partners on AI labelling standards. The new ‘AI info’ label aims to clarify that content may have been modified with AI tools rather than solely created by AI.

The issue primarily stems from how editing tools like Adobe Photoshop write metadata to images and how platforms then interpret that metadata. Following the expansion of Meta’s AI content labelling policies, ordinary photos shared on its platforms, such as Instagram and Facebook, were erroneously tagged as ‘Made with AI’.

Initially, the updated labelling will roll out on mobile apps before extending to web platforms. Clicking on the ‘AI info’ tag will display a message similar to the previous label, explaining why it was applied and acknowledging the use of AI-powered editing tools like Generative Fill. Despite advancements in metadata tagging technology like C2PA, distinguishing between AI-generated and authentic images remains a work in progress.