Meta develops AI technology tailored specifically for Europe

Meta Platforms, the owner of Facebook, announced it is developing AI technology tailored specifically for Europe, taking into account the region’s linguistic, geographic, and cultural nuances. The company will train its large language models using publicly shared content from its platforms, including Instagram and Facebook, ensuring that private posts are excluded to maintain user privacy.

Last month, Meta revealed plans to inform Facebook and Instagram users in Europe and the UK about how their public information is utilised to enhance and develop AI technologies. The move aims to increase transparency and reassure users about data privacy.

By focusing on localised AI development, Meta hopes to serve the European market better, reflecting the region’s diverse characteristics in its technology offerings. That effort underscores Meta’s commitment to respecting user privacy while advancing its AI capabilities.

LinkedIn disables targeted ads tool to comply with EU regulations

In a move to align with EU’s technology regulations, LinkedIn, the professional networking platform owned by Microsoft, has disabled a tool that facilitated targeted advertising. The decision comes in adherence to the Digital Services Act (DSA), which imposes strict rules on tech companies operating within the EU.

The move by LinkedIn followed a complaint by several civil society organizations, including European Digital Rights (EDRi), Gesellschaft für Freiheitsrechte (GFF), Global Witness, and Bits of Freedom, to the European Commission. These groups raised concerns that LinkedIn’s tool might allow advertisers to target users based on sensitive personal data such as racial or ethnic origin, political opinions, and other personal details due to their membership in LinkedIn groups.

In March, the European Commission had sent a request for information to LinkedIn after these groups highlighted potential violations of the DSA. The DSA requires online intermediaries to provide users with more control over their data, including an option to turn off personalised content  and to disclose how algorithms impact their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising. In recent years, the EU has been at the forefront of enforcing data privacy and protection laws, notably with the GDPR. The DSA builds on these principles, focusing more explicitly on the accountability of online platforms and their role in shaping public discourse.

A LinkedIn spokesperson emphasised that the platform remains committed to supporting its users and advertisers, even as it navigates these regulatory changes. “We are continually reviewing and updating our processes to ensure compliance with applicable laws and regulations,” the spokesperson said. “Disabling this tool is a proactive step to align with the DSA’s requirements and to maintain the trust of our community.” EU industry chief Thierry Breton commented on LinkedIn’s move, stating, “The Commission will monitor the effective implementation of LinkedIn’s public pledge to ensure full compliance with the DSA.” 

Why does it matter?

The impact of LinkedIn’s decision extends beyond its immediate user base and advertisers. Targeted ads have been a lucrative source of income for social media platforms, allowing advertisers to reach niche markets with high precision. By disabling this tool, LinkedIn is setting a precedent for other tech companies to follow, highlighting the importance of regulatory compliance and user trust.

New York lawmakers pass bills on social media restrictions

New York state lawmakers have passed new legislation to restrict social media platforms from showing ‘addictive’ algorithmic content to users under 18 without parental consent. The measure to implement aims to mitigate online risks to children, making New York the latest state to take such action. A companion bill was also passed, which limits online sites from collecting and selling the personal data of minors.

Governor Kathy Hochul is expected to sign both bills into law, calling them a significant step toward addressing the youth mental health crisis and ensuring a safer digital environment. The legislation could impact revenues for social media companies like Meta, which generated significant income from advertising to minors.

While industry associations have criticised the bills as unconstitutional and an assault on free speech, proponents argue that the measures are necessary to protect adolescents from mental health issues linked to excessive social media use. The SAFE (Stop Addictive Feeds Exploitation) for Kids Act will require parental consent for minors to view algorithm-driven content instead of providing a chronological feed of followed accounts and popular content.

The New York Child Data Protection Act, the companion bill, will bar online sites from collecting, using, or selling the personal data of minors without informed consent. Violations could result in significant penalties, adding a layer of protection for young internet users.

Google Play cracks down on AI apps amid deepfake concerns

Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.

The move comes in the wake of alarming reports highlighting the ease with which these apps can manipulate photos to create realistic yet fabricated nude images of individuals. Reports have surfaced about apps like ‘DeepNude’ and its clones, which can strip clothes from images of women to produce highly realistic nude photos. Another report detailed the widespread availability of apps that could generate deepfake videos, leading to significant privacy invasions and the potential for harassment and blackmail.

Apps offering AI features have to be ‘rigorously tested’ to safeguard against prompts that generate restricted content and have to provide a way for users to signal it. Google strongly suggests that developers document the recommended tests before launching them, as Google could ask them to be reviewed in the future. Additionally, developers can’t advertise that their app breaks any of Google Play’s rules at the risk of getting banned from the app store. The company is also publishing other resources and best practices, like its People + AI Guidebook, which aims to support developers building AI apps.

Why Does It Matter?

The proliferation of AI-driven deepfake apps on platforms like Google Play undermine personal privacy and consent by allowing anyone to generate highly realistic and often explicit content of individuals without their knowledge or consent. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting both individuals and public figures alike.

AI cameras to catch road offenders in North Lincolnshire and East Yorkshire

A new mobile camera unit will be deployed in UK’s East Yorkshire and North Lincolnshire to catch drivers using mobile phones and those not wearing seat belts. In partnership with National Highways, Safer Roads Humber will operate the AI-equipped camera for a week starting Monday, 10 June. The AI technology identifies potential lawbreakers, with images reviewed by an officer to confirm violations before prosecution.

Offenders face significant penalties: a £200 fine and six points on their licence for using a handheld phone while driving, and a £100 fine for not wearing a seat belt. Drivers are also responsible for ensuring passengers under 14 are belted in. Sometimes, offenders may be offered an educational course instead of prosecution.

Ian Robertson from Safer Roads Humber highlighted the enhanced enforcement capabilities of this new equipment. While their current safety camera vans already detect such offences, the advanced AI technology of the new unit provides added capacity to improve road safety.

The Snowflake cyberattack could become one of the biggest data breaches ever

A recent hack targeting customers of the cloud storage company Snowflake is shaping up to be one of the largest data breaches ever. Criminal hackers have been attempting to access accounts using stolen login details, impacting notable companies like Ticketmaster and Santander. Snowflake initially reported that only a limited number of customer accounts were accessed. Still, cybercriminals have since claimed to be selling data from other major firms, including Advance Auto Parts and LendingTree.

The situation has escalated, with hundreds of Snowflake customer passwords found online and accessible to cybercriminals. The breach underscores the rising use of infostealer malware, which extracts login details from compromised devices. Snowflake, in collaboration with cybersecurity firms CrowdStrike and Mandiant, has determined that the attack primarily targeted accounts with single-factor authentication. The company urges customers to enable multifactor authentication to mitigate the risk.

While the origin of the stolen data remains unclear, it highlights the vulnerabilities inherent in interconnected services provided by third-party vendors. Companies like Snowflake increasingly advise their clients to enforce strict security measures and reset login credentials to prevent further breaches. The US Cybersecurity and Infrastructure Security Agency and Australian Cyber Security Center have issued alerts regarding the incident, emphasising the need for enhanced cybersecurity practices.

Berlin set to launch world’s first cyber brothel

Later this month, Berlin will see the launch of the world’s first cyber brothel, offering customers the opportunity to book time with AI sex dolls. The new service, spearheaded by Cybrothel founder Philipp Fussenegger, allows users to interact verbally and physically with the AI dolls, catering to a growing demand for more interactive AI experiences in the adult entertainment industry.

Generative AI is increasingly being integrated into the adult entertainment sector. A report by SplitMetrics shows that AI companion apps have been downloaded 225 million times on the Google Play Store, indicating a lucrative market. These AI companions often charge fees and collect user data, which is frequently shared with third parties, raising privacy concerns.

Experts have voiced significant concerns about the potential harms of merging AI with adult entertainment. Issues include the reinforcement of gender stereotypes, addiction risks, and privacy violations. AI chatbots, according to Mozilla’s privacy researcher Misha Rykov, target lonely individuals and can exacerbate mental health challenges. Furthermore, content warnings for themes of abuse, violence, and underage relationships have been added to several AI chatbots by Mozilla.

Despite these concerns, some industry leaders argue that AI can enhance the sexual experience without replacing human interaction. Philipp Hamburger from Lovehoney emphasises AI’s role in ethically improving user experience. Additionally, Ruben Cruz from The Clueless Agency believes AI can help mitigate ethical issues by preventing the explicit sexualisation of real individuals in adult content. However, the broader impact on real-world relationships and the potential for harmful assumptions about consent remain critical issues that need addressing.

EU banks’ increasing reliance on US tech giants for AI raises concerns

According to European banking executives, the rise of AI is increasing banks’ reliance on major US tech firms, raising new risks for the financial industry. AI, already used in detecting fraud and money laundering, has gained significant attention following the launch of OpenAI’s ChatGPT in late 2022, with banks exploring more applications of generative AI.

At a fintech conference in Amsterdam, industry leaders expressed concerns about the heavy computational power needed for AI, which forces banks to depend on a few big tech providers. Bahadir Yilmaz, ING’s chief analytics officer, noted that this dependency on companies like Microsoft, Google, IBM, and Amazon poses one of the biggest risks, as it could lead to ‘vendor lock-in’ and limit banks’ flexibility. These facts also imply the strong impact AI could have on retail investor protection.

Britain has proposed regulations to manage financial firms’ reliance on external tech companies, reflecting concerns that issues with a single cloud provider could disrupt services across multiple financial institutions. Deutsche Bank’s technology strategy head, Joanne Hannaford, highlighted that accessing the necessary computational power for AI is feasible only through Big Tech.

The European Union’s securities watchdog recently emphasised that banks and investment firms must protect customers when using AI and maintain boardroom responsibility.

Daixin Team claims Dubai ransomware attack

Dubai, known for its ultra-luxurious lifestyle and wealthy population, has reportedly fallen victim to a ransomware attack by the Daixin Team. The cybercriminal group claimed on their dark blog to have exfiltrated 60-80GB of sensitive data from the Government of Dubai’s network systems, including ID cards, passports, and other personally identifiable information (PII).

The stolen data, which has not yet been fully analysed or released, reportedly includes many personal and business records. Among the sensitive information are details about the residents of this city in the UAE, many of whom are expatriates and high-net-worth individuals. Due to the city’s high concentration of wealthy residents, this data breach poses significant risks, such as identity theft and targeted phishing attacks.

The Daixin Team, a Russian-speaking ransomware group active since at least June 2022, is known for targeting various sectors, including healthcare and utilities. They typically gain access through compromised VPN servers or phishing attacks and often publish stolen data if ransom demands are not met. The Government of Dubai has been contacted for comment but has not yet responded.

Meta faces EU complaints over AI data use

Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).

Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.

In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of GDPR, Meta could face strict fines.