FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has named Royal Tiger as the first officially designated AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. The group has used advanced techniques such as AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Recommended precautions include treating unsolicited calls with scepticism, using call-blocking services, and verifying callers’ identities by contacting official numbers directly. Not sharing personal information over the phone until a caller’s legitimacy is confirmed is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.

Meta halts AI launch in Europe after EU regulator ruling

Meta’s main EU regulator, the Irish Data Protection Commission (DPC), requested that the company delay the training of its large language models (LLMs) on content published publicly by adults on the company’s platforms. In response, Meta announced it would not be launching its AI in Europe for the time being.

The main reason behind the request is Meta’s plan to use this data to train its AI models without explicitly seeking consent. The company claims it must do so or else its AI ‘won’t accurately understand important regional languages, cultures or trending topics on social media’; it is already developing continent-specific AI technology. Another cause for concern is Meta’s use of information belonging to people who do not use its services. In a message to its Facebook users, the company said that it may process information about non-users if they appear in an image or are mentioned on its platforms.

The DPC welcomed Meta’s decision to delay its implementation. The commission is leading the regulation of Meta’s AI tools on behalf of EU data protection authorities (DPAs), 11 of which received complaints from the advocacy group NOYB (None Of Your Business). NOYB argues that the GDPR is flexible enough to accommodate this AI, as long as Meta asks for users’ consent. The delay comes right before Meta’s new privacy policy comes into force on 26 June.

Beyond the EU, the executive director of the UK’s Information Commissioner’s Office welcomed the delay, adding that ‘in order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.’

European groups urge fairness in EU cybersecurity label for Big Tech

A proposed cybersecurity certification scheme (EUCS) for cloud services has raised concerns among 26 industry groups across Europe, who caution against potential discrimination towards major US tech firms like Amazon, Alphabet’s Google, and Microsoft. The European Commission, EU cybersecurity agency ENISA, and EU countries are set to discuss the scheme, which has seen multiple revisions since its draft release in 2020. The EUCS aims to help governments and businesses select secure and reliable cloud vendors, a critical consideration in the rapidly growing global cloud computing industry.

The latest version of the scheme, updated in March, removed stringent sovereignty requirements that would have forced US tech giants to form joint ventures or collaborate with EU-based companies to handle data within the bloc, a criterion for earning the highest EU cybersecurity label. In a joint letter, the industry groups argued for a non-discriminatory EUCS that fosters the free movement of cloud services across Europe, aligning with industry best practices and supporting Europe’s digital goals and security resilience.

The signatories, which include various chambers of commerce and industry associations from several European countries, emphasised the importance of diverse and resilient cloud technologies for their members to compete globally. They welcomed the removal of ownership controls and specific data protection requirements, arguing that these changes would ensure cloud security improvements without discriminating against non-EU companies.

EU cloud vendors like Deutsche Telekom, Orange, and Airbus have advocated for sovereignty requirements, fearing non-EU government access to European data under foreign laws. However, the industry groups contend that the inclusive approach of the revised EUCS will better serve Europe’s digital and security needs while promoting a competitive market environment.

Japan mandates access for third-party apps

Japan has passed a new law requiring tech giants like Google and Apple to open their platforms to third-party smartphone apps and payment systems, threatening substantial fines for non-compliance. Like the EU’s Digital Markets Act, the legislation mandates fair access to operating systems, browsers, and search engines, with fines reaching up to 30% of revenue for continued anti-competitive behaviour.

The law was approved by Japan’s National Diet with no amendments and aims to align Japan’s digital market regulations with those of the United States and Europe. The move is intended to foster fair competition and improve the competitive environment for software such as app stores, while ensuring consumer security. The law is set to take effect by the end of 2025.

Japan’s Fair Trade Commission highlighted the necessity for this new legal framework to address the dominance of major tech companies. Although the law does not explicitly name companies, it targets those like Google and Apple, often seen as a ‘duopoly’ in the smartphone app market. The EU’s similar regulatory efforts, particularly the Digital Markets Act, have faced criticism from Apple regarding potential risks to user privacy and security.

India’s EU-inspired antitrust law raises concerns among tech giants

India’s recent legislative push to implement antitrust laws like those in the EU has stirred significant concern among technology giants operating within the country, such as Google, Meta, Apple, and Amazon. The move, aimed at curbing the dominance of big tech companies and fostering a more competitive market environment, has met with a mixed reception, particularly within the technology sector.

The proposed antitrust law draws inspiration from the regulatory framework of the EU, which has been at the forefront of global antitrust enforcement. The EU’s regulations are known for their rigorous scrutiny of large tech corporations, often resulting in major fines and operational restrictions for companies that violate competition laws. Adopting this model signals a shift towards more assertive regulatory practices in India’s tech industry.

The Indian government is examining a panel’s report proposing a new ‘Digital Competition Bill’ to complement existing antitrust laws. The bill would target ‘systemically significant digital’ companies with a domestic turnover exceeding $480 million or a global turnover over $30 billion, along with a local user base of at least 10 million for their digital services. Companies would be required to operate in a fair and non-discriminatory manner, with the bill recommending a penalty of up to 10% of a company’s global turnover for violations, mirroring the EU’s Digital Markets Act. Big digital companies would be prohibited from exploiting non-public user data and from favouring their own products or services on their platforms. Additionally, they would be barred from restricting users’ ability to download, install, or use third-party apps in any way, and would have to allow users to select default settings freely.
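For illustration, the designation test described above boils down to a simple check: a company qualifies if it crosses either turnover threshold and also meets the user-base threshold. The sketch below uses the figures reported above, but the function name, structure, and sample inputs are hypothetical illustrations, not drawn from the bill’s text.

```python
# Hypothetical sketch of the reported thresholds in India's proposed
# Digital Competition Bill. The figures come from the summary above;
# every name and data structure here is an illustrative assumption.

def is_systemically_significant(domestic_turnover_usd: float,
                                global_turnover_usd: float,
                                local_users: int) -> bool:
    """A company qualifies if it crosses either turnover threshold
    and has the required local user base."""
    meets_turnover = (domestic_turnover_usd > 480e6      # $480 million in India
                      or global_turnover_usd > 30e9)     # $30 billion worldwide
    meets_user_base = local_users >= 10_000_000          # 10 million local users
    return meets_turnover and meets_user_base

# Made-up figures for a large platform:
print(is_systemically_significant(
    domestic_turnover_usd=1.2e9,    # $1.2bn in India (hypothetical)
    global_turnover_usd=250e9,      # $250bn worldwide (hypothetical)
    local_users=400_000_000,        # 400m Indian users (hypothetical)
))  # True: both the turnover and user-base tests are met
```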

Both domestic and international tech firms have voiced concerns about the potential impact of these regulations on their operations. A key US lobby group has already opposed the move, fearing its business impact. The primary worry is that the new rules could stifle innovation and impose onerous compliance burdens on companies. That sentiment echoes the broader global debate on the balance between regulation and innovation in the tech sector.

Why does it matter?

  • Market Dynamics: These laws could significantly alter the competitive landscape in India’s tech industry, making it easier for smaller companies to challenge established giants.
  • Consumer Protection: Robust antitrust regulations are designed to protect consumers from monopolistic practices that can lead to higher prices, reduced choices, and stifled innovation. Ensuring fair competition can enhance consumer welfare.
  • Global Influence: By aligning its regulatory framework with that of the EU, India could influence how other emerging markets approach antitrust issues.
  • Investment Climate: Clear and consistent regulatory standards can attract foreign investment by providing a predictable business environment. However, the perceived stringency of these laws could also deter some investors concerned about compliance costs and regulatory risks.

LinkedIn disables targeted ads tool to comply with EU regulations

In a move to align with the EU’s technology regulations, LinkedIn, the professional networking platform owned by Microsoft, has disabled a tool that facilitated targeted advertising. The decision comes in adherence to the Digital Services Act (DSA), which imposes strict rules on tech companies operating within the EU.

The move by LinkedIn followed a complaint to the European Commission by several civil society organisations, including European Digital Rights (EDRi), Gesellschaft für Freiheitsrechte (GFF), Global Witness, and Bits of Freedom. These groups raised concerns that LinkedIn’s tool might allow advertisers to target users based on sensitive personal data, such as racial or ethnic origin or political opinions, inferred from their membership in LinkedIn groups.

In March, the European Commission sent a request for information to LinkedIn after these groups highlighted potential violations of the DSA. The DSA requires online intermediaries to give users more control over their data, including an option to turn off personalised content, and to disclose how algorithms shape their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising. In recent years, the EU has been at the forefront of enforcing data privacy and protection laws, notably with the GDPR. The DSA builds on these principles, focusing more explicitly on the accountability of online platforms and their role in shaping public discourse.

A LinkedIn spokesperson emphasised that the platform remains committed to supporting its users and advertisers, even as it navigates these regulatory changes. “We are continually reviewing and updating our processes to ensure compliance with applicable laws and regulations,” the spokesperson said. “Disabling this tool is a proactive step to align with the DSA’s requirements and to maintain the trust of our community.” EU industry chief Thierry Breton commented on LinkedIn’s move, stating, “The Commission will monitor the effective implementation of LinkedIn’s public pledge to ensure full compliance with the DSA.” 

Why does it matter?

The impact of LinkedIn’s decision extends beyond its immediate user base and advertisers. Targeted ads have been a lucrative source of income for social media platforms, allowing advertisers to reach niche markets with high precision. By disabling this tool, LinkedIn is setting a precedent for other tech companies to follow, highlighting the importance of regulatory compliance and user trust.

New York files lawsuit over $1 billion crypto scams targeting immigrants

New York Attorney General Letitia James has filed a lawsuit against NovaTech Ltd and AWS Mining Pty Ltd, accusing them of defrauding immigrant communities, particularly Haitians, out of over $1 billion. The suit alleges that these companies lured investors with promises of high returns, leveraging religious faith to gain trust. Instead of using the funds for legitimate trading, the companies allegedly ran pyramid and Ponzi schemes, paying existing investors with funds collected from new ones. AWS Mining and its promoters, Cynthia and Eddy Petion, James Corbett, Martin Zizi, and Frantz Ciceron, promised investors 15 to 20 percent monthly returns, 200 percent returns on investments within 15 months, and bonuses for recruiting new investors.
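To put those promises in perspective, a back-of-the-envelope calculation is telling. The rates are those reported above; reading ‘200 percent returns’ as tripling the principal is our assumption, and the snippet is purely illustrative.

```python
# Illustrative arithmetic only. The promised rates come from the summary
# above; interpreting '200% returns' as 3x the principal is an assumption.

monthly_rate = 0.15  # low end of the promised 15-20% per month

# Compounding 15% per month over a year:
annual_multiple = (1 + monthly_rate) ** 12
print(f"15% monthly compounds to {annual_multiple:.1f}x the principal in "
      f"12 months (about {annual_multiple - 1:.0%} per year)")

# The '200% within 15 months' promise, read as tripling the principal,
# implies a steady monthly rate of:
implied_monthly = 3 ** (1 / 15) - 1
print(f"200% in 15 months implies roughly {implied_monthly:.1%} per month")
```

Returns of that magnitude, roughly 435 percent a year at the low end, are far beyond anything legitimate trading sustains, which is consistent with the suit’s allegation that payouts were funded by new deposits.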

However, the company failed to generate sufficient returns to pay these promised profits and bonuses, leading to its collapse in 2019 and causing millions of dollars in losses. Following AWS Mining’s collapse, Cynthia and Eddy Petion launched NovaTech, continuing to lure investors with promises of high returns and recruitment bonuses. They targeted minority communities, particularly Haitians, using prayer groups and WhatsApp chats, often advertising in Creole and using religious messages. James said Cynthia Petion branded herself ‘Reverend CEO’ and told investors that NovaTech was ‘God’s vision’, but privately called herself the ‘Zookeeper’ and belittled her investors as a ‘cult’ where ‘they just agree with everything you say.’

NovaTech falsely marketed itself as a registered hedge fund broker, misrepresented its licensing status in the US, and advertised high trading profits. Despite market conditions, NovaTech claimed to pay weekly trading profits, but these were fabricated, with payments coming from new investors’ funds. NovaTech collapsed in May 2023, leaving tens of thousands of investors unable to withdraw their cryptocurrency. An investigation by the Office of the Attorney General (OAG) found that from 2019 to 2023, investors deposited over a billion dollars, but less than $26 million was actually traded. The lawsuit seeks restitution, civil penalties, and a ban on the defendants’ participation in the securities industry.

Why does it matter?

The case sheds light on the susceptibility of immigrant communities to financial scams, particularly within the relatively unregulated cryptocurrency sector. James said in a statement that they are ‘seeing the real dangers of unregulated cryptocurrency platforms with schemes like these.’ By exploiting religious faith and community trust, these fraudulent schemes inflict severe financial harm, often devastating victims’ life savings. The lawsuit seeks to recover the lost funds and hold fraudulent actors accountable, highlighting the need for robust consumer protections and the necessity of enforcing regulations to safeguard vulnerable populations.

EU banks’ increasing reliance on US tech giants for AI raises concerns

According to European banking executives, the rise of AI is increasing banks’ reliance on major US tech firms, raising new risks for the financial industry. AI, already used in detecting fraud and money laundering, has gained significant attention following the launch of OpenAI’s ChatGPT in late 2022, with banks exploring more applications of generative AI.

At a fintech conference in Amsterdam, industry leaders expressed concerns about the heavy computational power needed for AI, which forces banks to depend on a few big tech providers. Bahadir Yilmaz, ING’s chief analytics officer, noted that this dependency on companies like Microsoft, Google, IBM, and Amazon poses one of the biggest risks, as it could lead to ‘vendor lock-in’ and limit banks’ flexibility. Such concentration also has direct implications for retail investor protection.

Britain has proposed regulations to manage financial firms’ reliance on external tech companies, reflecting concerns that issues with a single cloud provider could disrupt services across multiple financial institutions. Deutsche Bank’s technology strategy head, Joanne Hannaford, highlighted that accessing the necessary computational power for AI is feasible only through Big Tech.

The European Union’s securities watchdog recently emphasised that banks and investment firms must protect customers when using AI and maintain boardroom responsibility.

Italian regulator fines Meta over user data misuse

Italy’s antitrust regulator AGCM (Autorità Garante della Concorrenza e del Mercato) has fined Meta, the owner of Facebook and Instagram, for unfair commercial practices. The authority imposed a fine of €3.5 million on Meta Platforms Ireland Ltd. and parent company Meta Platforms Inc. for two deceptive business practices regarding the creation and management of Facebook and Instagram accounts.

Namely, the watchdog stated that Instagram users were not adequately informed about how their personal data was used for commercial purposes and that users of both platforms were not given proper information on contesting account suspensions.

Meta has already addressed these issues, according to the regulator. A Meta spokesperson expressed disagreement with AGCM’s decision and mentioned that the company is considering its options. They also highlighted that since August 2023, Meta has implemented changes for Italian users to increase transparency about data usage for advertising on Instagram.

CODE coalition advocates for open digital ecosystems to drive EU growth and innovation

The Coalition for Open Digital Ecosystems (CODE), a collaborative industry initiative launched in late 2023 by tech giants such as Meta, Google, and Qualcomm, held its first public event in Brussels, advocating for open digital ecosystems to stimulate growth, foster innovation, and empower consumers, particularly given the challenging global context facing the EU’s economy. The event hosted a high-level panel discussion with representatives from Meta, BEUC, the European Parliament, and Copenhagen Business School.

Qualcomm CEO Cristiano Amon gave an interview to Euractiv in which he emphasised CODE’s three key elements of openness: seamless connectivity and interoperability, consumer choice, and an environment of open access. These elements aim to enhance user experience, maintain data access, and provide fair access to digital tools for developers, particularly smaller companies and startups. Amon highlighted the importance of interoperability and fair access for developers, especially as platforms evolve and become more relevant for various devices, including cars. He also stressed the need to give smaller companies with new ideas fair access to participate and reach customers in a competitive environment.

He said that Qualcomm is focused on developing computing engines, such as the Neural Processing Unit (NPU), which is designed to run continuously and handle multiple models. This development aims to add computing capability to various devices while addressing the challenge of integrating the new engine without compromising battery life. Amon also expressed a positive view of the EU’s Digital Markets Act (DMA), applauding European regulators for their focus on the importance of open and interoperable platforms.

Why does it matter?

The panel discussion envisioned a positive scenario for the European digital agenda, highlighting the importance of openness, interoperability, and collaboration for consumers, businesses, and innovation. CODE’s emergence as a new stakeholder in the Brussels digital, tech, and competition policy space highlights the growing recognition of the importance of open digital ecosystems in fostering growth, innovation, and consumer empowerment within the EU’s digital landscape.