On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability needed to address potential risks. The letter highlights AI safety concerns such as the deepening of inequalities, misinformation, and the loss of control over autonomous systems, potentially leading to catastrophic outcomes.
The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.
The letter also calls for AI companies to commit to a set of principles in order to maintain a certain level of accountability and transparency:
- not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for risk-related criticism;
- to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise;
- to support a culture of open criticism and allow current and former employees to raise risk-related concerns about the company’s technologies to the public, to the company’s board, to regulators, or to an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
Why does it matter?
In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique of OpenAI comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development. Ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, is a pivotal safeguard for informing the public and decision-makers about AI’s potential capabilities and risks.
Italy’s antitrust regulator AGCM (Autorità Garante della Concorrenza e del Mercato) has fined Meta, the owner of Facebook and Instagram, for unfair commercial practices. The authority imposed a fine of €3.5 million on Meta Platforms Ireland Ltd. and parent company Meta Platforms Inc. for two deceptive business practices in the creation and management of Facebook and Instagram accounts.
Specifically, the watchdog found that Instagram users were not adequately informed about how their personal data was used for commercial purposes, and that users of both platforms were not given proper information on how to contest account suspensions.
Meta has already addressed these issues, according to the regulator. A Meta spokesperson expressed disagreement with AGCM’s decision and mentioned that the company is considering its options. They also highlighted that since August 2023, Meta has implemented changes for Italian users to increase transparency about data usage for advertising on Instagram.
The complaints, filed by the privacy advocacy group NOYB over Microsoft 365 Education, target two practices. The first complaint alleges that Microsoft’s contracts with schools attempt to shift responsibility for GDPR compliance onto them, despite schools lacking the capacity to monitor or enforce Microsoft’s data practices. That could result in children’s data being processed in ways that do not comply with GDPR. The second complaint highlights the use of tracking cookies within the Microsoft 365 Education software, which reportedly collect user browsing data and analyse user behaviour, potentially for advertising purposes.
NOYB claims that such tracking practices occur without users’ consent or the schools’ knowledge, and there appears to be no legal justification for it under GDPR. They request that the Austrian Data Protection Authority investigate the complaints and determine the extent of data processing by Microsoft 365 Education. The group has also urged the authority to impose fines if GDPR violations are confirmed.
Microsoft has not yet responded to the complaints, although the company has previously stated that Microsoft 365 Education complies with GDPR and other applicable privacy laws and that it thoroughly protects the privacy of its young users.
Following the adoption of EU rules that ban real-time facial recognition in public spaces but allow some exceptions for law enforcement, the Swedish government has ordered an inquiry into expanded powers for the police to use camera surveillance, including facial recognition technology (FRT). The EU exceptions cover searching for missing people or specific suspected victims of human trafficking, preventing imminent threats such as a terrorist attack, and locating individuals suspected of committing certain criminal offences.
The Swedish police plan to integrate facial recognition into their daily operations by leveraging a database containing over 40,000 facial images of individuals who have been detained or arrested. This technology enables law enforcement to quickly compare these images with footage from closed-circuit television (CCTV), streamlining the process of identifying suspects and potentially speeding up investigations.
Why does it matter?
The deployment of FRT by Swedish police is governed by stringent regulations to ensure compliance with both national and EU data protection laws, aligning with Sweden’s Crime Data Act and the EU’s Law Enforcement Directive (LED). This compliance is crucial to addressing concerns about privacy and civil liberties, which are often raised in discussions about surveillance technologies. The adoption of FRT in Sweden is part of a broader trend within Europe, where several countries are exploring or have already implemented similar technologies. For example, Dutch police utilise a substantial biometric database to aid their law enforcement efforts.
New York lawmakers are preparing to ban social media companies from using algorithms to curate content shown to minors without parental consent. The legislation, expected to be voted on this week, aims to protect minors from addictive automated feeds and from notifications during overnight hours unless parents approve. The move comes as social media platforms face increasing scrutiny over their addictive design and its impact on young people’s mental health.
Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.
Why does it matter?
The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.
Spain’s data protection authority, AEPD, has temporarily suspended two Meta products planned for deployment on its social media platforms, Facebook and Instagram, during the upcoming European elections. The tools, named ‘Election Day Information’ (EDI) and ‘Voter Information Unit’ (VIU), potentially violate data protection regulations in Spain, according to AEPD. Meta, formerly Facebook, has contested this decision, stating that the tools were designed to respect users’ privacy and comply with GDPR standards.
Meta’s proposed data processing methods, aimed at sending notifications to eligible users reminding them to vote, raised concerns for AEPD. The agency highlighted that Meta’s selection of eligible voters based on user profile data such as city of residence and IP addresses was contrary to Spanish data protection regulations. AEPD deemed this data processing unnecessary, disproportionate, and excessive, as it excluded EU citizens living abroad and targeted non-EU citizens in Europe.
Furthermore, AEPD criticised Meta’s data collection practices regarding users’ ages, stating there was no reliable mechanism to verify self-reported ages. Additionally, the watchdog found Meta’s treatment of interaction data disproportionate to the stated purpose of informing about the elections. Moreover, Meta failed to justify the need to retain the collected data after the election, indicating potential additional purposes for the processing operation, according to AEPD.
TikTok is developing a separate recommendation algorithm for its 170 million US users to address concerns from American lawmakers who are pushing to ban the app. The effort, initiated by ByteDance, TikTok’s Chinese parent company, involves separating millions of lines of code to create an independent US version, potentially paving the way for a divestiture of US assets.
The initiative, which predates the bill mandating the sale of TikTok’s US operations, is a response to bipartisan concerns that the app could give Beijing access to extensive user data. Despite ByteDance’s legal challenge to the new law, engineers continue to work on the complex and lengthy process of code separation, which is expected to take over a year.
TikTok has stated that selling its US assets is not feasible, citing commercial, technological, and legal constraints. However, the company is exploring options to demonstrate its US operations’ independence, including possibly open-sourcing parts of its algorithm. The success of this separation project could impact TikTok US’s performance, which currently relies on ByteDance’s engineering resources.
The European Securities and Markets Authority (ESMA) has issued its first statement on AI, emphasising that banks and investment firms in the EU must uphold boardroom responsibility and legal obligations to safeguard customers when using AI. ESMA’s guidance, aimed at entities regulated across the EU, outlines how these firms can integrate AI into their daily operations while complying with the EU’s MiFID securities law.
While AI offers opportunities to enhance investment strategies and client services, ESMA underscores its inherent risks, particularly regarding the protection of retail investors. The authority stresses that management bodies remain ultimately responsible for decisions, regardless of whether humans or AI-based tools make them. ESMA emphasises the importance of acting in clients’ best interests, irrespective of the tools firms choose to employ.
ESMA’s statement extends beyond the direct development or adoption of AI tools by financial institutions, also addressing the use of third-party AI technologies. Whether firms utilise platforms like ChatGPT or Google Bard with or without senior management’s direct knowledge, ESMA emphasises the need for management bodies to understand and oversee the application of AI technologies within their organisations.
The guidance aligns with the forthcoming EU rules on AI, set to take effect next month, which establish a potential global standard for AI governance across various sectors. Additionally, efforts are underway at the global level, led by the Group of Seven (G7) economies, to establish safeguards for the safe and responsible development of AI technology.
In just over two months, Paris will host the eagerly awaited 2024 Summer Olympics, welcoming athletes from around the globe. These athletes had a condensed preparation period due to the COVID-related delay of the 2020 Summer Olympics, which took place in Tokyo in 2021. While athletes hone their skills for the upcoming games, organisers diligently fortify their defences against cybersecurity threats.
As cyber threats become increasingly sophisticated, there’s a growing focus on leveraging AI to combat them. Blackbird.AI has developed Constellation, an AI-powered narrative intelligence platform that identifies and analyses disinformation-driven narratives. By assessing the risk and adding context to these narratives, Constellation equips organisations with invaluable insights for informed decision-making.
The platform’s real-time monitoring capability allows for early detection and mitigation of narrative attacks, which can inflict significant financial and reputational damage. With the ability to analyse various forms of content across multiple platforms and languages, Constellation offers a comprehensive approach to combating misinformation and safeguarding against online threats.
Meanwhile, the International Olympic Committee (IOC) is also embracing AI, recognising its potential to enhance various aspects of sports. From talent identification to improving judging fairness and protecting athletes from online harassment, the IOC is leveraging AI to innovate and enhance the Olympic experience. With cybersecurity concerns looming, initiatives like Viginum, spearheaded by French President Emmanuel Macron, aim to counter online interference and ensure the security of major events like the Olympics.
According to Ireland’s Data Protection Commission, leading global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc’s stringent data protection laws. The body, which oversees compliance for major firms like Google, Meta, Microsoft, TikTok, and OpenAI, has yet to exercise its full regulatory power over AI but may enforce significant changes to business models to uphold data privacy.
AI introduces several potential privacy issues, such as whether companies can use public data to train AI models and what the legal basis for using personal data is. AI operators must also guarantee individuals’ rights, including the right to have their data erased, and address the risk of AI models generating incorrect personal information. Significant engagement has been noted from tech giants seeking guidance on their AI innovations, particularly large language models.
Following consultations with the Irish regulator, Google has already agreed to delay and modify its Gemini AI chatbot. While Ireland leads regulation due to many tech firms’ EU headquarters being located there, other EU regulators can influence decisions through the European Data Protection Board. AI operators must comply with the new EU AI Act and the General Data Protection Regulation, which imposes fines of up to 4% of a company’s global turnover for non-compliance.
Why does it matter?
Ireland’s broad regulatory authority means that companies failing to perform due diligence on new products could be forced to alter their designs. As the EU’s AI regulatory landscape evolves, these tech firms must navigate both the AI Act and existing data protection laws to avoid substantial penalties.