Reporters Without Borders (RSF) has praised the Council of Europe’s (CoE) new Framework Convention on AI for its progress but criticised its reliance on private sector self-regulation. The Convention, drawn up within the 46-member CoE, aims to address the impact of AI on human rights, democracy, and the rule of law. While it acknowledges the threat of AI-fuelled disinformation, RSF argues that it fails to provide the mechanisms needed to achieve its goals.
The CoE Convention mandates strict regulatory measures for AI use in the public sector but allows member states to opt for self-regulation in the private sector. RSF considers this distinction a critical flaw, as the private sector, particularly social media companies and other digital service providers, has historically prioritised business interests over the public good. According to RSF, this approach will not effectively combat the disinformation challenges posed by AI.
RSF urges countries that adopt the Convention to implement robust national legislation to strictly regulate AI development and use. That would ensure that AI technologies are deployed ethically and responsibly, protecting the integrity of information and democratic processes. Vincent Berthier, Head of RSF’s Tech Desk, emphasised the need for legal requirements over self-regulation to ensure AI serves the public interest and upholds the right to reliable information.
RSF’s recommendations provide a framework for AI regulation that addresses the shortcomings of both the Council of Europe’s Framework Convention and the European Union’s AI Act, advocating for stringent measures to safeguard the integrity of information and democracy.
According to European banking executives, the rise of AI is increasing banks’ reliance on major US tech firms, raising new risks for the financial industry. AI, already used in detecting fraud and money laundering, has gained significant attention following the launch of OpenAI’s ChatGPT in late 2022, with banks exploring more applications of generative AI.
At a fintech conference in Amsterdam, industry leaders expressed concerns about the heavy computational power needed for AI, which forces banks to depend on a few big tech providers. Bahadir Yilmaz, ING’s chief analytics officer, noted that this dependency on companies like Microsoft, Google, IBM, and Amazon poses one of the biggest risks, as it could lead to ‘vendor lock-in’ and limit banks’ flexibility. These concerns also underscore the significant implications AI could have for retail investor protection.
Britain has proposed regulations to manage financial firms’ reliance on external tech companies, reflecting concerns that issues with a single cloud provider could disrupt services across multiple financial institutions. Deutsche Bank’s technology strategy head, Joanne Hannaford, highlighted that accessing the necessary computational power for AI is feasible only through Big Tech.
The European Union’s securities watchdog recently emphasised that banks and investment firms must protect customers when using AI and maintain boardroom responsibility.
Top officials at the US Federal Election Commission (FEC) are divided over a proposal requiring political advertisements on broadcast radio and television to disclose if their content is generated by AI. FEC Vice Chair Ellen Weintraub backs the proposal, initiated by FCC Chairwoman Jessica Rosenworcel, which aims to enhance transparency in political ads, whereas FEC Chair Sean Cooksey opposes it.
The proposal, which does not ban AI-generated content, comes amid increasing concerns in Washington that such content could mislead voters in the upcoming 2024 elections. Rosenworcel emphasised the risk of ‘deepfakes’ and other altered media misleading the public and noted that the FCC has long-standing authority to mandate disclosures. Weintraub also highlighted the importance of transparency for public benefit and called for collaborative regulatory efforts between the FEC and FCC.
However, Cooksey warned that mandatory disclosures might conflict with existing laws and regulations, creating confusion in political campaigns. Republican FCC Commissioner Brendan Carr criticised the proposal, pointing out inconsistencies in regulation, as the FCC cannot oversee internet, social media, or streaming service ads. The debate gained traction following an incident in January where a fake AI-generated robocall impersonating US President Joe Biden aimed to influence New Hampshire’s Democratic primary, leading to charges against a Democratic consultant.
This week, key players in the chip industry, including Nvidia, Intel, AMD, Qualcomm, and Arm, gathered in Taiwan for the annual Computex conference, announcing an ‘AI PC revolution.’ They showcased AI-enabled personal computers with specialised chips for running AI applications directly on the device, promising a significant leap in user interaction with PCs.
Intel CEO Pat Gelsinger called this the most exciting development since the arrival of WiFi 25 years ago, while Qualcomm’s Cristiano Amon likened it to the industry being reborn. Microsoft has driven this push by introducing AI PCs equipped with its Copilot assistant and choosing Qualcomm as its initial AI chip supplier. Intel and AMD are nonetheless also gearing up to launch their own AI processors soon.
Why does it matter?
The conference was strategically timed to precede Apple’s annual Worldwide Developers Conference, hinting at the competitive landscape in AI advancements. As the PC market shows signs of recovery, analysts predict a rise in AI PC adoption, potentially transforming how PCs are used. However, skepticism remains over whether consumer demand will justify the higher costs of these advanced devices, as the Financial Times reports.
Salesforce has chosen London for its first AI centre, where experts, developers, and customers will collaborate on innovation and skill development. The US cloud software company, which is hosting its annual London World Tour event, announced last year a $4 billion investment in the UK over five years, focusing on AI innovation.
Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, highlighted customer enthusiasm for AI’s benefits while noting caution about potential risks. She emphasised the importance of trust in AI adoption, citing the ‘Trust Layer’ in Salesforce’s Einstein technology, which protects customer data.
Moreover, Salesforce’s dedication to responsible AI goes beyond data security. Bahrololoumi emphasised the company’s commitment to making AI a force for good. The message to customers and partners is clear: the company is committed to working closely with them to ensure that AI’s transformative technology delivers positive outcomes.
Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).
Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.
In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of the GDPR, Meta could face significant fines, which under the regulation can reach up to 4% of a company’s global annual turnover.
Chinese AI chip firms, including industry leaders such as MetaX and Enflame, are downgrading their chip designs to comply with Taiwan Semiconductor Manufacturing Company’s (TSMC) stringent supply chain security protocols and regulatory requirements. This strategic adjustment comes amid heightened scrutiny and restrictions imposed by the US on semiconductor exports to Chinese companies, which include limitations on access to advanced manufacturing technologies critical for AI chip production.
The US has imposed strict export controls to obstruct China’s military advancements in AI and supercomputing. These controls include restrictions on sophisticated processors from companies like Nvidia, as well as on chipmaking equipment crucial for advanced semiconductor production. That move has prevented TSMC and other overseas chip manufacturers using US tools from fulfilling orders for these restricted technologies.
In response to these restrictions, top Chinese AI chip firms MetaX and Enflame reportedly submitted downgraded chip designs to TSMC in late 2023. MetaX, founded by former Advanced Micro Devices (AMD) executives and backed by state support, had to introduce the C280 chip after its more advanced C500 graphics processing unit (GPU) ran out of stock in China earlier in the year. Enflame, also Shanghai-based and supported by Tencent, faces similar challenges.
Why does it matter?
The decision to downgrade chip designs to meet production demands reflects the delicate balance between technological advancement and supply chain resilience. While simplifying designs may expedite production and mitigate supply risks in the short term, it also raises questions about long-term innovation and competitiveness. The ability to innovate and deliver cutting-edge AI technologies hinges on access to advanced chip manufacturing processes, which are increasingly concentrated among a few global players.
On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability needed to address potential risks. The letter highlights AI safety concerns, such as deepening inequalities, misinformation, and loss of control over autonomous systems, which could potentially lead to catastrophic outcomes.
The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.
The letter also calls on AI companies to commit to key principles in order to maintain a certain level of accountability and transparency: not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for risk-related criticism; to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise; and to support a culture of open criticism, allowing current and former employees to raise risk-related concerns about the company’s technologies to the public, the company’s board, regulators, or an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
Why does it matter?
In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique of OpenAI comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development. Ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, represents a pivotal safeguard for informing the public and decision-makers about AI’s potential capabilities and risks.
Last Christmas Eve, NewsBreak, a popular news app, published a false report about a shooting in Bridgeton, New Jersey. The Bridgeton police quickly debunked the story, which had been generated by AI, stating that no such event had occurred. NewsBreak, which operates out of Mountain View, California, and has offices in Beijing and Shanghai, removed the erroneous article four days later, attributing the mistake to its content source.
NewsBreak, known for filling the void left by shuttered local news outlets, uses AI to rewrite news from various sources. However, this method has led to multiple errors, including incorrect information about local charities and fictitious bylines. In response to growing criticism, NewsBreak added a disclaimer about potential inaccuracies to its homepage. With over 50 million monthly users, the app primarily targets a demographic of suburban or rural women over 45 without college degrees.
The company has faced legal challenges over its AI-generated content. NewsBreak settled a $1.75 million copyright infringement lawsuit brought by Patch Media and reached a settlement with Emmerich Newspapers in a similar case. Concerns about the company’s ties to China have also been raised, as half of its employees are based there, prompting worries about data privacy and security.
Despite these issues, NewsBreak maintains that it complies with US data laws and operates on US-based servers. The company’s CEO, Jeff Zheng, emphasises its identity as a US-based business, crucial for its long-term credibility and success.
Young Americans are rapidly embracing generative AI, but few use it daily, according to a recent survey by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving. The survey, conducted in October and November 2023 with 1,274 US teens and young adults aged 14-22, found that only 4% use AI tools daily. Additionally, 41% have never used AI, and 8% are unaware of what AI tools are. The main uses for AI among respondents are seeking information (53%) and brainstorming (51%).
Demographic differences show that 40% of white respondents use AI for schoolwork, compared to 62% of Black respondents and 48% of Latinos. Looking ahead, 41% believe AI will have both positive and negative impacts in the next decade. Notably, 28% of LGBTQ+ respondents expect mostly negative impacts, compared to 17% of cisgender/straight respondents. Young people have varied opinions on AI, as some view it as a sign of a changing world and are enthusiastic about its future, while others find it unsettling and concerning.
Why does it matter?
Young people globally share concerns over AI, which the IMF predicts will affect nearly 40% of jobs worldwide, and up to 60% in advanced economies. By comparison, a survey of 1,000 young Hungarians (aged 15-29) found that frequent users of AI apps are more positive about its benefits, while 38% of occasional users remain skeptical. Additionally, 54% believe humans will maintain control over AI, though 54% of women fear a loss of control, compared with 37% of men.