Survey reveals limited usage of AI tools among general population

A recent study by the Reuters Institute and the University of Oxford sheds light on how little the general population knows about and uses generative AI tools. Despite their prevalence in tech-centric professions, tools like ChatGPT, Gemini, and Copilot remain unfamiliar to many people, with 20-30% of respondents across six countries saying they have not even heard of them.

The survey, conducted among approximately 12,000 participants in Argentina, Denmark, France, Japan, the UK, and the USA, highlights that most people do not use generative AI tools daily. Even OpenAI’s ChatGPT, the most recognised tool, is used daily by only a small fraction of respondents, ranging from 1% in Japan to 7% in the USA. Other popular tools like Google’s Gemini and Microsoft’s Copilot also have limited daily usage.

Generational differences are evident: younger demographics are far more likely to engage with generative AI than older age groups. The study suggests that generative AI is primarily utilised for media generation and information retrieval, with 28% of respondents using it to generate various types of media and 24% using it to gather information.

Respondents anticipate significant impacts of generative AI across sectors such as search engines, social media, news media, and science. However, overall expectations regarding AI’s societal impact lean towards pessimism, particularly concerning issues like the cost of living, equality, and job security.

Meta discovers ‘likely AI-generated’ content praising Israel

Meta reported finding likely AI-generated content used deceptively on Facebook and Instagram, praising Israel’s handling of the Gaza conflict in comments under posts from global news organisations and US lawmakers. This campaign, linked to the Tel Aviv-based political marketing firm STOIC, targeted audiences in the US and Canada by posing as various concerned citizens. STOIC has not commented on the allegations.

Meta’s quarterly security report marks its first disclosure of text-based generative AI being used in influence operations since the technology emerged in late 2022. While AI-generated profile photos have been identified in past operations, the use of text-based AI raises concerns that disinformation campaigns could become more effective. Despite this, Meta’s security team disrupted the Israeli campaign early and remains confident in its ability to detect such networks.

The report detailed six covert influence operations disrupted in the first quarter, including an Iran-based network focused on the Israel-Hamas conflict, which did not use generative AI. As Meta and other tech giants continue to address potential AI misuse, upcoming elections in the EU and the US will test their defences against AI-generated disinformation.

Senators to introduce NO FAKES Act to regulate AI in music and film industries

US senators are set to introduce a bill in June to regulate AI in the music and movie industries amid rising tensions in Hollywood. The NO FAKES Act, an acronym for Nurture Originals, Foster Art, and Keep Entertainment Safe, aims to prohibit the unauthorised creation of AI-generated replicas of individuals’ likenesses or voices.

Senator Chris Coons (D-Del.) is leading the bipartisan effort with Senators Amy Klobuchar (D-Minn.), Marsha Blackburn (R-Tenn.), and Thom Tillis (R-N.C.). They are working with artists in the recording and movie industries on the bill’s details.

Musicians, in particular, are increasingly worried that their names, likenesses, and voices are unprotected from use in AI-generated songs. During the Grammys on the Hill lobbying event, Sheryl Crow noted the urgency of establishing guidelines and safeguards, given the unsettling trend of artists’ voices being used without consent, even posthumously.

However, before considering a national AI bill, senators will need to address several issues, including whether the law will override existing state laws like Tennessee’s ELVIS Act, and how long licensing restrictions and postmortem rights to an artist’s digital replica should last.

As Senate discussions continue, the Recording Academy has backed the bill. The movie industry also supports the regulation but has raised concerns about potential First Amendment infringements. A similar bill, the No AI Fraud Act, is under consideration in the House. Senate Majority Leader Chuck Schumer is also pushing for AI legislation that respects First Amendment principles.

Why does it matter?

Concerns about AI’s impact on the entertainment industry escalated after a dispute between Scarlett Johansson and OpenAI. Johansson accused OpenAI of using an ‘eerily similar’ voice to hers for a new chatbot without her permission. Singers Ariana Grande and Lainey Wilson have also had their voices mimicked without consent. Last year, an anonymous artist released ‘Heart on My Sleeve,’ a track impersonating Drake and The Weeknd that raised alarm bells across the industry.

AI tools deployed to counter cyber threats at 2024 Olympics

In just over two months, Paris will host the eagerly awaited 2024 Summer Olympics, welcoming athletes from around the globe. These athletes had a condensed preparation period due to the COVID-related delay of the 2020 Summer Olympics, which took place in Tokyo in 2021. While athletes hone their skills for the upcoming games, organisers diligently fortify their defences against cybersecurity threats.

As cyber threats become increasingly sophisticated, there’s a growing focus on leveraging AI to combat them. Blackbird.AI has developed Constellation, an AI-powered narrative intelligence platform that identifies and analyses disinformation-driven narratives. By assessing the risk and adding context to these narratives, Constellation equips organisations with invaluable insights for informed decision-making.

The platform’s real-time monitoring capability allows for early detection and mitigation of narrative attacks, which can inflict significant financial and reputational damage. With the ability to analyse various forms of content across multiple platforms and languages, Constellation offers a comprehensive approach to combating misinformation and safeguarding against online threats.
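Blackbird.AI has not published Constellation’s internals, but early detection of narrative attacks typically rests on a familiar pattern: group incoming posts into candidate narratives, then flag any narrative whose volume grows abnormally fast against its own baseline. The Python sketch below is a minimal, hypothetical illustration of that pattern; the post data, narrative labels, and SPIKE_FACTOR threshold are all invented for the example and do not reflect Constellation’s actual implementation.

```python
from collections import Counter, defaultdict

# Toy illustration of narrative-spike detection; NOT Blackbird.AI's actual code.
# Assumes an upstream classifier has already tagged each post with a narrative
# label; here the (hour, narrative) pairs are hard-coded for demonstration.
posts = [
    (0, "venue-safety"), (0, "ticket-scam"),
    (1, "venue-safety"),
    (2, "ticket-scam"), (2, "ticket-scam"),
    (3, "ticket-scam"), (3, "ticket-scam"),
    (3, "ticket-scam"), (3, "ticket-scam"), (3, "venue-safety"),
]

SPIKE_FACTOR = 2.0  # hypothetical threshold: hourly volume vs. running average


def detect_spikes(posts):
    """Flag narratives whose hourly volume exceeds SPIKE_FACTOR x their average."""
    hourly = defaultdict(Counter)  # hour -> narrative -> post count
    for hour, narrative in posts:
        hourly[hour][narrative] += 1

    totals, hours_seen, alerts = Counter(), Counter(), []
    for hour in sorted(hourly):
        for narrative, count in hourly[hour].items():
            if hours_seen[narrative]:  # need some history before comparing
                avg = totals[narrative] / hours_seen[narrative]
                if count > SPIKE_FACTOR * avg:
                    alerts.append((hour, narrative, count, avg))
            totals[narrative] += count
            hours_seen[narrative] += 1
    return alerts


for hour, narrative, count, avg in detect_spikes(posts):
    print(f"hour {hour}: '{narrative}' spiked to {count} posts (avg {avg:.1f}/hour)")
```

A production system would swap the hard-coded labels for a multilingual classifier and stream posts from many platforms, but the early-warning logic keeps this general shape.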

Meanwhile, the International Olympic Committee (IOC) is also embracing AI, recognising its potential to enhance various aspects of sports. From talent identification to improving judging fairness and protecting athletes from online harassment, the IOC is leveraging AI to innovate and enhance the Olympic experience. With cybersecurity concerns looming, initiatives like Viginum, spearheaded by French President Emmanuel Macron, aim to counter online interference and ensure the security of major events like the Olympics.

EU launches AI Office to regulate AI development

The European Commission has launched the AI Office to oversee the development, deployment, and regulation of AI in the EU. The office is tasked with ensuring that AI fosters societal and economic benefits while managing the associated risks. It will play a crucial role in implementing the AI Act, especially for general-purpose AI models, and will support research and innovation to position the EU as a leader in trustworthy AI.

The AI Office comprises several specialised units. The Regulation and Compliance Unit will enforce the AI Act across the EU, working with member states to administer sanctions and handle investigations. The AI Safety Unit will identify and mitigate risks associated with powerful AI models. The Excellence in AI and Robotics Unit will fund research and coordinate the GenAI4EU initiative. The AI for Societal Good Unit will focus on international collaborations in areas like weather modelling and cancer diagnosis. Lastly, the AI Innovation and Policy Coordination Unit will monitor AI trends, stimulate investment, and support testing and regulatory sandboxes.

Led by the Head of the AI Office and advised by a Lead Scientific Adviser and an international affairs expert, the office will employ over 140 staff members. These include technology specialists, lawyers, and policy experts. The AI Office will collaborate with member states and the scientific community through dedicated forums and the European Artificial Intelligence Board. It will also support research and innovation activities, ensuring that AI models developed in Europe are integrated into various applications, thereby stimulating investment.

The AI Office will officially begin its operations on 16 June, with the first meeting of the AI Board scheduled for the end of June. It will issue guidelines on AI system definitions and prohibitions within six months of the AI Act’s entry into force, expected by the end of July 2024. This initiative follows the EU AI Act, provisionally agreed upon in December 2023, and aims to safeguard safety and fundamental rights while fostering innovation and investment in AI across Europe.

EU regulators work with tech giants on AI rules

According to Ireland’s Data Protection Commission, leading global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc’s stringent data protection laws. The commission, which oversees compliance for major firms like Google, Meta, Microsoft, TikTok, and OpenAI, has yet to exercise its full regulatory power over AI but could force significant changes to business models to uphold data privacy.

AI introduces several potential privacy issues, such as whether companies can use public data to train AI models and what legal basis permits the use of personal data. AI operators must also guarantee individuals’ rights, including the right to have their data erased, and address the risk of AI models generating incorrect personal information. Significant engagement has been noted from tech giants seeking guidance on their AI innovations, particularly large language models.

Following consultations with the Irish regulator, Google has already agreed to delay and modify its Gemini AI chatbot. While Ireland leads regulation because many tech firms have their EU headquarters there, other EU regulators can influence decisions through the European Data Protection Board. AI operators must comply with both the new EU AI Act and the General Data Protection Regulation, which imposes fines of up to 4% of a company’s global turnover for non-compliance.

Why does it matter?

Ireland’s broad regulatory authority means that companies failing to perform due diligence on new products could be forced to alter their designs. As the EU’s AI regulatory landscape evolves, these tech firms must navigate both the AI Act and existing data protection laws to avoid substantial penalties.

OpenAI CEO leads safety committee for AI model training

OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee will make safety and security recommendations to OpenAI’s board.

The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. This move follows the disbanding of OpenAI’s Superalignment team earlier this month and the departure of key figures such as former Chief Scientist Ilya Sutskever and team co-lead Jan Leike.

Other members of the new committee include technical and policy experts Aleksander Madry and Lilian Weng, as well as head of alignment science John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also be part of the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.

Leadership vacancy stalls EU AI Office development

Two months after the European Parliament passed the landmark AI Act, the EU Commission office responsible for its implementation remains understaffed and leaderless. Although such a pace is common for public institutions, stakeholders worry it may delay the implementation of the hundreds of pages of the AI Act, especially with some parts coming into effect by the end of the year.

The EU Commission’s Directorate-General for Communication Networks, Content, and Technology (DG Connect), which houses the AI Office, is undergoing a reorganisation. Despite reassurances from officials that preparations are on track, concerns persist about the office’s limited budget, slow hiring process, and the overwhelming workload on the current staff. Three Members of the European Parliament (MEPs) have expressed dissatisfaction with the transparency and progress of the recruitment and leadership processes.

The European Commission has identified 64 deliverables for the AI Office, including prohibitions on certain AI uses that are set to take effect by the year’s end. Codes of practice for general-purpose models, such as ChatGPT, must be developed within nine months of the legislation’s enactment. Despite recent recruitment efforts, including two positions opened in March and additional roles for lawyers and AI ethicists soon to be advertised, the hiring process is expected to take several more months.

Why does it matter?

A significant question remains regarding the leadership of the AI Office. The Commission has yet to announce candidates or details of the selection process. Speculation has centred on MEP Dragoș Tudorache, who has been active in AI policy and is not seeking re-election, though he has not confirmed any post-tenure plans. The Commission aims to finalise the office’s staffing and leadership to ensure the smooth implementation of the AI Act.

China’s AI chipmakers closing gap on global leaders

China’s domestic AI chipmakers are rapidly closing the gap on international leaders, according to Xu Bing, co-founder of SenseTime Group Inc. Despite the significant lag in computational power compared to the US, China possesses the talent and data necessary to advance in the AI field, Xu stated during an interview at the UBS Asian Investment Conference in Hong Kong. SenseTime, a leading AI company in China, faces challenges due to US sanctions that restrict access to advanced AI technology, such as Nvidia’s accelerators.

The US trade controls have spurred the development of domestic alternatives from companies like Huawei Technologies and Shanghai Biren Technology, both also affected by US restrictions. Xu emphasised that although Asia faces a considerable shortfall in computational resources, the region is abundant in talent and data. He noted that China’s AI chip industry is catching up quickly, with SenseTime collaborating with local semiconductor firms to enhance their computing capabilities.

While the exact gap between Chinese and US AI technology is uncertain, with estimates ranging from one to three years, Xu is optimistic that the disadvantage in computing power will be temporary. He believes that, over time, the disparity in computing resources will diminish, viewing computing power as a commodity China will eventually acquire in sufficient quantity. Notable Chinese companies making strides in AI chips include Moore Threads Intelligent Technology (Beijing) Co., Huawei, and other key players like Baidu Inc. and Naura Technology Group Ltd, which have received government attention and support.

Musk’s xAI secures $6 billion investment for AI development

Elon Musk’s AI startup, xAI, has secured a whopping $6 billion in a recent funding round, marking one of the largest deals in the burgeoning AI sector. The funding positions xAI to compete fiercely with industry rivals such as OpenAI, Microsoft, and Google. Among the notable backers are Valor Equity Partners, Vy Capital, Andreessen Horowitz, Sequoia Capital, and Fidelity, alongside prominent figures like Prince Alwaleed Bin Talal and Kingdom Holding.

The funding round, which values xAI at $18 billion pre-money, underscores Musk’s significant presence in AI. An early and prominent figure in AI entrepreneurship, Musk has ventures extending beyond xAI, including his leadership of Tesla, a pioneer in self-driving technologies. His involvement with OpenAI, however, has been contentious, leading to legal disputes and accusations of mission deviation.

xAI, which emerged just last year and is closely tied to the social network X, has already made strides in AI development, notably with Grok 1.0, its ChatGPT-rival chatbot. The company’s recent release of the Grok 1.5 model and its exploration of multimodal capabilities indicate a commitment to advancing AI technologies. Despite its ambitions for ‘truthful’ AI systems, concerns have arisen over Grok’s news summary feature, which has been reported to generate misleading information.

Why does it matter?

With the new financing, xAI aims to bring its initial products to market, enhance infrastructure, and accelerate research and development efforts. Additionally, the company seeks partnerships to expand Grok’s user base beyond X, signalling its intent to scale its AI innovations and influence in the global market.