G7 Italy summit unveils action plan to balance AI risks and opportunities

Adopted on June 14, 2024, at the G7 Summit in Apulia, Italy, the Group of Seven (G7) Leaders’ Communiqué expresses the wealthiest nations’ common pledges and actions to address multiple global issues. A portion of the declaration closing the Italian summit focuses on AI and other digital matters.

G7 leaders called for an action plan to manage AI’s risks and benefits, including developing and implementing the International Code of Conduct for organisations developing advanced AI systems, unveiled last October under the Japanese G7 presidency. To maximise the advantages of AI while mitigating its threats, G7 nations committed to deepening their cooperation.

An action plan for the use of AI in the workplace was announced, together with the creation of a brand to promote the implementation and use of the International Code of Conduct for advanced AI systems, in cooperation with the OECD. G7 leaders stressed the importance of global partnership to bridge the digital divide and ensure that people around the world have access to the benefits of AI and other technologies. The goal is to advance science, improve public health, accelerate the clean energy transition, and promote the sustainable development goals, among other aims.

Why does it matter?

The G7 is encouraging global collaboration within the group of countries, with the OECD, with other initiatives such as the Global Partnership on AI (GPAI), and towards the developing world, to facilitate the equitable distribution of the benefits of AI and other emerging technologies while minimising any threats. G7 leaders aim to bridge technological gaps and address AI’s impact on workers. G7 labour ministers are tasked with designing measures to capitalise on AI’s potential, promote quality employment, and empower people, while also tackling potential barriers and risks to workers and labour markets.

G7 leaders agreed to intensify efforts to promote AI safety and enhance interoperability between diverse approaches to AI governance and risk management. That means strengthening collaboration between the AI Safety Institutes in the US and the UK and equivalent bodies in other G7 nations and beyond, to improve global standards for AI development and implementation. The G7 also formed a ‘Semiconductors Point of Contact Group’ to strengthen cooperative efforts on addressing challenges affecting this critical industry that drives the AI ecosystem.

G7 nations’ commitments are consistent with the recent Seoul AI safety summit efforts and align with the intended goals of the upcoming United Nations Summit of the Future. Echoing the UN General Assembly’s landmark resolution on ‘seizing the opportunities of safe, secure, and trustworthy AI systems for sustainable development’ and Pope Francis’s historic address to the G7 leaders, the communiqué reflects the group’s unified stance on AI safety and the need for a framework for AI’s responsible development and use in the military domain.

Warning labels for social media suggested by US Surgeon General

US Surgeon General Vivek Murthy has called for a warning label on social media apps to highlight the harm these platforms can cause young people, particularly adolescents. In a New York Times op-ed, Murthy emphasised that while a warning label alone won’t make social media safe, it can raise awareness and influence behaviour, similar to tobacco warning labels. The proposal requires legislative approval from Congress. Social media platforms like Facebook, Instagram, TikTok, and Snapchat have faced longstanding criticism for their negative impact on youth, including shortened attention spans, negative body image, and vulnerability to online predators and bullies.

Murthy’s proposal comes amid increasing efforts by youth advocates and lawmakers to protect children from social media’s harmful effects. US senators grilled CEOs of major social media companies, accusing them of failing to protect young users from dangers such as sexual predators. States are also taking action; New York recently passed legislation requiring parental consent for users under 18 to access ‘addictive’ algorithmic content, and Florida has banned children under 14 from social media platforms while requiring parental consent for 14- and 15-year-olds.

Despite these growing concerns and legislative efforts, major social media companies have not publicly responded to Murthy’s call for warning labels. The push for such labels is part of broader initiatives to mitigate the mental health risks associated with social media use among adolescents, aiming to reduce issues like anxiety and depression linked to these platforms.

FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has identified Royal Tiger as the first official AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. Royal Tiger has used advanced techniques like AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Tips include treating unsolicited calls with scepticism, using call-blocking services, and verifying caller identities by contacting official numbers directly. Not sharing personal information over the phone without confirming a caller’s legitimacy is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.

AI tools struggle with election questions, raising voter confusion concerns

As the ‘year of global elections’ reaches its midpoint, AI chatbots and voice assistants are still struggling with basic election questions, risking voter confusion. The Washington Post found that Amazon’s Alexa often failed to correctly identify Joe Biden as the 2020 US presidential election winner, sometimes providing irrelevant or incorrect information. Similarly, Microsoft’s Copilot and Google’s Gemini refused to answer such questions, redirecting users to search engines instead.

Tech companies are increasingly investing in AI to provide definitive answers rather than lists of websites. This capability is particularly important as false claims about the 2020 election being stolen persist, even after multiple investigations found no fraud. Trump faced federal charges for attempting to overturn the victory of Biden, who won decisively with over 51% of the popular vote.

OpenAI’s ChatGPT and Apple’s Siri, however, correctly answered election questions. Seven months ago, Amazon claimed to have fixed Alexa’s inaccuracies, and recent tests showed Alexa correctly stating that Biden won the 2020 election. Nonetheless, inconsistencies were spotted last week. Microsoft and Google, in turn, said they avoid answering election-related questions to reduce risks and prevent misinformation, a policy they also apply in Europe due to a new law requiring safeguards against it.

Why does it matter?

Tech companies are increasingly tasked with distinguishing fact from fiction as they develop AI-enabled assistants. Recently, Apple announced a partnership with OpenAI to enhance Siri with generative AI capabilities. Concurrently, Amazon is set to launch a new AI version of Alexa as a subscription service in September, although it remains unclear how it will handle election queries. An early prototype struggled with accuracy, and internal doubts about its readiness persist. The new AI assistants from Amazon and Apple aim to merge traditional voice commands with conversational capabilities, but experts warn this integration may pose new challenges.

G7 summit underscores ethical AI, digital inclusion, and global solidarity

The G7 leaders met with counterparts from several countries, including Algeria, Argentina, Brazil, and India, along with heads of major international organisations such as the African Development Bank and the UN, to address global challenges impacting the Global South. They emphasised the need for a unified and equitable international response to these issues, underscoring solidarity and shared responsibility to ensure inclusive solutions.

Pope Francis made an unprecedented appearance at the summit, contributing valuable insights on AI. The leaders discussed AI’s potential to enhance industrial productivity while cautioning against its possible negative impacts on the labour market and society. They stressed the importance of developing AI that is ethical, transparent, and respects human rights, advocating for AI to improve services while protecting workers.

The leaders highlighted the necessity of bridging digital divides and promoting digital inclusion, supporting Italy’s proposal for an AI Hub for Sustainable Development. The hub aims to strengthen local AI ecosystems and advance AI’s role in sustainable development.

They also emphasised the importance of education, lifelong learning, and international mobility to equip workers with the necessary skills to work with AI. Finally, the leaders committed to fostering cooperation with developing and emerging economies to close digital gaps, including the gender digital divide, and achieve broader digital inclusion.

EU set to charge Apple and Meta for non-compliance

Apple and Meta Platforms are set to face charges from the European Commission for failing to comply with the EU’s Digital Markets Act (DMA) before the summer. The DMA aims to curb the dominance of Big Tech by ensuring fair competition and making it easier for users to switch between competing services. Apple and Meta are the Commission’s priority cases, with Apple expected to be charged first, followed by Meta.

Apple’s charges will focus on its App Store policies, which allegedly restrict app developers from informing users about alternative offers and impose new fees. Additionally, a separate investigation into Apple’s Safari web browser is expected to take more time. Meta’s charges will centre on its recent ‘pay or consent’ model for Facebook and Instagram, which requires users to either pay for an ad-free experience or consent to targeted advertising.

Both companies have the opportunity to address the concerns before the final decision, which could result in fines of up to 10% of their global annual turnover. Apple stated in March that it believes its plans comply with the DMA and is engaging constructively with the Commission. Meta and the Commission declined to comment on the ongoing investigations.

Trial scheduled for April in Musk’s X lawsuit against Media Matters

A lawsuit filed by Elon Musk’s company X against Media Matters, scheduled for trial in April 2025, marks the latest development in a contentious legal battle. The US District Court for the Northern District of Texas set this date following allegations from X that Media Matters misrepresented the prevalence of hate speech on social media platforms, specifically targeting content on X’s platform.

Media Matters, a nonprofit watchdog group, has been accused by X of distorting data and exaggerating the likelihood of encountering extremist content. X claims that Media Matters’ methodology does not accurately reflect real user experiences, prompting a legal challenge that has garnered significant attention.

In response to Thursday’s court order, neither X nor Media Matters provided immediate comment. However, Media Matters President Angelo Carusone previously denounced the lawsuit as baseless and an attempt to stifle criticism of Elon Musk. Motions for summary judgment are expected by December, with a ruling potentially resolving the case before it reaches trial.

The lawsuit is part of a broader pattern for Musk, who has faced legal setbacks in similar cases aiming to challenge watchdog groups. Earlier this year, a federal judge in California dismissed a lawsuit by X against the Center for Countering Digital Hate, criticising it as retaliatory rather than protective of platform integrity. The outcome of these legal battles could affect how social media platforms and watchdog organisations navigate issues of content moderation and free speech moving forward.

Particle teams up with Reuters to reinvent news delivery

Particle, a news-reader startup founded by former Twitter engineers, is partnering with publishers to navigate the evolving landscape of news consumption in the AI era. By leveraging AI technology, Particle aims to provide news summaries from various publishers through its app, offering readers a comprehensive understanding of current events from multiple perspectives. That approach seeks to address concerns within the publishing industry about potential revenue loss due to AI-driven news summaries.

In a significant move, Particle has now teamed up with Reuters to explore new business models. The startup has subscribed to the Reuters newswire to enhance its news delivery capabilities. Additionally, Particle secured $10.9 million in Series A funding led by Lightspeed Venture Partners, with investments from media giant Axel Springer. These partnerships and investments underscore Particle’s commitment to collaborating with publishers to address their needs and goals in the rapidly evolving media landscape.

Particle’s co-founder, Sara Beykpour, emphasises the startup’s focus on delivering value to news consumers beyond AI summaries. With a mission to help readers cut through the noise and understand the news faster, Particle offers a personalised news experience while ensuring exposure to diverse viewpoints. By presenting news stories holistically and integrating perspectives from multiple outlets, Particle aims to combat information overload and mitigate media bias.

Why does it matter?

Despite its innovative approach, Particle has yet to finalise its business model. The startup actively engages with publishers to develop a sustainable model that benefits both readers and publishers. Possibilities include revenue sharing, advertising, and more, with input from industry stakeholders shaping the future direction of Particle’s business strategy.

Pope Francis to address AI ethics at G7 summit

Pope Francis is set to make history at the upcoming G7 summit in Italy’s Puglia region by becoming the first pope to address the gathering’s discussions on AI. His participation underscores his commitment to ensuring that AI development aligns with human values and serves the common good. The 87-year-old pontiff recognises the potential of AI for positive change but also emphasises the need for careful regulation to prevent its misuse and safeguard against potential risks.

At the heart of the pope’s message is the call for an ethical framework to guide AI development and usage. Through initiatives like the ‘Rome Call for AI Ethics’, the Vatican seeks to promote transparency, inclusion, responsibility, and impartiality in AI endeavours. Notably, major tech companies like Microsoft, IBM, Cisco Systems, and international organisations have endorsed these principles.

During the G7 summit, Pope Francis is expected to advocate for international cooperation in AI regulation. He emphasises the importance of addressing global inequalities in access to technology and mitigating threats like AI-controlled weapons and the spread of misinformation. His presence at the summit signifies a proactive engagement with contemporary issues, reflecting his vision of a Church actively involved in shaping the world’s future.

The pope’s decision to address AI at the G7 summit follows concerns about the rise of ‘deepfake’ technology, exemplified by manipulated images of him circulating online. He recognises the transformative potential of AI in the 21st century and seeks to ensure its development aligns with human dignity and social justice. Through his participation, Pope Francis aims to contribute to the creation of an ethical and regulatory framework that promotes the responsible use of AI for the benefit of all humanity.

Meta develops AI technology tailored specifically for Europe

Meta Platforms, the owner of Facebook, announced it is developing AI technology tailored specifically for Europe, taking into account the region’s linguistic, geographic, and cultural nuances. The company will train its large language models using publicly shared content from its platforms, including Instagram and Facebook, ensuring that private posts are excluded to maintain user privacy.

Last month, Meta revealed plans to inform Facebook and Instagram users in Europe and the UK about how their public information is utilised to enhance and develop AI technologies. The move aims to increase transparency and reassure users about data privacy.

By focusing on localised AI development, Meta hopes to serve the European market better, reflecting the region’s diverse characteristics in its technology offerings. That effort underscores Meta’s commitment to respecting user privacy while advancing its AI capabilities.