London cinema cancels AI-written film premiere after public backlash

A central London cinema has cancelled the premiere of a film written entirely by AI following a public backlash. The Prince Charles Cinema in Soho was set to host the world debut of ‘The Last Screenwriter,’ created by ChatGPT, but concerns about ‘the use of AI in place of a writer’ led to the screening being axed.

In a statement, the cinema explained that customer feedback highlighted significant concerns regarding AI’s role in the arts. The film, directed by Peter Luisi, was marketed as the first feature film written entirely by AI, and its plot centres on a screenwriter who grapples with an AI scriptwriting system that surpasses his abilities.

The cinema stated that the film was intended as an experiment to spark discussion about AI’s impact on the arts. However, the strong negative response from their audience prompted them to cancel the screening, emphasising their commitment to their patrons and the movie industry.

The controversy over AI’s role in the arts reflects broader industry concerns, as seen in last year’s Sag-Aftra strike in Hollywood. The debate continues, with UK MPs now calling for measures to ensure fair compensation for artists whose work is used by AI developers.

AI boosts Bayer’s fight against resistant weeds

Bayer’s crop science division is leveraging AI to combat herbicide-resistant weeds, aiming to speed up the discovery of new solutions. With traditional herbicides losing effectiveness, Bayer urgently needs innovative approaches to help farmers manage these resilient weeds. The company’s Icafolin product, set to launch in Brazil in 2028, will be its first new mode of action herbicide in three decades.

Frank Terhorst, Bayer’s executive vice president of strategy and sustainability, highlighted that AI significantly enhances the efficiency of finding new herbicides by matching weed protein structures with targeted molecules. Because the process can draw on vast amounts of data, it makes discovery both faster and more reliable.

Bob Reiter, head of research and development at Bayer, noted that AI tools have already tripled the number of new modes of action discovered compared to a decade ago. This advance promises to shorten the timeline for developing effective herbicides, offering a critical advantage in the ongoing fight against crop-destroying weeds.

G7 Italy summit unveils AI action plan to balance AI risks and opportunities

Adopted on 14 June 2024 at the G7 Summit in Apulia, Italy, the Group of Seven (G7) Leaders’ Communiqué expresses the wealthiest nations’ common pledges and actions to address multiple global issues. A portion of the declaration closing the Italian summit focuses on AI and other digital matters.

G7 leaders called for an action plan to manage AI’s risks and benefits, including developing and implementing an International Code of Conduct for organisations developing advanced AI systems, as unveiled last October under the Japanese G7 presidency. To maximise the advantages of AI while mitigating its threats, G7 nations commit to deepening their cooperation.

An action plan for the use of AI in the workplace was announced, together with the creation of a brand to promote the implementation and use of the International Code of Conduct for advanced AI systems, in cooperation with the OECD. G7 leaders stressed the importance of global partnership to bridge the digital divide and ensure that people around the world have access to the benefits of AI and other technologies. The goal is to advance science, improve public health, accelerate the clean energy transition, and promote the sustainable development goals.

Why does it matter?

The G7 is encouraging global collaboration within the group of countries, with the OECD, with other initiatives such as the Global Partnership on AI (GPAI), and towards the developing world, to facilitate the equitable distribution of the benefits of AI and other emerging technologies while minimising any threats. G7 leaders aim to bridge technological gaps and address AI’s impact on workers. G7 labour ministers are tasked with designing measures to capitalise on AI’s potential, promote quality employment, and empower people, while also tackling potential barriers and risks to workers and labour markets.

G7 leaders agreed to intensify efforts to promote AI safety and enhance interoperability between diverse approaches to AI governance and risk management. That means strengthening collaboration between AI Safety Institutes in the US, UK, and equivalent bodies in other G7 nations and beyond, to improve global standards for AI development and implementation. The G7 also formed a ‘Semiconductors Point of Contact Group’ to strengthen cooperative efforts on addressing challenges affecting this critical industry that drives the AI ecosystem.

G7 nations’ commitments are consistent with the recent Seoul AI safety summit efforts and align with the intended goals of the upcoming United Nations Summit of the Future. Echoing the UN General Assembly’s landmark resolution on ‘seizing the opportunities of safe, secure, and trustworthy AI systems for sustainable development’ and Pope Francis’s historic address to the G7 leaders, the communiqué reflects the group’s unified stance on AI safety and the need for a framework for AI’s responsible development and use in the military.

Snapchat introduces advanced AI-powered AR features

Snap Inc, the owner of Snapchat, has unveiled a new iteration of its generative AI technology, enabling users to apply more realistic special effects when using their phone cameras. That move aims to keep Snapchat ahead of its social media competitors by enhancing its augmented reality (AR) capabilities, which superimpose digital effects onto real-world photos and videos.

In addition to this AI upgrade, Snap has introduced an enhanced version of its developer program, Lens Studio. The upgrade will significantly reduce the time required to create AR effects, cutting it from weeks to hours. The new Lens Studio also incorporates generative AI tools, including an AI assistant to help developers and a feature that can generate 3D images from text prompts.

Bobby Murphy, Snap’s chief technology officer, highlighted that these tools expand creative possibilities and are user-friendly, allowing even newcomers to create unique AR effects quickly. Plans for Snap include developing full-body AR experiences, such as generating new outfits, which are currently challenging to produce.

SewerAI utilises AI to detect sewer pipe issues

Sewage failures exacerbated by climate change and ageing infrastructure are becoming increasingly costly and common across the United States. The Environmental Protection Agency estimates that nearly $700 billion is required over the next two decades to maintain existing wastewater and stormwater systems. In response to these challenges, Matthew Rosenthal and Billy Gilmartin, veterans of the wastewater treatment industry, founded SewerAI five years ago. Their goal was to leverage AI to improve the inspection and management of sewer infrastructure.

SewerAI’s AI-driven platform offers cloud-based subscription products tailored for municipalities, utilities, and private contractors. Their tools, such as Pioneer and AutoCode, streamline field inspections and data management by enabling inspectors to upload data and automatically tag issues. That approach enhances efficiency and helps project managers plan and prioritise infrastructure repairs based on accurate 3D models generated from inspection videos.

Unlike traditional methods that rely on outdated on-premise software, SewerAI’s technology increases productivity and reduces costs by facilitating more daily inspections. The company has distinguished itself in the competitive AI-assisted pipe inspection market by leveraging a robust dataset derived from 135 million feet of sewer pipe inspections. This data underpins their AI models, enabling precise defect detection and proactive infrastructure management.

Recently, SewerAI secured $15 million in funding from investors like Innovius Capital, bringing their total raised capital to $25 million. This investment will support SewerAI’s expansion efforts, including AI model refinement, hiring initiatives, and diversification of their product offerings beyond inspection tools. The company anticipates continued growth as it meets rising demand and deploys its technology to empower organisations to achieve more with existing infrastructure budgets.

AI award-winning headless flamingo photo found to be real

A controversial photo of a seemingly headless flamingo, honoured in the AI category of the 1839 Awards’ Color Photography Contest before being revealed as a real photograph, has ignited a heated debate over the ethical implications of AI in art and technology. The image has drawn criticism and concern from various sectors, including artists, technologists, and ethicists.

The photo, titled ‘F L A M I N G O N E,’ depicts a flamingo that appears to have no head. Contrary to initial impressions, it was not generated by an AI model from a text prompt at all: photographer Miles Astray submitted a genuine image of a real — and not at all beheaded — flamingo that he captured on the beaches of Aruba two years ago. After the photo won both third place in the category and the People’s Vote award, Astray revealed the truth, leading to his disqualification.

Proponents of AI-generated art assert that such creations push the boundaries of artistic expression, offering new and innovative ways to explore and challenge traditional concepts of art. They argue that the AI’s ability to produce unconventional and provocative images can be seen as a form of artistic evolution, allowing for greater diversity and creativity in the art world.

However, detractors highlight the potential risks and ethical dilemmas posed by such technology. The headless flamingo photo, in particular, has been described as unsettling and inappropriate, sparking a broader conversation about the limits of AI-generated content. Concerns have been raised about the potential for AI to produce harmful or distressing images, and the need for guidelines and oversight to ensure responsible use.

The release of the headless flamingo photo has prompted a range of responses from the art and tech communities. Some artists view the image as a provocative statement on the nature of AI and its role in society, while others see it as a troubling example of the technology’s potential to create disturbing content. Tech experts emphasise the importance of developing ethical frameworks and guidelines for AI-generated art. They argue that while AI has the potential to revolutionize creative fields, it is crucial to establish clear boundaries and standards to prevent misuse and ensure that the technology is used responsibly.

“‘F L A M I N G O N E’ accomplished its mission by sending a poignant message to a world grappling with ever-advancing, powerful technology and the profusion of fake images it brings. My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that. But, on the other hand, AI imagery has advanced to a point where it’s indistinguishable from real photography. So where does that leave us? What are the implications and the pitfalls of that? I think that is a very important conversation that we need to be having right now,” Miles Astray told The Washington Post.

Why does it matter?

The controversy surrounding the AI-generated headless flamingo photo highlights the broader ethical challenges posed by artificial intelligence in creative fields. As AI technology continues to advance, it is increasingly capable of producing highly realistic and complex images. That raises important questions about the role of AI in art, the responsibilities of creators and developers, and the need for ethical guidelines to navigate these new frontiers.

McDonald’s halts AI ordering test in drive-thrus

McDonald’s has decided to discontinue the use of AI ordering technology that was being tested at over 100 drive-thru locations in the US. The company had collaborated with IBM to develop and test this AI-driven, voice-automated system. Despite this decision, McDonald’s remains committed to exploring AI solutions, noting that IBM will remain a trusted partner in other areas. The discontinuation of this specific technology is set to occur by 26 July 2024.

The partnership between McDonald’s and IBM began in 2021 as part of McDonald’s ‘Accelerating the Arches’ growth plan, which aimed to enhance customer experience through Automated Order Taking (AOT) technology. IBM highlighted the AOT’s capabilities as being among the most advanced in the industry, emphasising its speed and accuracy. Nonetheless, McDonald’s is reassessing its strategy for implementing AOT and intends to find long-term, scalable AI solutions by the end of 2024.

McDonald’s move to pause its AI ordering technology reflects broader challenges within the fast-food industry’s adoption of AI. Other chains like White Castle and Wendy’s have also experimented with similar technologies. However, these initiatives have faced hurdles, including customer complaints about incorrect orders due to the AI’s difficulty in understanding different accents and filtering out background noise. Despite these setbacks, the fast-food sector continues to push forward with AI innovations to improve operational efficiency and customer service.

FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has identified Royal Tiger as the first official AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. Royal Tiger has used advanced techniques like AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Tips include being sceptical of unsolicited calls, using call-blocking services, and verifying caller identities by contacting official numbers directly. Avoiding sharing personal information over the phone without confirming a caller’s legitimacy is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.

AI tools struggle with election questions, raising voter confusion concerns

As the ‘year of global elections’ reaches its midpoint, AI chatbots and voice assistants are still struggling with basic election questions, risking voter confusion. The Washington Post found that Amazon’s Alexa often failed to correctly identify Joe Biden as the 2020 US presidential election winner, sometimes providing irrelevant or incorrect information. Similarly, Microsoft’s Copilot and Google’s Gemini refused to answer such questions, redirecting users to search engines instead.

Tech companies are increasingly investing in AI to provide definitive answers rather than lists of websites. This feature is particularly important as false claims about the 2020 election being stolen persist, even after multiple investigations found no fraud. Trump faced federal charges for attempting to overturn the victory of Biden, who won decisively with over 51% of the popular vote.

OpenAI’s ChatGPT and Apple’s Siri, however, correctly answered election questions. Seven months ago, Amazon claimed to have fixed Alexa’s inaccuracies, and recent tests showed Alexa correctly stating Biden won the 2020 election. Nonetheless, inconsistencies were spotted last week. Microsoft and Google, in turn, said they avoid answering election-related questions to reduce risks and prevent misinformation, a policy also applied in Europe due to a new law requiring safeguards against misinformation.

Why does it matter?

Tech companies are increasingly tasked with distinguishing fact from fiction as they develop AI-enabled assistants. Recently, Apple announced a partnership with OpenAI to enhance Siri with generative AI capabilities. Concurrently, Amazon is set to launch a new AI version of Alexa as a subscription service in September, although it remains unclear how it will handle election queries. An early prototype struggled with accuracy, and internal doubts about its readiness persist. The new AI assistants from Amazon and Apple aim to merge traditional voice commands with conversational capabilities, but experts warn this integration may pose new challenges.

G7 summit underscores ethical AI, digital inclusion, and global solidarity

The G7 leaders met with counterparts from several countries, including Algeria, Argentina, Brazil, and India, along with heads of major international organisations such as the African Development Bank and the UN, to address global challenges impacting the Global South. They emphasised the need for a unified and equitable international response to these issues, underscoring solidarity and shared responsibility to ensure inclusive solutions.

Pope Francis made an unprecedented appearance at the summit, contributing valuable insights on AI. The leaders discussed AI’s potential to enhance industrial productivity while cautioning against its possible negative impacts on the labour market and society. They stressed the importance of developing AI that is ethical, transparent, and respects human rights, advocating for AI to improve services while protecting workers.

The leaders highlighted the necessity of bridging digital divides and promoting digital inclusion, supporting Italy’s proposal for an AI Hub for Sustainable Development. The hub aims to strengthen local AI ecosystems and advance AI’s role in sustainable development.

They also emphasised the importance of education, lifelong learning, and international mobility to equip workers with the necessary skills to work with AI. Finally, the leaders committed to fostering cooperation with developing and emerging economies to close digital gaps, including the gender digital divide, and achieve broader digital inclusion.