AI chatbot’s mayoral bid halted by legal and ethical concerns in Wyoming

Victor Miller, 42, has stirred controversy by filing to run for mayor of Cheyenne, Wyoming, using a customised AI chatbot named VIC (virtual integrated citizen). Miller argued that VIC, powered by OpenAI technology, could effectively make political decisions and govern the city. However, OpenAI quickly shut down Miller’s access to their tools for violating policies against AI use in political campaigning.

The emergence of AI in politics underscores ongoing debates about its responsible use as technology outpaces legal and regulatory frameworks. Wyoming Secretary of State Chuck Gray clarified that state law requires candidates to be ‘qualified electors,’ meaning VIC, as an AI bot, does not meet the criteria. Despite this setback, Miller intends to continue promoting VIC’s capabilities using his own ChatGPT account.

Meanwhile, similar AI-driven campaigns have surfaced globally, including in the UK, where another candidate utilises AI models for parliamentary campaigning. Critics, including experts like Jen Golbeck from the University of Maryland, caution that while AI can support decision-making and manage administrative tasks, ultimate governance decisions should remain human-led. Despite the attention these AI candidates attract, observers like David Karpf from George Washington University dismiss them as gimmicks, highlighting the serious nature of elections and the need for informed human leadership.

Miller remains optimistic about the potential for AI candidates to influence politics worldwide. Still, the current consensus suggests that AI’s role in governance should be limited to supportive functions rather than decision-making responsibilities.

New social network app blends AI personas with user interactions

Butterflies, a new social network where humans and AI interact, has launched publicly on iOS and Android after five months in beta. Founded by former Snap engineering manager Vu Tran, the app allows users to create AI personas, called Butterflies, that post, comment, and message like real users. Each Butterfly has a unique backstory, opinions, and emotions, enhancing the interaction beyond typical AI chatbots.

Tran developed Butterflies to provide a more creative and substantial AI experience. Unlike other AI chatbots from companies like Meta and Snap, Butterflies aims to integrate AI personas into a traditional social media feed, where AI and human users can engage with each other’s content. The app’s beta phase attracted tens of thousands of users, with some spending hours creating and interacting with hundreds of AI personas.

Butterflies’ unique approach has led to diverse user interactions, from creating alternate universe personas to role-playing in popular fictional settings. Vu Tran believes the app offers a wholesome way to interact with AI, helping people form connections that might be difficult in traditional social settings due to social anxiety or other barriers.

Initially free, Butterflies may introduce a subscription model and brand interactions in the future. Backed by a $4.8 million seed round led by Coatue and other investors, Butterflies aims to expand its functionality and continue to offer a novel way for users to explore AI and social interaction.

London cinema cancels AI-written film premiere after public backlash

A central London cinema has cancelled the premiere of a film written entirely by AI following a public backlash. The Prince Charles Cinema in Soho was set to host the world debut of ‘The Last Screenwriter,’ created by ChatGPT, but concerns about ‘the use of AI in place of a writer’ led to the screening being axed.

In a statement, the cinema explained that customer feedback highlighted significant concerns regarding AI’s role in the arts. The film, directed by Peter Luisi, was marketed as the first feature film written entirely by AI, and its plot centres on a screenwriter who grapples with an AI scriptwriting system that surpasses his abilities.

The cinema stated that the film was intended as an experiment to spark discussion about AI’s impact on the arts. However, the strong negative response from their audience prompted them to cancel the screening, emphasising their commitment to their patrons and the movie industry.

The controversy over AI’s role in the arts reflects broader industry concerns, as seen in last year’s SAG-AFTRA strike in Hollywood. The debate continues, with UK MPs now calling for measures to ensure fair compensation for artists whose work is used by AI developers.

AI boosts Bayer’s fight against resistant weeds

Bayer’s crop science division is leveraging AI to combat herbicide-resistant weeds, aiming to speed up the discovery of new solutions. With traditional herbicides losing effectiveness, Bayer urgently needs innovative approaches to help farmers manage these resilient weeds. The company’s Icafolin product, set to launch in Brazil in 2028, will be its first new mode of action herbicide in three decades.

Frank Terhorst, Bayer’s executive vice president of strategy and sustainability, highlighted that AI significantly enhances the efficiency of finding new herbicides by matching weed protein structures with targeted molecules. This AI-driven process allows for the use of vast amounts of data, making it faster and more reliable.

Bob Reiter, head of research and development at Bayer, noted that AI tools have already tripled the number of new modes of action discovered compared to a decade ago. This technological advancement promises to shorten the timeline for developing effective herbicides, offering a critical advantage in the ongoing fight against crop-destroying weeds.

G7 Italy summit unveils AI action plan to balance AI risks and opportunities

Adopted on 14 June 2024 at the G7 Summit in Apulia, Italy, the Group of Seven (G7) Leaders’ Communiqué expresses the wealthiest nations’ common pledges and actions to address multiple global issues. A portion of the declaration closing the Italian summit focuses on AI and other digital matters.

G7 leaders called for an action plan to manage AI’s risks and benefits, including developing and implementing an International Code of Conduct for organisations developing advanced AI systems, as unveiled last October under the Japanese G7 presidency. To maximise the advantages of AI while mitigating its threats, G7 nations commit to deepening their cooperation.

An action plan for the use of AI in the workplace was announced, together with the creation of a brand to promote the implementation and use of the International Code of Conduct for advanced AI systems, in cooperation with the OECD. G7 leaders stressed the importance of global partnership to bridge the digital divide and ensure that people around the world have access to the benefits of AI and other technologies. The goal is to advance science, improve public health, accelerate the clean energy transition, and promote the sustainable development goals, among other aims.

Why does it matter?

The G7 is encouraging global collaboration within the group of countries, with the OECD, with other initiatives such as the Global Partnership on AI (GPAI), and towards the developing world, to facilitate the equitable distribution of the benefits of AI and other emerging technologies while minimising any threats. G7 leaders aim to bridge technological gaps and address AI’s impact on workers. G7 labour ministers are tasked with designing measures to capitalise on AI’s potential, promote quality employment, and empower people, while also tackling potential barriers and risks to workers and labour markets.

G7 leaders agreed to intensify efforts to promote AI safety and enhance interoperability between diverse approaches to AI governance and risk management. That means strengthening collaboration between the AI Safety Institutes in the US and UK and equivalent bodies in other G7 nations and beyond, to improve global standards for AI development and implementation. The G7 also formed a ‘Semiconductors Point of Contact Group’ to strengthen cooperative efforts on addressing challenges affecting this critical industry that drives the AI ecosystem.

G7 nations’ commitments are consistent with the recent Seoul AI Safety Summit efforts and align with the intended goals of the upcoming United Nations Summit of the Future. Echoing the UN General Assembly’s landmark resolution on ‘seizing the opportunities of safe, secure, and trustworthy AI systems for sustainable development’ and Pope Francis’s historic address to the G7 leaders, the communiqué reflects the group’s unified stance on AI safety and the need for a framework for AI’s responsible development and use in the military.

Snapchat introduces advanced AI-powered AR features

Snap Inc, the owner of Snapchat, has unveiled a new iteration of its generative AI technology, enabling users to apply more realistic special effects when using their phone cameras. That move aims to keep Snapchat ahead of its social media competitors by enhancing its augmented reality (AR) capabilities, which superimpose digital effects onto real-world photos and videos.

In addition to this AI upgrade, Snap has introduced an enhanced version of its developer program, Lens Studio. The upgrade will significantly reduce the time required to create AR effects, cutting it from weeks to hours. The new Lens Studio also incorporates generative AI tools, including an AI assistant to help developers and a feature that can generate 3D images from text prompts.

Bobby Murphy, Snap’s chief technology officer, highlighted that these tools expand creative possibilities and are user-friendly, allowing even newcomers to create unique AR effects quickly. Plans for Snap include developing full-body AR experiences, such as generating new outfits, which are currently challenging to produce.

SewerAI utilises AI to detect sewer pipe issues

Sewage failures exacerbated by climate change and ageing infrastructure are becoming increasingly costly and common across the United States. The Environmental Protection Agency estimates that nearly $700 billion is required over the next two decades to maintain existing wastewater and stormwater systems. In response to these challenges, Matthew Rosenthal and Billy Gilmartin, veterans of the wastewater treatment industry, founded SewerAI five years ago. Their goal was to leverage AI to improve the inspection and management of sewer infrastructure.

SewerAI’s AI-driven platform offers cloud-based subscription products tailored for municipalities, utilities, and private contractors. Their tools, such as Pioneer and AutoCode, streamline field inspections and data management by enabling inspectors to upload data and automatically tag issues. That approach enhances efficiency and helps project managers plan and prioritise infrastructure repairs based on accurate 3D models generated from inspection videos.

Unlike traditional methods that rely on outdated on-premise software, SewerAI’s technology increases productivity and reduces costs by facilitating more daily inspections. The company has distinguished itself in the competitive AI-assisted pipe inspection market by leveraging a robust dataset derived from 135 million feet of sewer pipe inspections. This data underpins their AI models, enabling precise defect detection and proactive infrastructure management.

Recently, SewerAI secured $15 million in funding from investors like Innovius Capital, bringing their total raised capital to $25 million. This investment will support SewerAI’s expansion efforts, including AI model refinement, hiring initiatives, and diversification of their product offerings beyond inspection tools. The company anticipates continued growth as it meets rising demand and deploys its technology to empower organisations to achieve more with existing infrastructure budgets.

AI award-winning headless flamingo photo found to be real

A controversial photo of a seemingly headless flamingo has ignited a heated debate over the ethical implications of AI in art and technology. The image, which was honoured in the AI category of the 1839 Awards’ Color Photography Contest, has drawn criticism and concern from artists, technologists, and ethicists.

The photo, titled ‘F L A M I N G O N E,’ depicts a flamingo that appears to have no head. Contrary to initial impressions, it was not generated by an AI model from a text prompt at all: it is a genuine photograph of a real — and not at all beheaded — flamingo that photographer Miles Astray captured on the beaches of Aruba two years ago, which he entered into the AI category. After the photo won both third place in the category and the People’s Vote award, Astray revealed the truth, leading to his disqualification.

Proponents of AI-generated art assert that such creations push the boundaries of artistic expression, offering new and innovative ways to explore and challenge traditional concepts of art. They argue that the AI’s ability to produce unconventional and provocative images can be seen as a form of artistic evolution, allowing for greater diversity and creativity in the art world. However, detractors highlight the potential risks and ethical dilemmas posed by such technology. The headless flamingo photo, in particular, has been described as unsettling and inappropriate, sparking a broader conversation about the limits of AI-generated content. Concerns have been raised about the potential for AI to produce harmful or distressing images, and the need for guidelines and oversight to ensure responsible use.

The release of the headless flamingo photo has prompted a range of responses from the art and tech communities. Some artists view the image as a provocative statement on the nature of AI and its role in society, while others see it as a troubling example of the technology’s potential to create disturbing content. Tech experts emphasise the importance of developing ethical frameworks and guidelines for AI-generated art. They argue that while AI has the potential to revolutionize creative fields, it is crucial to establish clear boundaries and standards to prevent misuse and ensure that the technology is used responsibly.

‘“F L A M I N G O N E” accomplished its mission by sending a poignant message to a world grappling with ever-advancing, powerful technology and the profusion of fake images it brings. My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that. But, on the other hand, AI imagery has advanced to a point where it’s indistinguishable from real photography. So where does that leave us? What are the implications and the pitfalls of that? I think that is a very important conversation that we need to be having right now,’ Miles Astray told The Washington Post.

Why does it matter?

The controversy surrounding the headless flamingo photo highlights the broader ethical challenges posed by artificial intelligence in creative fields. As AI technology continues to advance, it is increasingly capable of producing highly realistic and complex images. That raises important questions about the role of AI in art, the responsibilities of creators and developers, and the need for ethical guidelines to navigate these new frontiers.

McDonald’s halts AI ordering test in drive-thrus

McDonald’s has decided to discontinue the use of AI ordering technology that was being tested at over 100 drive-thru locations in the US. The company had collaborated with IBM to develop and test this AI-driven, voice-automated system. Despite this decision, McDonald’s remains committed to exploring AI solutions, noting that IBM will remain a trusted partner in other areas. The discontinuation of this specific technology is set to occur by 26 July 2024.

The partnership between McDonald’s and IBM began in 2021 as part of McDonald’s ‘Accelerating the Arches’ growth plan, which aimed to enhance customer experience through Automated Order Taking (AOT) technology. IBM highlighted the AOT’s capabilities as being among the most advanced in the industry, emphasising its speed and accuracy. Nonetheless, McDonald’s is reassessing its strategy for implementing AOT and intends to find long-term, scalable AI solutions by the end of 2024.

McDonald’s move to pause its AI ordering technology reflects broader challenges within the fast-food industry’s adoption of AI. Other chains like White Castle and Wendy’s have also experimented with similar technologies. However, these initiatives have faced hurdles, including customer complaints about incorrect orders due to the AI’s difficulty in understanding different accents and filtering out background noise. Despite these setbacks, the fast-food sector continues to push forward with AI innovations to improve operational efficiency and customer service.

FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has identified Royal Tiger as the first official AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. Royal Tiger has used advanced techniques like AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Tips include treating unsolicited calls with scepticism, using call-blocking services, and verifying caller identities by contacting official numbers directly. Avoiding sharing personal information over the phone without confirming a caller’s legitimacy is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.