Cities are increasingly turning to AI to enhance waste management and reduce contamination in recycling and composting efforts. In East Lansing, Michigan, where a significant student population often contributes to recycling contamination, city officials have launched a pilot program using AI to address the issue. The initiative includes equipping recycling trucks with AI-powered cameras that identify non-recyclable items and sending personalised postcards to residents to inform them of their mistakes. This approach has reportedly led to a 20% reduction in recycling contamination.
Despite these promising results, privacy concerns have arisen regarding the collection of personal data through these AI systems. Experts warn that the information gathered from residents’ trash could expose sensitive details about their lives, potentially leading to identity theft or misuse by authorities. For instance, a discarded pregnancy test could be used against a woman in states with strict abortion laws. This phenomenon, referred to as ‘mission creep,’ raises alarms about how technologies designed for one purpose can evolve into surveillance tools.
City officials, like East Lansing’s environmental sustainability manager Cliff Walls and Leduc’s environmental manager Michael Hancharyk, acknowledge these privacy issues and are taking steps to mitigate risks. They emphasise working with vendors to ensure data protection and transparency with residents. Hancharyk noted that his city had to comply with Alberta’s privacy regulations before implementing its program.
While acknowledging the importance of improving waste management, cybersecurity experts stress the need for municipalities to carefully weigh the benefits of AI against the potential risks to residents’ privacy. They advocate for thorough assessments of new technologies and their implications, particularly for sensitive populations. As cities continue to innovate in waste management, striking a balance between efficiency and privacy will be crucial.
AI could help reduce the number of missed broken bones during X-ray analysis, according to the National Institute for Health and Care Excellence (NICE). The organisation recommends using four AI tools in urgent care settings in England to assist doctors in detecting fractures. This comes as radiologists and radiographers face high vacancy rates, putting a strain on the system.
NICE estimates that missed fractures account for up to 10% of diagnostic errors in emergency departments in the UK. AI is seen as a solution to this problem, working alongside healthcare professionals to catch mistakes that may occur due to heavy workloads. Experts believe using AI can speed up diagnoses, decrease the need for follow-up appointments, and ultimately ease pressure on hospital staff.
AI will not replace human expertise, as radiologists will still review all X-ray images. However, NICE assures that the technology could offer a more accurate and efficient process without increasing the risk of incorrect diagnoses or unnecessary referrals. The consultation period on this proposed use of AI in fracture detection will conclude on 5 November 2024.
The Canadian Radio-television and Telecommunications Commission (CRTC) is enhancing connectivity and cultural engagement across Canada through its strategic plan, ‘Connecting Canadians through technology and culture.’ The plan prioritises improvements in internet and cellphone services by promoting competition and investment to ensure reliable and affordable access for all Canadians, including those in rural, remote, and Indigenous communities.
Additionally, the CRTC is advancing the amended Broadcasting Act through public consultations that require online streaming services to contribute approximately $200 million annually to the Canadian broadcasting system. The ongoing implementation of the Online News Act reflects the CRTC’s commitment to establishing a robust framework for digital news media, ensuring diverse and reliable sources for Canadians.
The CRTC is also focused on investing in its capabilities to serve Canadians better in the future. The commission aims to enhance its effectiveness in regulating telecommunications and broadcasting services by emphasising modernisation and strategic investments. This proactive approach benefits consumers and positions Canada at the forefront of technological innovation and cultural engagement in a rapidly evolving global landscape.
The US government is nearly finalising rules restricting American investments in certain advanced technologies in China, particularly AI, semiconductors, microelectronics, and quantum computing. These regulations are designed to prevent US know-how from contributing to China’s military capabilities following an executive order signed by President Joe Biden in August 2023. The rules are under review by the Office of Management and Budget and are expected to be released soon, possibly before the upcoming US presidential election on 5 November.
The new regulations will require US investors to notify the Treasury Department about specific investments in sensitive technologies. While the rules will ban certain investments outright, they also include several exceptions. For example, some publicly traded securities and certain types of debt financing will not fall under the restrictions. However, US companies and individuals will be responsible for determining which of their transactions are subject to the new limits.
Earlier drafts of the rules, published in June, gave the public a chance to provide feedback and proposed banning AI investments that involved systems trained with substantial computing power. The final regulations are expected to provide additional clarity, particularly concerning the thresholds for restricted transactions in AI and the role of limited partners in such investments.
Experts like Laura Black, a former Treasury official, anticipate that the regulations will take effect at least 30 days after release. These measures reflect the US government’s growing focus on curbing China’s access to critical technologies while balancing the need for certain economic exceptions in mutual funds and syndicated debt financing sectors.
The upcoming release will be a significant step in the Biden administration’s broader effort to safeguard US technological advantage and national security interests in the face of growing competition from China.
AI-driven tools are entering wholesale banking, with Intellect Global Transaction Banking introducing the ‘eMACH.ai Cloud’ for the sector. The platform provides banks with a comprehensive suite of services to manage their corporate clients’ complex needs across various industries, supporting both operational efficiency and business growth.
The eMACH.ai Cloud aims to streamline operations by consolidating wholesale banking requirements into one platform, reducing reliance on multiple systems. It offers scalable solutions tailored to different sectors, allowing banks to modernise their operations and meet regulatory requirements.
CEO Manish Maakan highlighted that the platform helps banks reduce costs, unlock new revenue streams, and innovate business models. He stressed the importance of agility in today’s banking landscape, explaining that eMACH.ai empowers banks to deliver greater value to clients while keeping pace with industry changes.
From liquidity management to sustainable finance initiatives, the platform offers tools to address evolving compliance demands and environmental goals. Its integrated design ensures banks can meet both financial and ESG objectives more effectively.
Samsung is taking its commitment to security up a notch by expanding its blockchain technology to cover a wider range of AI-powered home appliances. The South Korean tech giant announced that its Knox Matrix framework, originally designed for mobile devices and televisions, will now protect home devices using a ‘Trust Chain.’ This private blockchain system enables connected devices to monitor each other for potential security issues, keeping users informed in case of any threats.
In addition to blockchain-based security, Samsung is introducing ‘Cross Platform’ technology, ensuring consistent protection across devices, regardless of the operating system. The company also aims to improve privacy with its ‘Credential Sync,’ which encrypts and synchronises user data for enhanced safety.
Samsung expects to roll out these new features next year, alongside biometric authentication that will allow users to log into apps with fingerprints instead of passwords. The move builds on the company’s previous blockchain ventures, including its Samsung Blockchain Wallet and Blockchain Keystore.
A portrait of Alan Turing created by Ai-Da, a humanoid robot artist, will be auctioned at Sotheby’s London in a pioneering art sale. Ai-Da, equipped with AI algorithms, cameras, and bionic hands, is among the world’s most advanced robots and is designed to resemble a woman.
The 2.2-metre-high painting, titled ‘AI God’, portrays Turing, a mathematician and WWII codebreaker, and highlights concerns about the role of AI. Its muted colours and fragmented facial planes reflect the challenges Turing warned about in managing AI.
Sotheby’s online auction, running from 31 October to 7 November, will explore the intersection of art and technology. The artwork is estimated to sell for £100,000–£150,000. Ai-Da’s previous work includes painting Glastonbury Festival performers like Billie Eilish and Paul McCartney.
Ai-Da’s creator, Aidan Meller, collaborated with AI experts from Oxford and Birmingham to develop the robot. Meller noted that Ai-Da’s haunting artworks continue to raise questions about the future of AI and the global race to control its potential.
Two independent candidates participated in an online debate on Thursday, engaging with an AI-generated version of incumbent congressman Don Beyer. The digital avatar, dubbed ‘DonBot’, was created using Beyer’s website and public materials to simulate his responses in the event, streamed on YouTube and Rumble.
Beyer, a Democrat seeking re-election, opted not to join the debate in person. His AI representation featured a robotic voice reading answers without imitating his tone. Independent challengers Bentley Hensel and David Kennedy appeared on camera, while the Republican candidate Jerry Torres did not participate. Viewership remained low, peaking at fewer than 20 viewers, and parts of DonBot’s responses were inaudible.
Hensel explained that the AI was programmed to provide unbiased answers using available public information. The debate tackled policy areas such as healthcare, gun control, and aid to Israel. When asked why voters should re-elect Beyer, the AI stated, ‘I believe that I can make a real difference in the lives of the people of Virginia’s 8th district.’
Although the event saw minimal impact, observers suggest the use of AI in politics could become more prevalent. The reliance on such technology raises concerns about transparency, especially if no regulations are introduced to guide its use in future elections.
US Special Operations Command (SOCOM) is reportedly seeking deepfake technology to create fake online personas. These artificial avatars would operate on social media and online platforms, featuring realistic expressions and high-quality images akin to government IDs. The command also seeks technologies to produce convincing facial and background videos, including ‘selfie videos’, to avoid detection by social media algorithms.
US state agencies have previously announced frameworks to combat foreign information manipulation, citing national security threats from these technologies. Despite recognising the global dangers posed by deepfakes, SOCOM’s initiative underscores a willingness to engage with the technology for potential military advantage.
Experts have expressed concern over the ethical implications, warning that deepfakes are inherently deceptive, with no legitimate applications beyond deceit, and that official use could encourage further misuse worldwide. Such practices also risk diminishing public trust in government communications, a problem exacerbated by the perceived hypocrisy of deploying the very technology the US condemns.
Why does it matter?
This plan reflects an ongoing interest in leveraging digital manipulation for military purposes, despite previous incidents where platforms like Meta dismantled similar US-linked networks. It further shows a contradiction in the US’s stance on deepfake use, as it simultaneously condemns similar actions by countries like Russia and China.
The authors make several points relevant to global AI discussions. First, as AI becomes integral to the global economy, they warn of the looming threat of concentrated corporate control, which risks stifling innovation, compromising consumer privacy, and undermining democratic values. To counter this, the authors advocate for a diverse AI market that includes public, private, and non-profit stakeholders, ensuring the technology’s benefits are widely distributed.
‘In “Stopping Big Tech from Becoming Big AI” we lay out a series of detailed, practical measures to check rising market concentration and keep AI open for all,’ the authors wrote.
Second, the report highlights monopolistic risks: tactics such as exclusive partnerships and control over computing power allow dominant firms to consolidate power, restricting competition and innovation. Though often invisible to consumers, these practices could centralise AI development and inhibit market diversity. As an action point, the authors call on governments to act swiftly using existing regulatory tools, such as blocking mergers and enforcing ex-ante competition policies, to dismantle these barriers and impose fair access rules on essential AI resources.
Finally, the report stresses international cooperation, particularly the importance of recognising the global nature of AI development. The authors warn against repeating past mistakes of digital market dominance and emphasise the need for a unified approach to AI regulation. By fostering competition, the report asserts, AI can deliver broader societal benefits, prioritising innovation and privacy over profit maximisation and surveillance.
Why does it matter?
The global community sees the current moment as a pivotal chance to shape AI’s future for the collective good, urging immediate regulatory intervention. Echoing this approach, this report aims to ensure that AI remains a competitive field characterised by transparency and fairness, safeguarding a digital economy that benefits all stakeholders equally.