AI cameras to catch road offenders in North Lincolnshire and East Yorkshire

A new mobile camera unit will be deployed in the UK’s East Yorkshire and North Lincolnshire to catch drivers using mobile phones and those not wearing seat belts. In partnership with National Highways, Safer Roads Humber will operate the AI-equipped camera for a week starting Monday, 10 June. The AI technology identifies potential lawbreakers, with images reviewed by an officer to confirm violations before prosecution.

Offenders face significant penalties: a £200 fine and six points on their licence for using a handheld phone while driving, and a £100 fine for not wearing a seat belt. Drivers are also responsible for ensuring passengers under 14 are belted in. Sometimes, offenders may be offered an educational course instead of prosecution.

Ian Robertson from Safer Roads Humber highlighted the enhanced enforcement capabilities of this new equipment. While their current safety camera vans already detect such offences, the advanced AI technology of the new unit provides added capacity to improve road safety.

New ‘future self’ chatbot revolutionises life planning

Massachusetts Institute of Technology (MIT) researchers have developed an AI chatbot that engages users in conversations with their future selves, aiming to promote thoughtful decision-making on health, finances, and career paths. By simulating an individual’s older self and drawing from synthetic memories, ‘Future You’ encourages users to consider their long-term aspirations and make informed choices in the present.

Creating the chatbot involves two main steps. First, users answer a series of questions about themselves; they then upload a portrait image, which is digitally aged to depict their future self. The chatbot draws on this data to generate coherent responses, leveraging OpenAI’s GPT-3.5 language model to facilitate meaningful exchanges. The resulting persona can give advice and reflect on its simulated life to provide insights into the user’s future.

Why is this chatbot useful?

Users are told the ‘future self’ is not a prediction but rather a potential future self based on the information they provided. They are encouraged to explore different futures by changing their answers to the questionnaire. After several conversations with his ‘future self’, MIT Media Lab’s Pat Pataranutaporn reported his most profound discussion was when the chatbot reminded him to spend time with his parents, as they would not be around forever.

According to a preprint scientific paper, the MIT project has shown promise in alleviating anxiety and fostering a stronger connection to one’s future self, empowering individuals to pursue specific goals, adopt healthier habits, and plan for their financial future. Behavioural science experts commend the project’s innovative approach, emphasising the potential of such tools to influence beneficial decision-making by making the future self more salient and relevant to the present.

Berlin set to launch world’s first cyber brothel

Later this month, Berlin will see the launch of the world’s first cyber brothel, offering customers the opportunity to book time with AI sex dolls. The new service, spearheaded by Cybrothel founder Philipp Fussenegger, allows users to interact verbally and physically with the AI dolls, catering to a growing demand for more interactive AI experiences in the adult entertainment industry.

Generative AI is increasingly being integrated into the adult entertainment sector. A report by SplitMetrics shows that AI companion apps have been downloaded 225 million times on the Google Play Store, indicating a lucrative market. These AI companions often charge fees and collect user data, which is frequently shared with third parties, raising privacy concerns.

Experts have voiced significant concerns about the potential harms of merging AI with adult entertainment. Issues include the reinforcement of gender stereotypes, addiction risks, and privacy violations. AI chatbots, according to Mozilla’s privacy researcher Misha Rykov, target lonely individuals and can exacerbate mental health challenges. Mozilla has also added content warnings for themes of abuse, violence, and underage relationships to several AI chatbots.

Despite these concerns, some industry leaders argue that AI can enhance the sexual experience without replacing human interaction. Philipp Hamburger from Lovehoney emphasises AI’s role in ethically improving user experience. Additionally, Ruben Cruz from The Clueless Agency believes AI can help mitigate ethical issues by preventing the explicit sexualisation of real individuals in adult content. However, the broader impact on real-world relationships and the potential for harmful assumptions about consent remain critical issues that need addressing.

ByteDance to invest $2.13 billion in Malaysia AI hub

China’s ByteDance, the parent company of TikTok, plans to invest around $2.13 billion to establish an AI hub in Malaysia. The plan includes an additional $320 million to expand data centre facilities in Johor state, according to Malaysia’s Trade Minister Tengku Zafrul Aziz.

The development follows significant investments by other tech giants in Malaysia. Google recently announced a $2 billion investment to create its first data centre and Google Cloud region in the country, while Microsoft is set to invest $2.2 billion to enhance cloud and AI services.

The investment is expected to boost Malaysia’s digital economy, helping it reach its target of contributing 22.6% of GDP by 2025 and underscoring the country’s growing importance as a digital hub in Southeast Asia.

AI growth faces data shortage

The surge in AI, particularly with systems like ChatGPT, is facing a potential slowdown due to the impending depletion of publicly available text data, according to a study by Epoch AI. The shortage is projected to occur between 2026 and 2032, highlighting a critical challenge in maintaining the rapid advancement of AI.

AI’s growth has relied heavily on vast amounts of human-generated text data, but this finite resource is diminishing. Companies like OpenAI and Google are currently purchasing high-quality data sources, such as content from Reddit and news outlets, to sustain their AI training. However, the scarcity of fresh data might soon force them to consider using sensitive private data or less reliable synthetic data.

The Epoch AI study emphasises that scaling AI models, which requires immense computing power and large data sets, may become unfeasible as data sources dwindle. While new techniques have somewhat mitigated this issue, the fundamental need for high-quality human-generated data remains. Some experts suggest focusing on specialised AI models rather than larger ones to address this bottleneck.

In response to these challenges, AI developers are exploring alternative methods, including generating synthetic data. However, concerns about the quality and efficiency of such data persist, underlining the complexity of sustaining AI advancements in the face of limited natural resources.

RSF urges countries adopting CoE’s AI Framework to avoid self-regulation

Reporters Without Borders (RSF) has praised the Council of Europe’s (CoE) new Framework Convention on AI for its progress but criticised its reliance on private sector self-regulation. The Convention, which includes 46 European countries, aims to address the impact of AI on human rights, democracy, and the rule of law. While it acknowledges the threat of AI-fuelled disinformation, RSF argues that it fails to provide the necessary mechanisms to achieve its goals.

The CoE Convention mandates strict regulatory measures for AI use in the public sector but allows member states to choose self-regulation for the private sector. RSF believes this distinction is a critical flaw, as the private sector, particularly social media companies and other digital service providers, has historically prioritised business interests over the public good. According to RSF, this approach will not effectively combat the disinformation challenges posed by AI.

RSF urges countries that adopt the Convention to implement robust national legislation to strictly regulate AI development and use. That would ensure that AI technologies are deployed ethically and responsibly, protecting the integrity of information and democratic processes. Vincent Berthier, Head of RSF’s Tech Desk, emphasised the need for legal requirements over self-regulation to ensure AI serves the public interest and upholds the right to reliable information.

RSF’s recommendations provide a framework for AI regulation that addresses the shortcomings of both the Council of Europe’s Framework Convention and the European Union’s AI Act, advocating for stringent measures to safeguard the integrity of information and democracy.

Meta launches AI-driven ads on WhatsApp

Meta has launched its first AI-driven ad targeting program for businesses on WhatsApp, aiming to generate revenue from the popular chat service. CEO Mark Zuckerberg announced the new tools at a conference in Brazil, marking a significant shift for WhatsApp, which has traditionally avoided targeted advertising.

The new AI tools will use behaviour data from Facebook and Instagram to target messages more effectively to users who are likely to engage, provided they use the same phone number across accounts. The new feature is crucial for businesses as it allows for optimised ad delivery, making their marketing efforts more cost-effective.

Meta is also testing a new AI chatbot for business inquiries on WhatsApp. The chatbot will handle common requests such as finding catalogues or checking business hours, pushing towards automated customer service solutions. Additionally, Meta is integrating Brazil’s popular digital payment method, PIX, into WhatsApp’s payment tool, enhancing its functionality in the country.

These developments come as part of Meta’s broader strategy to monetise WhatsApp, which, despite its massive user base, has yet to contribute significantly to Meta’s overall revenue. The new initiatives are seen as steps to leverage WhatsApp’s extensive reach and user engagement for greater financial returns.

EU banks’ increasing reliance on US tech giants for AI raises concerns

According to European banking executives, the rise of AI is increasing banks’ reliance on major US tech firms, raising new risks for the financial industry. AI, already used in detecting fraud and money laundering, has gained significant attention following the launch of OpenAI’s ChatGPT in late 2022, with banks exploring more applications of generative AI.

At a fintech conference in Amsterdam, industry leaders expressed concerns about the heavy computational power needed for AI, which forces banks to depend on a few big tech providers. Bahadir Yilmaz, ING’s chief analytics officer, noted that this dependency on companies like Microsoft, Google, IBM, and Amazon poses one of the biggest risks, as it could lead to ‘vendor lock-in’ and limit banks’ flexibility. This dependency also has implications for retail investor protection.

Britain has proposed regulations to manage financial firms’ reliance on external tech companies, reflecting concerns that issues with a single cloud provider could disrupt services across multiple financial institutions. Deutsche Bank’s technology strategy head, Joanne Hannaford, highlighted that accessing the necessary computational power for AI is feasible only through Big Tech.

The European Union’s securities watchdog recently emphasised that banks and investment firms must protect customers when using AI and maintain boardroom responsibility.

US officials clash over AI disclosure in political ads

Top officials at the US Federal Election Commission (FEC) are divided over a proposal requiring political advertisements on broadcast radio and television to disclose if their content is generated by AI. FEC Vice Chair Ellen Weintraub backs the proposal, initiated by FCC Chairwoman Jessica Rosenworcel, which aims to enhance transparency in political ads, whereas FEC Chair Sean Cooksey opposes it.

The proposal, which does not ban AI-generated content, comes amid increasing concerns in Washington that such content could mislead voters in the upcoming 2024 elections. Rosenworcel emphasised the risk of ‘deepfakes’ and other altered media misleading the public and noted that the FCC has long-standing authority to mandate disclosures. Weintraub also highlighted the importance of transparency for public benefit and called for collaborative regulatory efforts between the FEC and FCC.

However, Cooksey warned that mandatory disclosures might conflict with existing laws and regulations, creating confusion in political campaigns. Republican FCC Commissioner Brendan Carr criticised the proposal, pointing out inconsistencies in regulation, as the FCC cannot oversee internet, social media, or streaming service ads. The debate gained traction following an incident in January where a fake AI-generated robocall impersonating US President Joe Biden aimed to influence New Hampshire’s Democratic primary, leading to charges against a Democratic consultant.

Tech titans announce AI-driven PC revolution at Computex 2024

This week, key players in the chip industry, including Nvidia, Intel, AMD, Qualcomm, and Arm, gathered in Taiwan for the annual Computex conference, announcing an ‘AI PC revolution.’ They showcased AI-enabled personal computers with specialised chips for running AI applications directly on the device, promising a significant leap in user interaction with PCs.

Intel CEO Pat Gelsinger called this the most exciting development since the arrival of WiFi 25 years ago, while Qualcomm’s Cristiano Amon likened it to the industry being reborn. Microsoft has driven this push by introducing AI PCs equipped with its Copilot assistant and choosing Qualcomm as its initial AI chip supplier. Intel and AMD are also gearing up to launch their AI processors soon.

Why does it matter?

The conference was strategically timed to precede Apple’s annual Worldwide Developers Conference, hinting at the competitive landscape in AI advancements. As the PC market shows signs of recovery, analysts predict a rise in AI PC adoption, potentially transforming how PCs are used. However, scepticism remains about whether consumer demand will justify the higher costs of these advanced devices, the Financial Times reports.