Smartphone AI estimates avocado ripeness with high accuracy

Researchers at Oregon State University and Florida State University have unveiled a smartphone-based AI system that accurately predicts the ripeness and internal quality of avocados.

They trained models using more than 1,400 iPhone images of Hass avocados, achieving around 92% accuracy for firmness (a proxy for ripeness) and over 84% accuracy in distinguishing fresh from rotten fruit.

Avocado waste is a major issue: the fruit spoils quickly, and many avocados are discarded before reaching consumers. The AI tool is intended to guide both shoppers and businesses on when fruit is best consumed or sold.

Beyond consumer use, the system could be deployed in processing and retail facilities to sort avocados more precisely. For example, riper batches could be sent to nearby stores rather than on longer transit routes.

The researchers used deep learning (rather than older, manual feature extraction) to capture shape, texture and spatial cues better. As the model dataset grows, its performance is expected to improve further.
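The paper's exact architecture is not described here, but the approach — a convolutional network learning shape, texture and spatial cues directly from photos — can be illustrated with a minimal sketch. The `RipenessNet` class and its three firmness classes below are invented for illustration only, not the authors' model:

```python
# Hypothetical sketch of an image-based ripeness classifier,
# not the Oregon State / Florida State researchers' actual model.
import torch
import torch.nn as nn

class RipenessNet(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. underripe / ripe / overripe
        super().__init__()
        # Convolutional layers learn texture and shape features automatically,
        # replacing the manual feature extraction of older pipelines.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = RipenessNet()
batch = torch.rand(4, 3, 224, 224)    # four dummy RGB smartphone photos
logits = model(batch)
probs = torch.softmax(logits, dim=1)  # per-class ripeness probabilities
```

In practice such a model would be trained on labelled photos (here, the 1,400 iPhone images paired with measured firmness), and performance typically improves as that dataset grows.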

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cities take on tech giants in a new diplomatic arena

In a world once defined by borders and treaties, a new kind of diplomacy is emerging, one where cities, not nations, take the lead. Instead of traditional embassies, this new diplomacy unfolds in startup hubs and conference halls, where ‘tech ambassadors’ represent cities in negotiations with powerful technology companies.

These modern envoys focus not on trade tariffs but on data sharing, digital infrastructure, and the balance between innovation and public interest. The growing influence of global tech firms has shifted the map of power.

Apple’s 2024 revenue alone exceeded the GDP of several mid-sized nations, and algorithms designed in Silicon Valley now shape urban life worldwide. Recognising this shift, cities such as Amsterdam, Barcelona, and London have appointed tech ambassadors to engage directly with the digital giants.

Their role combines diplomacy, investment strategy, and public policy, ensuring that cities have a voice in how technologies, from ride-sharing platforms to AI systems, affect their citizens. But the rise of this new urban diplomacy comes with risks.

Tech firms wield enormous influence, spending tens of millions on lobbying while many municipalities struggle with limited resources. Cities eager for investment may compromise on key issues like data governance or workers’ rights.

There’s also a danger of ‘technological solutionism’, the belief that every problem can be solved by an app, overshadowing more democratic or social solutions.

Ultimately, the mission of the tech ambassador is to safeguard the public interest in a digital age where power often lies in code rather than constitutions. As cities negotiate with the world’s most powerful corporations, they must balance innovation with accountability, ensuring that the digital future serves citizens, not just shareholders.

Japan pushes domestic AI to boost national security

Japan will prioritise home-grown AI technology in its new national strategy, aiming to strengthen national security and reduce dependence on foreign systems. The government says developing domestic expertise is essential to prevent overreliance on US and Chinese AI models.

Officials revealed that the plan will include better pay and conditions to attract AI professionals and foster collaboration among universities, research institutes and businesses. Japan will also accelerate work on a next-generation supercomputer to succeed the current Fugaku model.

Prime Minister Shigeru Ishiba has said Japan must catch up with global leaders such as the US and reverse its slow progress in AI development. Relatively few people in Japan reported using generative AI last year, compared with nearly 70 percent in the United States and over 80 percent in China.

The government’s strategy will also address the risks linked to AI, including misinformation, disinformation and cyberattacks. Officials say the goal is to make Japan the world’s most supportive environment for AI innovation while safeguarding security and privacy.

AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies with young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.

Apple sued for allegedly using pirated books to train its AI model

Apple is facing a lawsuit from neuroscientists Susana Martinez-Conde and Stephen Macknik, who allege that Apple used pirated books from ‘shadow libraries’ to train its new AI system, Apple Intelligence.

Filed on 9 October in the US District Court for the Northern District of California, the suit claims Apple accessed thousands of copyrighted works without permission, including the plaintiffs’ own books.

The researchers argue that Apple's market value surged by more than $200 billion following the launch of Apple Intelligence, a gain they say was built partly on the alleged copyright violations.

This case adds to a growing list of legal actions targeting tech firms accused of using unlicensed content to train AI. Apple previously faced similar lawsuits from authors in September.

While Meta and Anthropic have also faced scrutiny, courts have so far ruled in their favour under the ‘fair use’ doctrine. The case highlights ongoing tensions between copyright law and the data demands of AI development.

Imperial College unveils plans for new AI campus in west London

Imperial College London has launched a public consultation on plans for a new twelve-storey academic building in White City dedicated to AI and data science.

The proposed development will bring together computer scientists, mathematicians, and business specialists to advance AI research and innovation.

The building will include laboratories, research facilities, and public areas such as cafés and exhibition spaces. It forms part of Imperial's wider White City masterplan, which also includes housing, a hotel, and additional research infrastructure.

The university aims to create what it describes as a hub for collaboration between academia and industry.

Outline planning permission for the site was granted by Hammersmith and Fulham Council in 2019. The consultation is open until 26 October, after which a formal planning application is expected later this year. If approved, construction could begin in mid-2026, with completion scheduled for 2029.

Imperial College, established in 1907 and known for its focus on science, engineering, medicine, and business, sees the new campus as a step towards strengthening the position of the UK in AI research and technology development.

Italy bans deepfake app that undresses people

Italy’s data protection authority has ordered an immediate suspension of the app Clothoff, which uses AI to generate fake nude images of real people. The company behind it, based in the British Virgin Islands, is now barred from processing personal data of Italian users.

The watchdog found that Clothoff enables anyone, including minors, to upload photos and create sexually explicit or pornographic deepfakes. The app fails to verify consent from those depicted and offers no warning that the images are artificially generated.

The regulator described the measure as urgent, citing serious risks to human dignity, privacy, and data protection, particularly for children and teenagers. It has also launched a wider investigation into similar so-called ‘nudifying’ apps that exploit AI technology.

Italian media have reported a surge in cases where manipulated images are used for harassment and online abuse, prompting growing social alarm. Authorities say they intend to take further steps to protect individuals from deepfake exploitation and strengthen safeguards around AI image tools.

Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.
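Meta's actual watermarking scheme is not public; production systems embed robust signals designed to survive compression and editing. As a toy illustration of the general idea of an invisible watermark, here is a minimal least-significant-bit sketch in NumPy (the `MARK` payload and both helper functions are invented for this example):

```python
# Toy illustration of invisible watermarking, NOT Meta's actual scheme.
import numpy as np

MARK = "AI"  # hypothetical payload; real schemes embed robust, signed metadata

def embed(img: np.ndarray, payload: str) -> np.ndarray:
    """Hide payload bits in the least significant bit of the red channel."""
    bits = np.array([int(b) for ch in payload.encode() for b in f"{ch:08b}"],
                    dtype=np.uint8)
    out = img.copy()
    red = out[..., 0]  # view into the copy's red channel
    pos = np.unravel_index(np.arange(bits.size), red.shape)
    red[pos] = (red[pos] & 0xFE) | bits  # overwrite only the lowest bit
    return out

def extract(img: np.ndarray, n_chars: int) -> str:
    """Read the hidden bits back out and decode them to text."""
    red = img[..., 0]
    pos = np.unravel_index(np.arange(n_chars * 8), red.shape)
    bits = red[pos] & 1
    chars = [int("".join(map(str, bits[i:i + 8])), 2)
             for i in range(0, bits.size, 8)]
    return bytes(chars).decode()

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(image, MARK)  # pixel values change by at most 1, invisibly
```

Because each pixel value changes by at most 1, the mark is imperceptible to viewers, yet a detector that knows where to look can recover it — the same basic trade-off real watermarking systems navigate at far greater robustness.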

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

Tariffs and AI top the agenda for US CEOs over the next three years

US CEOs prioritise cost reduction and AI integration amid global economic uncertainty. According to KPMG’s 2025 CEO Outlook, leaders are reshaping supply chains while preparing for rapid AI transformation over the next three years.

Tariffs are a key factor influencing business strategies, with 89% of US CEOs expecting significant operational impacts. Many are adjusting sourcing models, while 86% say they will increase prices where needed. Supply chain resilience remains the top short-term pressure for decision-making.

AI agents are seen as major game-changers. Eighty-four percent of CEOs expect an AI-native company to become a leading industry player within three years, displacing incumbents. Most expect their AI investments to pay off within one to three years.

Cybersecurity is a significant concern alongside AI integration. Forty-six percent have increased spending on digital risk resilience, focusing on fraud prevention and data privacy. CEOs recognise that AI and quantum computing introduce both opportunities and new vulnerabilities.

Workforce transformation is a clear priority. Eighty-six percent plan to embed AI agents into teams next year, while 73% focus on retaining and retraining high-potential talent. Upskilling, governance, and organisational redesign are emerging as essential strategies.

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.
