Google rolls out AI features to surface fresh web content in Search & Discover

Google is launching two new AI-powered features in its Search and Discover tools to help people connect with more recent content on the web. The first feature upgrades Discover. It shows brief previews of trending stories and topics you care about, which you can expand to view more.

Each preview includes links so you can explore the full content on the web. The aim is to make it easier to catch up on stories from both familiar and new publishers. The feature is now live in the US, South Korea and India.

The second is a sports-oriented update in Search: when looking up players or teams on your phone, you’ll soon see a ‘What’s new’ button. That will surface a feed of the latest updates and articles so you can follow recent action more directly. The feature will roll out in the US in the coming weeks.

These features are part of Google’s effort to use AI to help people stay better informed about topics they care about, from trending news to sports. At the same time, Google emphasises that web links remain a core part of the experience, helping users explore sources and dive deeper.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California introduces first AI chatbot safety law

California has become the first US state to regulate AI companion chatbots after Governor Gavin Newsom signed landmark legislation designed to protect children and vulnerable users. The new law, SB 243, holds companies legally accountable if their chatbots fail to meet new safety and transparency standards.

The US legislation follows several tragic cases, including the death of a teenager who reportedly engaged in suicidal conversations with an AI chatbot. It also comes after leaked documents revealed that some AI systems allowed inappropriate exchanges with minors.

Under the new rules, firms must introduce age verification, self-harm prevention protocols, and warnings for users engaging with companion chatbots. Platforms must clearly state that conversations are AI-generated and are barred from presenting chatbots as healthcare professionals.

Major developers including OpenAI, Replika, and Character.AI say they are introducing stronger parental controls, content filters, and crisis support systems to comply. Lawmakers hope the move will inspire other states to adopt similar protections as AI companionship tools become increasingly popular.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce invests $15 billion in San Francisco’s AI future

The US cloud-based software company, Salesforce, has announced a $15 billion investment in San Francisco over the next five years, strengthening the city’s position as the world’s AI capital.

The funding will support a new AI Incubator Hub on the company’s campus, workforce training programmes, and initiatives to help businesses transform into ‘Agentic Enterprises’.

The move coincides with the company’s annual Dreamforce conference, which is expected to generate $130 million in local revenue and create 35,000 jobs.

Chief Executive Marc Benioff said the investment demonstrates Salesforce’s deep commitment to San Francisco, aiming to boost AI innovation and job creation.

Dreamforce, now in its 23rd year, is the world’s largest AI event, attracting nearly 50,000 participants and millions more online. Benioff described the company’s goal as leading a new technological era where humans and AI collaborate to drive progress and productivity.

Founded in 1999 as an online CRM service, Salesforce has evolved into a global leader in enterprise AI and cloud computing. It is now San Francisco’s largest private employer and continues to expand through acquisitions of local AI firms such as Bluebirds, Waii, and Regrello.

The company’s new AI Incubator Hub will support early-stage startups, while its Trailhead learning platform has already trained more than five million people for the AI-driven workplace.

Salesforce remains one of the city’s most active corporate philanthropists. Its 1-1-1 model has inspired thousands of companies worldwide to dedicate a share of equity, product, and employee time to social causes.

With an additional $39 million pledged to education and healthcare, Salesforce and the Benioffs have now donated over $1 billion to the Bay Area.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Broadcom unite to deploy 10 gigawatts of AI accelerators

The US firm, OpenAI, has announced a multi-year collaboration with Broadcom to design and deploy 10 gigawatts of custom AI accelerators.

The partnership will combine OpenAI’s chip design expertise with Broadcom’s networking and Ethernet technologies to create large-scale AI infrastructure. The deployment is expected to begin in the second half of 2026 and be completed by the end of 2029.

The collaboration enables OpenAI to integrate insights gained from its frontier models directly into the hardware, enhancing efficiency and performance.

Broadcom will develop racks of AI accelerators and networking systems across OpenAI’s data centres and those of its partners. The initiative is expected to meet growing global demand for advanced AI computation.

Executives from both companies described the partnership as a significant step toward the next generation of AI infrastructure. OpenAI CEO Sam Altman said it would help deliver the computing capacity needed to realise the benefits of AI for people and businesses worldwide.

Broadcom CEO Hock Tan called the collaboration a milestone in the industry’s pursuit of more capable and scalable AI systems.

The agreement strengthens Broadcom’s position in AI networking and underlines OpenAI’s move toward greater control of its technological ecosystem. By developing its own accelerators, OpenAI aims to boost innovation while advancing its mission to ensure artificial general intelligence benefits humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia DGX Spark launches as the world’s smallest AI supercomputer

Nvidia has launched the DGX Spark, described as the world’s smallest AI supercomputer.

Designed for developers and smaller enterprises, the Spark offers data centre-level performance without the need for costly AI server infrastructure or cloud rentals. It features Nvidia’s GB10 Grace Blackwell superchip, ConnectX-7 networking, and the company’s complete AI software stack.

The system, co-developed with ASUS and Dell, can support up to 128GB of memory, enabling users to train and run substantial AI models locally.

Nvidia CEO Jensen Huang compared Spark’s mission to that of the 2016 DGX-1, which he hand-delivered to OpenAI, then co-chaired by Elon Musk, in a moment he described as the start of the AI revolution. The new Spark, he said, aims to place supercomputing power directly in the hands of every developer.

Running on Nvidia’s Linux-based DGX OS, the Spark is built for AI model creation rather than general computing or gaming. Two units can be connected to handle models with up to 405 billion parameters.

The device complements Nvidia’s DGX Station, powered by the more advanced GB300 Grace Blackwell Ultra chip.

Nvidia continues to dominate the AI chip industry through its powerful hardware and CUDA platform, securing multi-billion-dollar deals with companies such as OpenAI, Google, Meta, Microsoft, and Amazon. The DGX Spark reinforces its position by expanding access to AI computing at the desktop level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smartphone AI estimates avocado ripeness with high accuracy

Researchers at Oregon State University and Florida State University have unveiled a smartphone-based AI system that accurately predicts the ripeness and internal quality of avocados.

They trained models using more than 1,400 iPhone images of Hass avocados, achieving around 92% accuracy for firmness (a proxy for ripeness) and over 84% accuracy in distinguishing fresh from rotten fruit.

Avocado waste is a major issue because the fruit spoils quickly, and many avocados are discarded before reaching consumers. The AI tool is intended to guide both shoppers and businesses on when fruit is best consumed or sold.

Beyond consumer use, the system could be deployed in processing and retail facilities to sort avocados more precisely. For example, riper batches might be sent to nearby stores instead of on longer transit routes.

The researchers used deep learning, rather than older, manual feature extraction, to better capture shape, texture and spatial cues. As the model’s dataset grows, its performance is expected to improve further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cities take on tech giants in a new diplomatic arena

In a world once defined by borders and treaties, a new kind of diplomacy is emerging, one where cities, not nations, take the lead. Instead of traditional embassies, this new diplomacy unfolds in startup hubs and conference halls, where ‘tech ambassadors’ represent cities in negotiations with powerful technology companies.

These modern envoys focus not on trade tariffs but on data sharing, digital infrastructure, and the balance between innovation and public interest. The growing influence of global tech firms has shifted the map of power.

Apple’s 2024 revenue alone exceeded the GDP of several mid-sized nations, and algorithms designed in Silicon Valley now shape urban life worldwide. Recognising this shift, cities such as Amsterdam, Barcelona, and London have appointed tech ambassadors to engage directly with the digital giants.

Their role combines diplomacy, investment strategy, and public policy, ensuring that cities have a voice in how technologies, from ride-sharing platforms to AI systems, affect their citizens. But the rise of this new urban diplomacy comes with risks.

Tech firms wield enormous influence, spending tens of millions on lobbying while many municipalities struggle with limited resources. Cities eager for investment may compromise on key issues like data governance or workers’ rights.

There’s also a danger of ‘technological solutionism’, the belief that every problem can be solved by an app, overshadowing more democratic or social solutions.

Ultimately, the mission of the tech ambassador is to safeguard the public interest in a digital age where power often lies in code rather than constitutions. As cities negotiate with the world’s most powerful corporations, they must balance innovation with accountability, ensuring that the digital future serves citizens, not just shareholders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan pushes domestic AI to boost national security

Japan will prioritise home-grown AI technology in its new national strategy, aiming to strengthen national security and reduce dependence on foreign systems. The government says developing domestic expertise is essential to prevent overreliance on US and Chinese AI models.

Officials revealed that the plan will include better pay and conditions to attract AI professionals and foster collaboration among universities, research institutes and businesses. Japan will also accelerate work on a next-generation supercomputer to succeed the current Fugaku model.

Prime Minister Shigeru Ishiba has said Japan must catch up with global leaders such as the US and reverse its slow progress in AI development. Relatively few people in Japan reported using generative AI last year, compared with nearly 70 percent in the United States and over 80 percent in China.

The government’s strategy will also address the risks linked to AI, including misinformation, disinformation and cyberattacks. Officials say the goal is to make Japan the world’s most supportive environment for AI innovation while safeguarding security and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies in young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple sued for allegedly using pirated books to train its AI model

Apple is facing a lawsuit from neuroscientists Susana Martinez-Conde and Stephen Macknik, who allege that Apple used pirated books from ‘shadow libraries’ to train its new AI system, Apple Intelligence.

Filed on 9 October in the US District Court for the Northern District of California, the suit claims Apple accessed thousands of copyrighted works without permission, including the plaintiffs’ own books.

The researchers argue Apple’s market value surged by over $200 billion following the AI’s launch, benefiting from the alleged copyright violations.

This case adds to a growing list of legal actions targeting tech firms accused of using unlicensed content to train AI. Apple previously faced similar lawsuits from authors in September.

While Meta and Anthropic have also faced scrutiny, courts have so far ruled in their favour under the ‘fair use’ doctrine. The case highlights ongoing tensions between copyright law and the data demands of AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!