Google launches AI feature to reshape how search results appear

Google has introduced a new experimental feature named Web Guide, aimed at reorganising search results by using AI to group information based on the query’s different aspects.

Available through Search Labs, the tool helps users explore topics in a more structured way instead of relying on the standard, linear results page.

Powered by Google’s Gemini AI, Web Guide works particularly well for open-ended or complex queries. For example, searches such as ‘how to solo travel in Japan’ would return results neatly arranged into guides, safety advice, or personal experiences instead of a simple list.

The feature handles multi-sentence questions, offering relevant answers broken into themed sections.

Users who opt in can access Web Guide via the Web tab and toggle it off without exiting the entire experiment. While it works only on that tab, Google plans to expand it to the broader ‘All’ tab in time.

The move follows Google’s broader push to incorporate Gemini into tools like AI Mode, Flow, and other experimental products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LegalOn raises $50 million to expand AI legal tools

LegalOn Technologies has secured $50 million in Series E funding to expand its AI-powered contract review platform.

The Japanese startup, backed by SoftBank and Goldman Sachs, aims to streamline legal work by reducing the time spent reviewing and managing documents.

Its core product, Review, identifies contract risks and suggests edits using expert-built legal playbooks. The company says it improves accuracy while cutting review time by up to 85 percent across 7,000 client organisations in Japan, the US and the UK.

LegalOn plans to develop AI agents to handle tasks before and after the review process, including contract tracking and workflow integration. A new tool, Matter Management, enables teams to efficiently assign contract responsibilities, collaborate, and link documents.

While legal AI adoption grows, CEO Daniel Lewis insists the technology will support rather than replace lawyers. He believes professionals who embrace AI will gain the most leverage, as human oversight remains vital to legal judgement.

AI and quantum tech reshape global business

AI and quantum computing are reshaping global industries as investment surges and innovation accelerates across sectors like finance, healthcare and logistics. Microsoft and Amazon are driving a major shift in AI infrastructure, transforming cloud services into profitable platforms.

Quantum computing is moving beyond theory, with real-world applications emerging in pharmaceuticals and e-commerce. Google’s development of quantum-inspired algorithms for virtual shopping and faster analytics demonstrates the technology’s potential to revolutionise decision-making.

Sustainability is also gaining ground, with companies adopting AI-powered solutions for renewable energy and eco-friendly manufacturing. At the same time, digital banks are integrating AI to challenge legacy finance systems, offering personalised, accessible services.

Despite rapid progress, ethical concerns and regulatory challenges are mounting. Data privacy, AI bias, and antitrust issues highlight the need for responsible innovation, with industry leaders urged to balance risk and growth for long-term societal benefit.

AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but job growth spreads to regions like the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’

Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centered AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists, driven by issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Inviting a broader demographic into AI governance, Roberts urged more people—especially those outside tech hubs like Silicon Valley—to shape its future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how they are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Big companies grapple with AI’s legal, security, and reputational threats

A recent Quartz investigation reveals that concerns over AI are increasingly overshadowing corporate enthusiasm, especially among Fortune 500 companies.

More than 69% now reference generative AI in their annual reports as a risk factor, while only about 30% highlight its benefits, a dramatic shift toward caution in corporate discourse.

These risks range from cybersecurity threats, such as AI-generated phishing, model poisoning, and adversarial attacks, to operational and reputational dangers stemming from opaque AI decision-making, including hallucinations and biased outputs.

Privacy exposure, legal liability, task misalignment, and the overpromising of AI capabilities (so-called ‘AI washing’) compound corporate exposure, particularly for boards and senior leadership facing directors’ and officers’ liability risks.

Other structural risks include vendor lock-in, disproportionate market influence by dominant AI providers, and supply chain dependencies that constrain flexibility and resilience.

Notably, even cybersecurity experts warn of emerging threats from AI agents, autonomous systems capable of executing actions that complicate legal accountability and oversight.

Companies are advised to adopt comprehensive AI risk-management strategies to navigate this evolving landscape.

Essential elements include establishing formal governance frameworks, conducting bias and privacy audits, documenting risk assessments, ensuring human-in-the-loop oversight, revising vendor contracts, and embedding AI ethics into policy and training, particularly at the board level.

Democratising inheritance: AI tool handles estate administration

Lauren Kolodny, an early backer of Chime who earned a spot on the Forbes Midas list, is leading a $20 million Series A funding round into Alix, a San Francisco-based startup using AI to revolutionise estate settlement. Founder Alexandra Mysoor conceived the idea after spending nearly 1,000 hours over 18 months managing a friend’s family estate, highlighting a widespread, emotionally taxing administrative gap.

Using AI agents, Alix automates tedious elements of the estate process, including scanning documents, extracting data, pre-populating legal forms, and liaising with financial institutions. This contrasts sharply with the traditional, costly probate system. The startup’s pricing model charges around 1% of estate value, translating to approximately $9,000–$12,000 for smaller estates.
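The reported pricing can be checked with simple arithmetic: a fee of roughly 1% of estate value means the quoted $9,000–$12,000 range corresponds to estates of about $900,000–$1.2 million. A minimal sketch (purely illustrative, not Alix's actual pricing code; the function name and default rate are assumptions from the figures above):

```python
def estimated_fee(estate_value: float, rate_percent: float = 1.0) -> float:
    """Estimate a settlement fee as a flat percentage of estate value."""
    return estate_value * rate_percent / 100

# A $1,000,000 estate at the reported ~1% rate would pay about $10,000.
print(estimated_fee(1_000_000))  # 10000.0
```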

Kolodny sees Alix as part of a new wave of startups harnessing AI to democratise services once accessible only to high-net-worth individuals. As trillions of dollars transfer to millennials and Gen Z in the coming decades, Alix aims to simplify one of the most complex and emotionally fraught administrative tasks.

Teachers and students warn: AI is eroding engagement

A student from San Jose and an English teacher in Chicago co-authored a Boston Globe opinion piece warning that widespread use of AI in schools damages the vital student-teacher bond.

While marketed as efficiency boosters, AI tools encourage students to forgo independent thinking.

Many students simply generate entire assignments with AI and reformat the text to avoid detection, undermining honest academic interaction.

Educators report feeling increasingly marginalised as AI handles much of their workload, including grading, lesson planning, and feedback within classrooms.

Though schools and tech companies promote these tools as educational enhancements, their use has eroded trust in many schools, as teachers struggle to assess real student ability.

The authors call for a return to supervised in-class assignments, using pen and paper, strict scrutiny of AI vendors in education, and outright bans on unsupervised AI classroom tools to help reset the learning relationship.

DeepSeek and others gain traction in US and EU

A recent survey has found that most users in the US and the EU are open to using Chinese large language models, even amid ongoing political and cybersecurity scrutiny.

According to the report, 71 percent of respondents in the US and 87 percent in the EU would consider adopting models developed in China.

The findings highlight increasing international curiosity about the capabilities of Chinese AI firms such as DeepSeek, which have recently attracted global attention.

While the technology is gaining credibility, many Western users remain cautious about data privacy and infrastructure control.

More than half of those surveyed said they would only use Chinese AI models if hosted outside China. This suggests that while trust in the models’ performance is growing, concerns over data governance remain a significant barrier to adoption.

The results come amid heightened global competition in the AI race, with Chinese developers rapidly advancing to challenge US-based leaders. DeepSeek and similar firms now face the challenge of balancing global outreach with geopolitical limitations.
