The US push for AI dominance through openness

In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a sweeping AI Action Plan with 103 recommendations. At its core lies an intriguing paradox: the push for open-source AI, typically associated with collaboration and transparency, is now being positioned as a strategic weapon.

As Jovan Kurbalija points out, this plan marks a turning point where open-weight models are framed not just as tools of innovation, but as instruments of geopolitical influence, with the US aiming to seed the global AI ecosystem with American-built systems rooted in ‘national values.’

The plan champions Silicon Valley by curbing regulations, limiting federal scrutiny, and shielding tech giants from legal liability—potentially reinforcing monopolies. It also underlines a national security-first mentality, urging aggressive safeguards against foreign misuse of AI, cyber threats, and misinformation. Notably, it proposes DARPA-led initiatives to unravel the inner workings of large language models, acknowledging that even their creators often can’t fully explain how these systems function.

Internationally, the plan takes a competitive, rather than cooperative, stance. Allies are expected to align with US export controls and values, while multilateral forums like the UN and OECD are dismissed as bureaucratic and misaligned. That bifurcation risks alienating global partners—particularly the EU, which favours heavy AI regulation—while increasing pressure on countries like India and Japan to choose sides in the US–China tech rivalry.

Despite its combative framing, the strategy also nods to inclusion and workforce development, calling for tax-free employer-sponsored AI training, investment in apprenticeships, and the expansion of military academic hubs. Still, as Kurbalija warns, the promise of AI openness may clash with the plan’s underlying nationalistic thrust—raising questions about whether it truly aims to democratise AI, or merely dominate it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings Gemini AI shortcut to Android home screens

Google has launched a new AI Mode shortcut in Android Search, offering direct home-screen access to its Gemini-powered tools. The upgrade brings conversational AI to everyday mobile searches, enabling users to ask complex questions and receive context-rich responses without leaving the home screen.

AI Mode, introduced in Google Labs and now available on a wide range of Android devices, marks a leap in integrating AI across Android’s ecosystem. The feature’s rise from a limited beta to mass adoption follows enhancements powered by Gemini 2.5 Pro and Deep Search, with AI Mode now reaching 100 million monthly users.

Key functions include multimodal inputs, advanced planning tools, and even the ability for AI to call businesses to verify local information. These capabilities are already live for paid subscribers, while core features remain free, drawing comparisons with rivals such as ChatGPT and Bing AI.

Privacy concerns have surfaced as real-time interactions expand, but Google claims strong data protection controls are in place. As AI-powered results blend into traditional search, SEO strategies and user trust will be tested, signalling a new era in mobile discovery and digital engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women-only dating safety app Tea suffers catastrophic data leak

Tea, a women-only dating safety app, has suffered a massive data breach after its backend was found completely unsecured. Over 72,000 private images and more than 13,000 government-issued IDs were leaked online.

Some documents were dated as recently as 2025, contradicting the company’s claim that only ‘old data’ was affected. The data, totalling 59.3 GB, included verification selfies, DMs, and public posts. It spread rapidly through 4chan and decentralised networks such as BitTorrent.

Critics have blamed Tea’s use of ‘vibe coding’, AI-generated code with no proper review, which reportedly left its Firebase database open with no authentication.
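The misconfiguration described is a well-known failure mode: Firebase’s Realtime Database answers unauthenticated REST requests whenever its security rules permit public reads. The Python sketch below, which uses a hypothetical database URL rather than anything tied to Tea, shows how such an exposure is typically detected; it assumes the requests library and a Realtime Database backend, which may differ from whatever Tea actually ran.

```python
# Illustrative check for a world-readable Firebase Realtime Database.
# The URL below is a hypothetical placeholder, not Tea's actual backend.
import requests

DB_URL = "https://example-project-default-rtdb.firebaseio.com"  # hypothetical


def is_publicly_readable(db_url: str) -> bool:
    """Return True if the database root can be read with no credentials.

    Firebase's REST API serves any path with a trailing `.json`;
    anything other than a 401/403 response means the security rules
    allow unauthenticated reads.
    """
    resp = requests.get(f"{db_url}/.json", params={"shallow": "true"}, timeout=10)
    return resp.status_code not in (401, 403)


if __name__ == "__main__":
    exposed = is_publicly_readable(DB_URL)
    print("Publicly readable!" if exposed else "Reads require authentication.")
```

Closing that kind of hole is a matter of tightening the database’s security rules (for example, requiring an authenticated user for reads) rather than changing application code, which is precisely the step an unreviewed, AI-generated setup can skip.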

Experts warn that relying on AI tools to build apps without security checks is becoming increasingly risky. Research shows nearly half of AI-generated code contains vulnerabilities, yet many startups still use it for core features. Tea users are now urged to monitor their identity and financial data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI startup enables context across thousands of hours of video

Samsung Next has invested in Memories.ai, a startup specialising in long-duration video analysis capable of processing up to 10 million hours of footage.

The tool uses AI to transform massive video archives into searchable, structured datasets, even across multiple videos spanning hours or days.

The solution employs a layered pipeline: it filters noise, compresses critical segments, indexes content for natural-language queries, segments footage into meaningful units, and aggregates those insights into digestible reports. This structure enables users to search and analyse complex visual datasets seamlessly.
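Memories.ai has not published its implementation, so the following is only a conceptual sketch, written in Python, of how such a layered pipeline might fit together. Every function name and the toy caption-based data model are assumptions made for illustration; the example starts from footage that has already been segmented into captioned units and then walks through the filtering, compression, indexing, querying, and reporting stages described above.

```python
# Conceptual sketch of a layered video-analysis pipeline, not Memories.ai's code.
# It operates on toy "segments" (dicts with a timestamp and a caption) so the
# stages stay readable; a real system would work on frames and embeddings.
from collections import defaultdict


def filter_noise(segments):
    """Stage 1: drop segments with no usable content."""
    return [s for s in segments if s["caption"].strip()]


def compress(segments):
    """Stage 2: keep at most one segment per minute as a crude compression step."""
    kept, seen_minutes = [], set()
    for s in segments:
        minute = s["t"] // 60
        if minute not in seen_minutes:
            seen_minutes.add(minute)
            kept.append(s)
    return kept


def build_index(segments):
    """Stage 3: map each caption word to the segments that mention it."""
    index = defaultdict(list)
    for s in segments:
        for word in s["caption"].lower().split():
            index[word].append(s)
    return index


def query(index, text):
    """Stage 4: answer a simple query by word overlap with the index."""
    hits = []
    for word in text.lower().split():
        hits.extend(index.get(word, []))
    # Deduplicate by timestamp and return matches in chronological order.
    return sorted({s["t"]: s for s in hits}.values(), key=lambda s: s["t"])


def report(segments):
    """Stage 5: aggregate the matching segments into a digestible summary."""
    return "\n".join(f"{s['t']:>6}s  {s['caption']}" for s in segments)


if __name__ == "__main__":
    footage = [  # pretend output of an upstream segmentation/captioning stage
        {"t": 12, "caption": "delivery van stops at gate"},
        {"t": 15, "caption": ""},                        # noise: empty caption
        {"t": 75, "caption": "person leaves package"},
        {"t": 80, "caption": "person leaves package"},   # near-duplicate
        {"t": 400, "caption": "dog walks past gate"},
    ]
    idx = build_index(compress(filter_noise(footage)))
    print(report(query(idx, "package left at the gate")))
```

In a production system each stage would operate on embeddings and model outputs rather than keyword matching, but the data flow between the layers is the point of the sketch.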

Memories.ai’s co-founders, Dr Shawn Shen and Enmin (Ben) Zhou, bring backgrounds in Meta’s Reality Labs research and machine learning engineering.

The company raised $8 million in seed funding, surpassing its $4 million goal, in a round led by Susa Ventures with participation from Samsung Next, Fusion Fund, Crane Ventures, Seedcamp, and Creator Ventures.

Samsung is banking on Memories.ai’s edge computing strengths, particularly to enable privacy-conscious applications such as home security analytics without cloud dependency. The startup is targeting security firms and marketers that need scalable tools to sift through extensive video content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parents grapple with teaching kids responsible AI use

Experts say many families face a dilemma between protecting children from AI and preventing them from falling behind in an increasingly AI-driven world.

In interviews, parents expressed unease about deepfakes, blurred lines between reality and AI-generated content, and potential threats they feel unprepared to teach their children to identify.

Still, some parents are introducing AI tools to their children under supervision, viewing guided exposure as safer and more beneficial than strict prohibition. These parents emphasise helping kids learn AI responsibly rather than barring them from using it.

Experts warn that many parents delay engaging with AI out of fear or lack of knowledge, cutting themselves off from opportunities to guide their children.

Instead, they recommend an informed, gradual introduction, including open discussions about AI risks and benefits. Careful mediation, honesty, and education may help children develop healthy tech habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI feature to reshape how search results appear

Google has introduced a new experimental feature named Web Guide, aimed at reorganising search results by using AI to group information based on the query’s different aspects.

Available through Search Labs, the tool helps users explore topics in a more structured way instead of relying on the standard, linear results page.

Powered by Google’s Gemini AI, Web Guide works particularly well for open-ended or complex queries. For example, searches such as ‘how to solo travel in Japan’ would return results neatly arranged into guides, safety advice, or personal experiences instead of a simple list.

The feature handles multi-sentence questions, offering relevant answers broken into themed sections.

Users who opt in can access Web Guide via the Web tab and toggle it off without exiting the entire experiment. While it works only on that tab, Google plans to expand it to the broader ‘All’ tab in time.

The move follows Google’s broader push to incorporate Gemini into tools like AI Mode, Flow, and other experimental products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LegalOn raises $50 million to expand AI legal tools

LegalOn Technologies has secured $50 million in Series E funding to expand its AI-powered contract review platform.

The Japanese startup, backed by SoftBank and Goldman Sachs, aims to streamline legal work by reducing the time spent reviewing and managing documents.

Its core product, Review, identifies contract risks and suggests edits using expert-built legal playbooks. The company says it improves accuracy while cutting review time by up to 85 percent across 7,000 client organisations in Japan, the US and the UK.

LegalOn plans to develop AI agents to handle tasks before and after the review process, including contract tracking and workflow integration. A new tool, Matter Management, enables teams to efficiently assign contract responsibilities, collaborate, and link documents.

While legal AI adoption grows, CEO Daniel Lewis insists the technology will support rather than replace lawyers. He believes professionals who embrace AI will gain the most leverage, as human oversight remains vital to legal judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and quantum tech reshape global business

AI and quantum computing are reshaping global industries as investment surges and innovation accelerates across sectors like finance, healthcare and logistics. Microsoft and Amazon are driving a major shift in AI infrastructure, transforming cloud services into profitable platforms.

Quantum computing is moving beyond theory, with real-world applications emerging in pharmaceuticals and e-commerce. Google’s development of quantum-inspired algorithms for virtual shopping and faster analytics demonstrates the technology’s potential to revolutionise decision-making.

Sustainability is also gaining ground, with companies adopting AI-powered solutions for renewable energy and eco-friendly manufacturing. At the same time, digital banks are integrating AI to challenge legacy finance systems, offering personalised, accessible services.

Despite rapid progress, ethical concerns and regulatory challenges are mounting. Data privacy, AI bias, and antitrust issues highlight the need for responsible innovation, with industry leaders urged to balance risk and growth for long-term societal benefit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but job growth is spreading to regions like the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now falling outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!