AI market surge raises alarm over financial stability

AI has become one of the dominant forces in global markets, with AI-linked firms now making up around 44% of the S&P 500’s market capitalisation. Their soaring share prices have pushed US equity valuations toward levels last seen during the dot-com bubble.

While optimism remains high, the future is uncertain. AI’s infrastructure demands are immense, with estimates suggesting that trillions of dollars will be needed to build and power new data centres by 2030.

Much of this investment is expected to be financed through debt, increasing exposure to potential market shocks. Analysts warn that any slowdown in AI progress or monetisation could trigger sharp corrections in AI-related asset prices.

The Bank of England has noted that financial stability risks could rise if AI infrastructure expansion continues at its current pace. Banks and private credit funds may face growing exposure to highly leveraged sectors, while power and commodity markets could also come under strain from surging AI energy needs.

Although AI remains a powerful growth driver for the US economy, its rapid expansion is creating new systemic vulnerabilities. Policymakers and financial institutions are urged to monitor the sector closely as the next phase of AI-driven growth unfolds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI brings conversational edits to Instagram Stories

Instagram is rolling out generative AI editing for Stories, expanding June’s tools with smarter prompts and broader effects. Type what you want removed or changed, and Meta AI does it. Think conversational edits, similar to Google Photos.

New controls include an Add Yours sticker for sharing your custom look with friends. A Presets browser shows available styles at a glance. Seasonal effects launch for Halloween, Diwali, and more.

Restyle Video brings preset effects to short clips, with options to add flair or remove objects. Edits aim to be fast, fun, and reversible. Creativity first, heavy lifting handled by AI.

Text gets a glow-up: Instagram is testing AI restyle for captions. Pick built-ins like ‘chrome’ or ‘balloon,’ or prompt Meta AI for custom styles.

Meta AI hasn’t wowed Instagram users, but this could change sentiment. The pitch: fewer taps, better results, and shareable looks. If it sticks, creating Stories becomes meaningfully easier.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sky acquisition by OpenAI signals ChatGPT’s push into native workflows

OpenAI acquired Software Applications Incorporated, the maker of Sky, to accelerate the development of interfaces that understand context, adapt to intent, and act across apps. Sky’s macOS layer sees what’s on screen and executes tasks. Its team joins OpenAI to bake these capabilities into ChatGPT.

Sky turns the Mac into a cooperative workspace for writing, planning, coding, and daily tasks. It can control native apps, invoke workflows, and ground actions in on-screen context. That tight integration now becomes a core pillar of ChatGPT’s product roadmap.

OpenAI says the goal is capability plus usability: not just answers, but actions completed in your tools. VP Nick Turley framed it as moving from prompts to productivity. Expect ChatGPT features that feel ambient, proactive, and native on desktop.

Sky’s founders say large language models finally enable intuitive, customisable computing. CEO Ari Weinstein described Sky as a layer that ‘floats’ over your desktop, helping you think and create. OpenAI plans to bring that experience to hundreds of millions of users.

A disclosure notes that a fund associated with Sam Altman held a passive stake in Software Applications Incorporated. Nick Turley and Fidji Simo led the deal. OpenAI’s independent Transaction and Audit Committees reviewed and approved the acquisition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

At UMN, AI meets ethics, history, and craft

AI is remaking daily life, but it can’t define what makes us human. The liberal arts help us probe ethics, meaning, and power as algorithms scale. At the University of Minnesota Twin Cities, that lens anchors curiosity with responsibility.

In the College of Liberal Arts, scholars are treating AI as both a tool and a textbook. They test its limits, trace its histories, and surface trade-offs around bias, authorship, and agency. Students learn to question design choices rather than just consume outputs.

Linguist Amanda Dalola, who directs the Language Center, experiments with AI as a language partner and reflective coach. Her aim isn’t replacement but augmentation: faster feedback, broader practice, richer cultural context. The point is discernment: when to use, when to refuse.

Statistician Galin Jones underscores the scaffolding beneath the hype. ‘You cannot do AI without statistics,’ he tells students, so the School of Statistics emphasises inference, uncertainty, and validation. Graduates leave fluent in models, and in the limits of what models claim.

Composer Frederick Kennedy’s opera I am Alan Turing turns theory into performance. By staging Turing’s questions about machine thought and human identity, the work fuses history, sound design, and code. Across philosophy, music, and more, CLA frames AI as a human story first.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft revives friendly AI helper with Mico

Microsoft has unveiled a new AI companion called Mico, designed to replace the infamous Clippy as the friendly face of its Copilot assistant. The animated avatar, shaped like a glowing flame or blob, reacts emotionally and visually during conversations with users.

Executives said Mico aims to balance warmth and utility, offering human-like cues without becoming intrusive. Unlike Clippy, the character can easily be switched off and is intended to feel supportive rather than persistent or overly personal.

Mico’s launch reflects growing debate about personality in AI assistants as tech firms navigate ethical concerns. Microsoft stressed that its focus remains on productivity and safety, distancing itself from flirtatious or emotionally manipulative AI designs seen elsewhere.

The character will first appear in US versions of Copilot on laptops and mobile apps. Microsoft also revealed an AI tutoring mode for students, reinforcing its efforts to create more educational and responsibly designed AI experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gigawatt-scale AI marks Anthropic’s next compute leap

Anthropic will massively expand its use of Google Cloud, planning to deploy up to one million TPUs and bring well over a gigawatt of compute capacity online in 2026. The multiyear investment, worth tens of billions of dollars, is intended to accelerate research and product development.

Google Cloud CEO Thomas Kurian said Anthropic’s move reflects TPUs’ price-performance and efficiency, citing ongoing innovations and the seventh-generation ‘Ironwood’ TPU. Google will add capacity and drive further efficiency across its accelerator portfolio.

Anthropic now serves over 300,000 business customers, with large accounts up nearly sevenfold year over year. Added compute will meet demand while enabling deeper testing, alignment research, and responsible deployment at a global scale.

CFO Krishna Rao said the expansion keeps Claude at the frontier for Fortune 500 companies and AI-native startups alike. Increased capacity ensures reliability as usage and mission-critical workloads grow rapidly.

Anthropic’s diversified strategy spans Google TPUs, Amazon Trainium, and NVIDIA GPUs. It remains committed to Amazon as its primary training partner, including Project Rainier’s vast US clusters, and will continue investing to advance model capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Teen protections already include PG-13-guided responses and restrictions on sensitive discussions, such as self-harm or eating disorders. The company said it also uses AI detection systems to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to AU$825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to AU$49.5 million, marking one of the toughest online safety enforcement regimes globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Train your own language model for $100 with NanoChat

Andrej Karpathy has unveiled NanoChat, an open-source framework that lets users train a small-scale language model for around $100 in just a few hours. Designed for accessibility and education, the project offers a simplified path into AI model development without requiring large-scale hardware.

Running on a single GPU node, NanoChat automates the full training process, from tokenisation and pretraining to fine-tuning and deployment, using a single script. The resulting model, with about 1.9 billion parameters trained on 38 billion tokens, is capable of basic reasoning, text generation, and code completion.
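
For a sense of what that single script compresses, here is a deliberately tiny PyTorch sketch of the same tokenise, pretrain, and sample loop at toy scale. It is not NanoChat’s own code: the corpus, the bigram model, and the hyperparameters below are invented purely for illustration.

```python
# A minimal illustrative sketch only, not NanoChat's actual code: a toy
# character-level next-token model showing the tokenise -> pretrain -> sample
# loop that frameworks like NanoChat automate at far larger scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "hello world, hello language models"            # stand-in training corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}              # "tokenisation": char -> id
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class Bigram(nn.Module):
    """Predicts the next token from the current one via a lookup table of logits."""
    def __init__(self, vocab_size):
        super().__init__()
        self.logits = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx):
        return self.logits(idx)                         # (batch, vocab_size) logits

model = Bigram(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

# "Pretraining": minimise cross-entropy on next-token prediction.
for step in range(500):
    i = torch.randint(0, len(data) - 1, (32,))
    x, y = data[i], data[i + 1]
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Deployment": sample a short continuation from the trained model.
idx = torch.tensor([stoi["h"]])
out = ["h"]
for _ in range(20):
    probs = F.softmax(model(idx), dim=-1)
    idx = torch.multinomial(probs, num_samples=1).squeeze(1)
    out.append(itos[int(idx)])
print("".join(out))
```

A real run swaps the toy corpus for billions of tokens, the bigram table for a transformer, and the sampling loop for a served chat interface, which is the part NanoChat packages into one script.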

The framework’s compact 8,000-line Python codebase is readable and modifiable, encouraging users to experiment with model design and performance benchmarks such as MMLU and ARC. Released under the MIT Licence, NanoChat provides open access to documentation and scripts on GitHub, making it an ideal resource for students, researchers, and AI enthusiasts eager to learn how language models work.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CraftGPT: Language model built in Minecraft

A Minecraft creator known as Sammyuri has unveiled CraftGPT, an in-game language model powered entirely by redstone circuits. Built from nearly 439 million blocks, the project turns Minecraft into a working simulation of AI.

CraftGPT was trained on a small conversational dataset called TinyChat and works within a context window of just 64 tokens, a minuscule scale compared with modern large language models. Despite its simplicity, it demonstrates how AI can turn input into structured responses.

The model works by translating player inputs into redstone signals that flow through logic gates and memory circuits. It’s a creative and educational blend of engineering and imagination, showing how fundamental AI concepts can be explored inside a virtual game world.
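
To give a rough feel for what those circuits are doing, the sketch below is an illustrative Python example, not code taken from CraftGPT, showing how the simple logic gates that redstone components emulate can be composed into an adder, one of the low-level arithmetic blocks any gate-based neural network depends on.

```python
# Illustrative only, not taken from CraftGPT: a sketch of how basic logic gates
# (the kind redstone components emulate) can be wired into arithmetic,
# here a ripple-carry adder.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus a carry using only gates, returning (sum_bit, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 5 (101) + 3 (011), written little-endian, gives 8 (1000).
print(add_bits([1, 0, 1], [1, 1, 0]))   # -> [0, 0, 0, 1]
```

In Minecraft, each of these gates corresponds to a physical redstone arrangement, which is why the full build runs to hundreds of millions of blocks.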

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!