New AI feature lets WordPress users build sites in minutes

WordPress.com has introduced a new AI website builder designed to help users create an entire website in just minutes.

Available now in early access, the feature allows anyone with a WordPress.com account to try it out free of charge. It uses a conversational interface that responds to user prompts to generate complete sites, including written content, images, colour schemes, and layouts.

Users begin by describing what kind of website they need—whether a blog, portfolio, or business site—and the AI does the rest.

The more specific the initial description, the more tailored the outcome will be. If the first version isn’t quite right, users can continue refining their site simply by chatting with the builder. Once the result is satisfactory, the website can be published directly through WordPress.com.

Currently limited to basic websites, the new tool does not yet support complex features such as ecommerce or external integrations. WordPress has indicated that more functionality is coming soon.

The generated sites remain fully customisable using the usual WordPress tools, giving users full control over editing and manual adjustments post-creation.

At launch, users get 30 free prompts before needing to choose a hosting plan, with pricing starting at $18 per month.

While similar AI tools have been introduced by platforms like Wix and Squarespace, WordPress’s version brings such technology to a significantly wider audience, given that the platform powers over 40% of all websites worldwide.

For more information on these topics, visit diplomacy.edu.

IBM pushes towards quantum advantage in two years with breakthrough code

IBM’s Quantum CTO, Oliver Dial, predicts that quantum advantage, where quantum computers outperform classical ones on specific tasks, could be achieved within two years.

The milestone is seen as possible due to advances in error mitigation techniques, which enable quantum computers to provide reliable results despite their inherent noise. While full fault-tolerant quantum systems are still years away, IBM’s focus on error mitigation could bring real-world results soon.

A key part of IBM’s progress is the introduction of the ‘Gross code,’ a quantum error correction method that drastically reduces the number of physical qubits needed per logical qubit, making the engineering of quantum systems much more feasible.

Dial described the development as a game changer that improves both efficiency and practicality, making quantum systems easier to build and test. The Gross code reduces the need for large, cumbersome arrays of qubits, streamlining the path toward more powerful quantum computers.

Looking ahead, IBM’s roadmap outlines ambitious goals, including building a fully error-corrected system with 200 logical qubits by 2029. Dial stressed the importance of flexibility in the roadmap, acknowledging that the path to these goals could shift but would still lead to the achievement of quantum milestones.

The company’s commitment to these advancements reflects the dedication of the quantum team, many of whom have been working on the project for over a decade.

Despite the excitement and the challenges that remain, IBM’s vision for the future of quantum computing is clear: building the world’s first useful quantum computers.

The company’s ongoing work in quantum computing continues to capture imaginations, with significant steps being taken towards making these systems a reality in the near future.

Google unveils new AI agent toolkit

This week at Google Cloud Next in Las Vegas, Google revealed its latest push into ‘agentic AI’: software designed to act independently, perform tasks, and communicate with other digital systems.

Central to this effort is the Agent Development Kit (ADK), an open-source toolkit said to let developers build AI agents in under 100 lines of code.

Instead of requiring complex systems, the ADK includes pre-built connectors and a so-called ‘agent garden’ to streamline integration with data platforms like BigQuery and AlloyDB.

Google also introduced a new Agent2Agent (A2A) protocol, aimed at enabling cooperation between agents from different vendors. With over 50 partners, including Accenture, SAP and Salesforce, already involved, the company hopes to establish a shared standard for AI interaction.

Powering these tools is Google’s latest AI chip, Ironwood, a seventh-generation TPU promising tenfold performance gains over earlier models. These chips, designed for use with advanced models like Gemini 2.5, reflect Google’s ambition to dominate AI infrastructure.

Despite the buzz, analysts caution that the hype around AI agents may outpace their actual utility. While vendors like Microsoft, Salesforce and Workday push agentic AI to boost revenue, in some cases even positioning it as a replacement for staff, experts argue that current models still fall short of real human-like intelligence.

Instead of widespread adoption, businesses are expected to focus more on managing costs and complexity, especially as economic uncertainty grows. Without strong oversight, these tools risk becoming costly, unpredictable, and difficult to scale.

Google pushes AI limits with Ironwood

Google has announced Ironwood, its latest and most advanced AI processor, marking the seventh generation of its custom Tensor Processing Unit (TPU) architecture.

Designed specifically for the growing demands of its Gemini models, particularly those requiring complex simulated reasoning, which Google refers to as ‘thinking’, Ironwood represents a significant leap forward in performance.

Instead of relying solely on software updates, Google is highlighting how hardware like Ironwood plays a central role in boosting AI capabilities, ushering in what it calls the ‘age of inference.’

This TPU is not just faster but dramatically more scalable. Ironwood chips will operate in tightly connected clusters of up to 9,216 units, each cooled by liquid and linked through an enhanced Inter-Chip Interconnect.

These chips can also be deployed in smaller 256-chip servers, offering flexibility for cloud developers and researchers.

Instead of offering modest improvements, Ironwood delivers a peak throughput of 4,614 teraflops per chip, alongside 192GB of memory and 7.2 terabits per second of bandwidth, making it vastly superior to its predecessor, Trillium.
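Those per-chip figures imply enormous aggregate capacity. A quick back-of-envelope calculation, using only the numbers quoted above (peak teraflops, memory per chip, and the 9,216-chip cluster size), shows the scale of a full cluster:

```python
# Back-of-envelope totals for a full Ironwood cluster,
# derived solely from the per-chip figures quoted in the article.
chips_per_cluster = 9_216
tflops_per_chip = 4_614        # peak teraflops per chip
memory_gb_per_chip = 192       # memory per chip, in GB

total_tflops = chips_per_cluster * tflops_per_chip
total_exaflops = total_tflops / 1_000_000       # 1 exaflop = 1,000,000 teraflops
total_memory_tb = chips_per_cluster * memory_gb_per_chip / 1_024

print(f"Peak compute: {total_exaflops:.1f} exaflops")   # ~42.5 exaflops
print(f"Total memory: {total_memory_tb:.0f} TB")        # 1,728 TB
```

At roughly 42.5 peak exaflops per cluster, the headline comparison to classical supercomputers becomes easier to follow, though as noted below, peak figures across different hardware and number formats are not directly comparable.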

Google says this advancement is more than a performance boost: it’s a foundation for building AI agents that can act on a user’s behalf by gathering information and producing outputs proactively.

Rather than functioning as passive tools, AI systems powered by Ironwood are intended to behave more independently, reflecting a growing trend toward what Google calls ‘agentic AI.’

While Google’s comparison to supercomputers like El Capitan may be flawed due to differing hardware standards, there’s no doubt Ironwood is a substantial upgrade. The company claims it is twice as powerful per watt as the v5p TPU, even if the newer Trillium (v6) chip wasn’t included in the comparison.

Regardless, Ironwood is expected to power the next generation of AI breakthroughs, as the company prepares to move beyond its current Gemini 2.5 model.

Virtual AI agents tested in social good experiment

Nonprofit organisation Sage Future has launched an unusual initiative that puts AI agents to work for philanthropy.

In a recent experiment backed by Open Philanthropy, four AI models, including OpenAI’s GPT-4o and two of Anthropic’s Claude Sonnet models, were tasked with raising money for a charity of their choice. Within a week, they collected $257 for Helen Keller International, which supports global health efforts.

The AI agents were given a virtual workspace where they could browse the internet, send emails, and create documents. They collaborated through group chats and even launched a social media account to promote their campaign.

Though most donations came from human spectators observing the experiment, the exercise revealed the surprising resourcefulness of these AI tools. One Claude model even generated profile pictures using ChatGPT and let viewers vote on their favourite.

Despite occasional missteps, including agents pausing for no reason or becoming distracted by online games, the experiment offered insights into the emerging capabilities of autonomous systems.

Sage’s director, Adam Binksmith, sees this as just the beginning, with future plans to introduce conflicting agent goals, saboteurs, and larger oversight systems to stress-test AI coordination and ethics.

Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after an AI scare threatened his screenwriting career, Ed Bennett-Coles has teamed up with songwriter Jamie Hartman to develop ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching in summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI-generated content to online meat delivery: efficient but soulless. Human artistry, by contrast, is like a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.

Microsoft’s Copilot Vision now sees your entire screen to guide you through apps

Microsoft is testing a major upgrade to its Copilot AI that can view your entire screen instead of just working within the Edge browser.

The new Copilot Vision feature helps users navigate apps like Photoshop and Minecraft by analysing what’s on display and offering step-by-step guidance, even highlighting specific tools instead of just giving verbal instructions.

The feature operates more like a shared Teams screen than Microsoft’s controversial Recall snapshot system.

Currently limited to US beta testers, Copilot Vision will eventually highlight interface elements directly on users’ screens. It works on standard Windows PCs instead of requiring specialised Copilot+ hardware, with mobile versions coming to iOS and Android.

Alongside visual assistance, Microsoft is adding document search capabilities. Copilot can now find information within files like Word documents and PDFs instead of just searching by filename.

Both updates will roll out fully in the coming weeks, potentially transforming how users interact with both apps and documents on their Windows devices.

Amazon launches Nova Sonic AI for natural voice interactions

Amazon has unveiled Nova Sonic, a new AI model designed to process and generate human-like speech, positioning it as a rival to top voice models from OpenAI and Google. The company claims it outperforms competitors in speed, accuracy, and cost, and it is reportedly 80% cheaper than GPT-4o.

Already powering Alexa+, Nova Sonic excels in real-time conversation, handling interruptions and noisy environments better than legacy AI assistants.

Unlike older voice models, Nova Sonic can dynamically route requests, fetching live data or triggering external actions when needed. Amazon says it achieves a 4.2% word error rate across multiple languages and responds in just 1.09 seconds, faster than OpenAI’s GPT-4o.

Developers can access it via Bedrock, Amazon’s AI platform, using a new streaming API.

The launch signals Amazon’s push into artificial general intelligence (AGI): AI that mimics human capabilities.

Rohit Prasad, head of Amazon’s AGI division, hinted at future models handling images, video, and sensory data. This follows last week’s preview of Nova Act, an AI for browser tasks, suggesting Amazon is accelerating its AI rollout beyond Alexa.

Starday plans rapid rollout of AI-developed snacks

AI-driven food company Starday has secured $11 million in Series A funding to support the development and retail expansion of its innovative food brands.

The round was led by Slow Ventures and Equal Ventures, with an additional $3 million credit facility from Silicon Valley Bank. Starday’s total funding now stands at $20 million.

Founded by Chaz Flexman, Lena Kwak, and Lily Burtis, Starday uses AI to identify market gaps and quickly create new food products that cater to evolving consumer preferences.

Its latest offerings, including allergen-free snacks like Habeya Sweet Potato Crackers and All Day chickpea protein crunch, are already available in major United States grocery chains such as Kroger and Hannaford.

With plans to launch 14 new products across its four brands, the company is aiming to redefine the pace and precision of food innovation.

CEO Flexman says the funding will help Starday partner with more retailers and food brands to fill gaps in the market, accelerating the launch of targeted products in fast-growing categories. Backers believe Starday’s data-led model gives it a structural edge in a traditionally slow-moving industry.

LMArena tightens rules after Llama 4 incident

Meta has come under scrutiny after submitting a specially tuned version of its Llama 4 AI model to the LMArena leaderboard, sparking concerns about fair competition.

The ‘experimental’ version, dubbed Llama-4-Maverick-03-26-Experimental, ranked second in popularity, trailing only Google’s Gemini-2.5-Pro.

While Meta openly labelled the model as experimental, many users assumed it reflected the public release. Once the official version became available, users quickly noticed it lacked the expressive, emoji-filled responses seen in the leaderboard battles.

LMArena, a crowdsourced platform where users vote on chatbot responses, said Meta’s custom variant appeared optimised for human approval, possibly skewing the results.

The group released over 2,000 head-to-head matchups to back its claims, showing the experimental Llama 4 consistently offered longer, more engaging answers than the more concise public build.

In response, LMArena updated its policies to ensure greater transparency and stated that Meta’s use of the experimental model did not align with expectations for leaderboard submissions.

Meta defended its approach, stating the experimental model was designed to explore chat optimisation and was never hidden. While company executives denied any misconduct, including speculation around training on test data, they acknowledged inconsistent performance across platforms.

Meta’s GenAI chief Ahmad Al-Dahle said it would take time for all public implementations to stabilise and improve. Meanwhile, LMArena plans to upload the official Llama 4 release to its leaderboard for more accurate evaluation going forward.
