New AI feature lets WordPress users build sites in minutes

WordPress.com has introduced a new AI website builder designed to help users create an entire website in just minutes.

Available now in early access, the feature allows anyone with a WordPress.com account to try it out free of charge. It uses a conversational interface that responds to user prompts to generate complete sites, including written content, images, colour schemes, and layouts.

Users begin by describing what kind of website they need—whether a blog, portfolio, or business site—and the AI does the rest.

The more specific the initial description, the more tailored the outcome will be. If the first version isn’t quite right, users can continue refining their site simply by chatting with the builder. Once the result is satisfactory, the website can be published directly through WordPress.com.

Currently limited to basic websites, the new tool does not yet support complex features such as ecommerce or external integrations. WordPress has indicated that more functionality is coming soon.

The generated sites remain fully customisable using the usual WordPress tools, giving users full control over editing and manual adjustments post-creation.

At launch, users get 30 free prompts before needing to choose a hosting plan, with pricing starting at $18 per month.

While similar AI tools have been introduced by platforms like Wix and Squarespace, WordPress’s version brings such technology to a significantly wider audience, given that the platform powers over 40% of all websites worldwide.

For more information on these topics, visit diplomacy.edu.

Number of victims of AI-driven sex crimes in Korea continues to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing portion of victims, including children under 10, were targeted due to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

With over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.

AI feud intensifies as OpenAI sues Elon Musk

OpenAI has filed a countersuit against Elon Musk, accusing the billionaire entrepreneur of a sustained campaign of harassment intended to damage the company and regain control over its AI developments.

The legal filing comes in response to Musk’s lawsuit earlier this year, in which he claimed OpenAI had strayed from its founding mission of developing AI for the benefit of humanity.

In its countersuit, OpenAI urged a federal court to block Musk from taking further ‘unlawful and unfair actions’ and hold him accountable for the alleged damage already inflicted.

The company cited press attacks, legal pressure, and social media posts to Musk’s 200 million followers as tactics aimed at undermining its operations and reputation.

It also described Musk’s demands for corporate records and attempted acquisition efforts as part of a broader scheme to derail OpenAI’s progress.

The legal conflict highlights the growing rivalry between OpenAI and xAI, the AI firm Musk launched in 2023.

OpenAI maintains that Musk’s actions are motivated by self-interest and a desire to slow down a competing organisation. A jury trial has been scheduled for spring 2026 to resolve the escalating dispute.

Microsoft pauses $1 billion data centre project in Ohio

Microsoft has announced it is ‘slowing or pausing’ some data centre construction projects, including a $1 billion plan in Ohio, amid shifting demand for AI infrastructure.

The company confirmed it would halt early-stage development on rural land in Licking County, near Columbus, and will repurpose two of the sites for farmland.

The decision follows Microsoft’s rapid scaling of infrastructure to meet the soaring demand for AI and cloud services, which has since softened. The company acknowledged that such large projects require continuous adaptation to align with customer needs.

While Microsoft did not specify other paused projects, it revealed the suspension of later stages of a Wisconsin data centre expansion.

The slowdown also coincides with changes in Microsoft’s partnership with OpenAI, with the two companies revising their agreement to allow OpenAI to build its own AI infrastructure. This move reflects broader trends in AI computing, which remains expensive and energy-intensive.

Despite the pause in Ohio, Microsoft plans to invest over $80 billion in AI infrastructure this fiscal year, continuing its global expansion, though it will now strategically pace its growth to align with evolving business priorities.

Local officials in Licking County expressed their disappointment, as the area had been a hub for significant tech investments, including those from Google and Meta.

IBM pushes towards quantum advantage in two years with breakthrough code

IBM’s Quantum CTO, Oliver Dial, predicts that quantum advantage, where quantum computers outperform classical ones on specific tasks, could be achieved within two years.

The milestone is seen as possible due to advances in error mitigation techniques, which enable quantum computers to provide reliable results despite their inherent noise. While full fault-tolerant quantum systems are still years away, IBM’s focus on error mitigation could bring real-world results soon.

A key part of IBM’s progress is the introduction of the ‘Gross code,’ a quantum error correction method that drastically reduces the number of physical qubits needed per logical qubit, making the engineering of quantum systems much more feasible.

Dial described the development as a game changer that improves both efficiency and practicality, making quantum systems easier to build and test. The Gross code reduces the need for large, cumbersome arrays of qubits, streamlining the path toward more powerful quantum computers.

Looking ahead, IBM’s roadmap outlines ambitious goals, including building a fully error-corrected system with 200 logical qubits by 2029. Dial stressed the importance of flexibility in the roadmap, acknowledging that the path to these goals could shift but would still lead to the achievement of quantum milestones.

The company’s commitment to these advancements reflects the dedication of the quantum team, many of whom have been working on the project for over a decade.

Despite the excitement and the challenges that remain, IBM’s vision for the future of quantum computing is clear: building the world’s first useful quantum computers.

The company’s ongoing work in quantum computing continues to capture imaginations, with significant steps being taken towards making these systems a reality in the near future.

Google unveils new AI agent toolkit

This week at Google Cloud Next in Las Vegas, Google revealed its latest push into ‘agentic AI’: software designed to act independently, perform tasks, and communicate with other digital systems.

Central to this effort is the Agent Development Kit (ADK), an open-source toolkit said to let developers build AI agents in under 100 lines of code.

Instead of requiring complex systems, the ADK includes pre-built connectors and a so-called ‘agent garden’ to streamline integration with data platforms like BigQuery and AlloyDB.

Google also introduced a new Agent2Agent (A2A) protocol, aimed at enabling cooperation between agents from different vendors. With over 50 partners, including Accenture, SAP and Salesforce, already involved, the company hopes to establish a shared standard for AI interaction.

Powering these tools is Google’s latest AI chip, Ironwood, a seventh-generation TPU promising tenfold performance gains over earlier models. These chips, designed for use with advanced models like Gemini 2.5, reflect Google’s ambition to dominate AI infrastructure.

Despite the buzz, analysts caution that the hype around AI agents may outpace their actual utility. While vendors like Microsoft, Salesforce and Workday push agentic AI to boost revenue, in some cases even replacing staff, experts argue that current models still fall short of real human-like intelligence.

Instead of widespread adoption, businesses are expected to focus more on managing costs and complexity, especially as economic uncertainty grows. Without strong oversight, these tools risk becoming costly, unpredictable, and difficult to scale.

Google pushes AI limits with Ironwood

Google has announced Ironwood, its latest and most advanced AI processor, marking the seventh generation of its custom Tensor Processing Unit (TPU) architecture.

Designed specifically for the growing demands of its Gemini models, particularly those requiring complex simulated reasoning, which Google refers to as ‘thinking’, Ironwood represents a significant leap forward in performance.

Instead of relying solely on software updates, Google is highlighting how hardware like Ironwood plays a central role in boosting AI capabilities, ushering in what it calls the ‘age of inference.’

This TPU is not just faster but dramatically more scalable. Ironwood chips will operate in tightly connected clusters of up to 9,216 units, each liquid-cooled and linked through an enhanced Inter-Chip Interconnect.

These chips can also be deployed in smaller 256-chip servers, offering flexibility for cloud developers and researchers.

Instead of offering modest improvements, Ironwood delivers a peak throughput of 4,614 teraflops per chip, alongside 192GB of memory and 7.2 terabits per second of bandwidth, making it vastly superior to its predecessor, Trillium.
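Taken at face value, those per-chip figures imply an enormous aggregate number for a full cluster. A quick back-of-the-envelope calculation (assuming peak throughput simply scales linearly across all 9,216 chips, which real workloads never achieve) illustrates the scale:

```python
# Back-of-the-envelope aggregate peak throughput for a full Ironwood
# cluster, using the per-chip figure reported above. This assumes
# ideal linear scaling across chips, so it is an upper bound only.
chips_per_cluster = 9_216
teraflops_per_chip = 4_614

total_teraflops = chips_per_cluster * teraflops_per_chip
total_exaflops = total_teraflops / 1_000_000  # 1 exaflop = 1,000,000 teraflops

print(f"{total_exaflops:.1f} exaflops")  # roughly 42.5 exaflops at theoretical peak
```

That headline number is a theoretical ceiling, not sustained performance, but it gives a sense of why Google frames these clusters in supercomputer terms.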

Google says this advancement is more than a performance boost; it’s a foundation for building AI agents that can act on a user’s behalf by gathering information and producing outputs proactively.

Rather than functioning as passive tools, AI systems powered by Ironwood are intended to behave more independently, reflecting a growing trend toward what Google calls ‘agentic AI.’

While Google’s comparison to supercomputers like El Capitan may be flawed due to differing hardware standards, there’s no doubt Ironwood is a substantial upgrade. The company claims it is twice as powerful per watt as the v5p TPU, even if the newer Trillium (v6) chip wasn’t included in the comparison.

Regardless, Ironwood is expected to power the next generation of AI breakthroughs, as the company prepares to move beyond its current Gemini 2.5 model.

Virtual AI agents tested in social good experiment

Nonprofit organisation Sage Future has launched an unusual initiative that puts AI agents to work for philanthropy.

In a recent experiment backed by Open Philanthropy, four AI models, including OpenAI’s GPT-4o and two of Anthropic’s Claude Sonnet models, were tasked with raising money for a charity of their choice. Within a week, they collected $257 for Helen Keller International, which supports global health efforts.

The AI agents were given a virtual workspace where they could browse the internet, send emails, and create documents. They collaborated through group chats and even launched a social media account to promote their campaign.

Though most donations came from human spectators observing the experiment, the exercise revealed the surprising resourcefulness of these AI tools. One Claude model even generated profile pictures using ChatGPT and let viewers vote on their favourite.

Despite occasional missteps, including agents pausing for no reason or becoming distracted by online games, the experiment offered insights into the emerging capabilities of autonomous systems.

Sage’s director, Adam Binksmith, sees this as just the beginning, with future plans to introduce conflicting agent goals, saboteurs, and larger oversight systems to stress-test AI coordination and ethics.

Brinc Drones raises $75M to boost emergency drone tech

Brinc Drones, a Seattle-based startup founded by 25-year-old Blake Resnick, has secured $75 million in fresh funding led by Index Ventures.

Known for its police and public safety drones, Brinc is scaling its presence across emergency services, with the new funds bringing total investment to over $157 million. The round also includes participation from Motorola Solutions, a major player in US security infrastructure.

The company, founded in 2017, is part of a growing wave of American drone startups benefiting from tightened restrictions on Chinese drone manufacturers.

Brinc’s drones are designed for rapid response in hard-to-reach areas and boast unique features, such as the ability to break windows or deliver emergency supplies.

The new partnership with Motorola will enable tighter integration into 911 call centres, allowing AI systems to dispatch drones directly to emergency scenes.

Despite growing competition from other US startups like Flock Safety and Skydio, Brinc remains confident in the market’s potential.

With its enhanced funding and Motorola collaboration, the company is aiming to position itself as a leader in AI-integrated public safety technology while helping shift drone manufacturing back to the US.

ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems have access to the formatting of official documents, with accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.
