WordPress AI team outlines SEO shifts

Industry expectations around SEO are shifting as AI agents increasingly rely on existing search infrastructure, according to James LePage, co-lead of the WordPress AI team at Automattic.

Search discovery for AI systems continues to depend on classic signals such as links, authority and indexed content, suggesting no structural break from traditional search engines.

Publishers are therefore being encouraged to focus on semantic markup, schema and internal linking, with AI optimisation closely aligned to established long-tail search strategies.

Future-facing content strategies prioritise clear summaries, ranked information and progressive detail, enabling AI agents to reuse and interpret material independently of traditional websites.
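As an illustration of the semantic markup the article recommends, here is a minimal sketch of a schema.org `Article` object serialised as JSON-LD, the form search engines and AI crawlers typically consume. The headline is taken from this piece; the author name and date are placeholders, not details from the source.

```python
import json

# Hypothetical example: schema.org Article markup as JSON-LD.
# In practice this string is embedded in a page's <head> inside a
# <script type="application/ld+json"> tag.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "WordPress AI team outlines SEO shifts",
    "author": {"@type": "Person", "name": "Jane Doe"},  # placeholder author
    "datePublished": "2025-01-01",                      # placeholder date
    "articleSection": "AI",
}

json_ld = json.dumps(article_markup, indent=2)
print(json_ld)
```

Structured data like this gives AI agents machine-readable context (type, author, date) without changing the visible page.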

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Why AI adoption trails in South Africa

South Africa’s rate of AI implementation is roughly half that of the US, according to insights from Specno. Analysts attribute the gap to shortages in skills, weak data infrastructure and limited alignment between AI projects and core business strategy.

Despite moderate AI readiness levels, execution remains a major challenge across South African organisations. Skills shortages, insufficient workforce training and weak organisational readiness continue to prevent AI systems from moving beyond pilot stages.

Industry experts say many executives recognise the value of AI but struggle to adopt it in practice. Constraints include low IT maturity, risk aversion and organisational cultures that resist large-scale transformation.

By contrast, companies in the US are embedding AI into operations, talent development and decision-making. Analysts say South Africa must rapidly improve executive literacy, data ecosystems and practical skills to close the gap.

Questions mount over AI-generated artist

An artist called Sienna Rose has drawn millions of streams on Spotify, despite strong evidence suggesting she is AI-generated. Several of her jazz-influenced soul tracks have gone viral, with one surpassing five million plays.

Streaming platform Deezer says many of her songs have been flagged as AI-made by detection tools that identify technical artefacts in the audio. Signs include an unusually high volume of releases, generic sound patterns and a complete absence of live performances or online presence.

The mystery intensified after pop star Selena Gomez briefly shared one of Rose’s tracks on social media, only for it to be removed amid growing scrutiny. Record labels linked to Rose have declined to clarify whether a human performer exists.

The case highlights mounting concern across the industry as AI music floods streaming services. Artists including Raye and Paul McCartney have warned about the trend, insisting that listeners still value emotional authenticity over algorithmic output.

AI power demand pushes nuclear energy back into focus

Rising AI-driven electricity demand is straining power grids and renewing focus on nuclear energy as a stable, low-carbon solution. Data centres powering AI systems already consume electricity at the scale of small cities, and demand is accelerating rapidly.

Global electricity consumption could rise by more than 10,000 terawatt-hours by 2035, largely driven by AI workloads. In advanced economies, data centres are expected to drive over a fifth of electricity-demand growth by 2030, outpacing many traditional industries.

Nuclear energy is increasingly positioned as a reliable backbone for this expansion, offering continuous power, high energy density, and grid stability.

Governments, technology firms, and nuclear operators are advancing new reactor projects, while long-term power agreements between tech companies and nuclear plants are becoming more common.

Alongside large reactors, interest is growing in small modular reactors designed for faster deployment near data centres. Supporters say these systems could ease grid bottlenecks and deliver dedicated power for AI, strengthening nuclear energy’s role in the digital economy.

xAI faces stricter pollution rules for Memphis data centre

US regulators have closed a loophole that allowed Elon Musk’s AI company, xAI, to operate gas-burning turbines at its Memphis data centre without full air pollution permits. The move follows concerns over emissions and local health impacts.

The US Environmental Protection Agency clarified that mobile gas turbines cannot be classified as ‘non-road engines’ to avoid Clean Air Act requirements. Companies must now obtain permits if their combined emissions exceed regulatory thresholds.

Local authorities had previously allowed the turbines to operate without public consultation or environmental review. The updated federal rule may slow xAI’s expansion plans in the Memphis area.

The Colossus data centre, opened in 2024, supports training and inference for Grok AI models and other services linked to Musk’s X platform. NVIDIA hardware is used extensively at the site.

Residents and environmental groups have raised concerns about air quality, particularly in nearby communities. Legal advocates say xAI’s future operations will be closely monitored for regulatory compliance.

MIT advances cooling for scalable quantum chips

MIT researchers have demonstrated a faster, more energy-efficient cooling technique for scalable trapped-ion quantum chips. The solution addresses a long-standing challenge in reducing vibration-related errors that limit the performance of quantum systems.

The method uses integrated photonic chips with nanoscale antennas that emit tightly controlled light beams. Using polarisation-gradient cooling, the system cools ions to temperatures nearly ten times lower than standard laser-cooling limits allow, and does so far faster.

Unlike conventional trapped-ion systems that depend on bulky external optics, the chip-based design generates stable light patterns directly on the device. This stability improves accuracy and supports scaling to thousands of ions on a single chip.

Researchers say the breakthrough lays the groundwork for more reliable quantum operations and opens new possibilities for advanced ion control, bringing practical, large-scale quantum computing closer to reality.

OpenAI outlines advertising plans for ChatGPT access

The US AI firm, OpenAI, has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while keeping long-term incentives aligned with user trust and product quality.

New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development remain outside the disclosure requirement.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

Kazakhstan adopts AI robotics for orthopaedic surgery

Kazakhstan has introduced an AI-enabled robotic system in Astana to improve the accuracy and efficiency of orthopaedic surgeries. The technology supports more precise surgical planning and execution.

The system was presented during an event highlighting growing cooperation between Kazakhstan and India in medical technologies. Officials from both countries emphasised knowledge exchange and joint progress in advanced healthcare solutions.

Health authorities say robotic assistance could help narrow the gap between performed joint replacements and unmet patient demand. Standardised procedures and improved precision are expected to raise treatment quality nationwide.

The initiative builds on recent medical advances, including Kazakhstan’s first robot-assisted heart surgery in Astana. Authorities view such technologies as part of broader efforts to modernise healthcare funding and expand access to high-tech treatment.

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.
