xAI plans $20 billion data centre investment in Mississippi

US AI company xAI plans to establish a large-scale data centre in Southaven, Mississippi, representing an investment of more than $20 billion. The project is expected to create several hundred permanent jobs across DeSoto County.

xAI has acquired an existing facility, located near energy and computing infrastructure already linked to the company, that will be refurbished for data centre operations.

Once operational, the Southaven site is expected to expand the company’s overall computing capacity significantly.

State and local authorities approved incentive measures for the project, including tax exemptions available to certified data centres.

Officials indicated that the investment is expected to contribute to local tax revenues supporting public services and infrastructure, while operations are scheduled to begin in February 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Robotics industry sets out key trends for 2026

The global market for industrial robot installations reached a record $16.7bn in 2025. The International Federation of Robotics expects further growth, driven by technological change and labour pressures.

AI-driven autonomy is becoming central to robotics development, enabling machines to learn tasks and operate independently. Agentic AI combines analytical and generative models to improve decision-making in complex environments.

Robots are also becoming more versatile as IT and operational systems converge across factories and logistics. Humanoid robots are moving beyond prototypes, with reliability and efficiency now critical for industrial adoption.

Safety, cybersecurity and workforce acceptance remain key challenges for the sector. Industry leaders see robots as allies addressing labour shortages while governments expand skills and retraining programmes.


EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order falls under the Digital Services Act after concerns Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.


New York launches 2026 with new AI proposals

New York is beginning 2026 with a renewed push to shape how AI is used, focusing on consumer protection while continuing to attract major tech investment. The move follows the recent signing of the RAISE Act, a landmark law aimed at enhancing safety standards for advanced AI models, and signals that state leaders intend to remain active in AI governance this year.

Governor Kathy Hochul has unveiled a new package of proposals, primarily aimed at protecting children online. The measures would expand age verification requirements, set safer default settings on social media platforms for minors, limit certain AI chatbot features for children, and give parents greater control over their children’s financial transactions. The proposals, part of Hochul’s annual ‘State of the State’ agenda, must still pass the state legislature before becoming law.

At the same time, New York is positioning itself as a welcoming environment for AI and semiconductor development. Hochul recently announced a $33 million research and development expansion in Manhattan by London-based AI company ElevenLabs.

In addition, Micron is expected to begin construction later this month on a massive semiconductor facility in White Plains, part of a broader $100 billion investment plan that underscores the state’s ambitions in advanced technology and manufacturing.

Beyond child safety and economic development, state officials are also focusing on how algorithms affect everyday costs. Attorney General Letitia James is investigating Instacart over allegations that its pricing systems charge different customers different prices for the same products.

The probe follows the implementation of New York’s Algorithmic Pricing Disclosure Act, which took effect late last year, requiring companies to be more transparent about the use of automated pricing tools.

The attorney general’s office is also examining broader accountability issues tied to AI systems, including reports involving the misuse of generative AI. Together, these actions underscore New York’s commitment to addressing voter concerns regarding affordability, safety, and transparency, while also harnessing the economic potential of rapidly evolving AI technologies.


Why data centres are becoming a flashpoint in US towns

As AI and cloud computing drive unprecedented demand for digital infrastructure, Big Tech’s rapid expansion of data centres is increasingly colliding with resistance at the local level. Across the United States, communities are pushing back against large-scale facilities they say threaten their quality of life, environment, and local character.

Data centres, massive complexes packed with servers and supported by vast energy and water resources, are multiplying quickly as companies race to secure computing power and proximity to electricity grids. But as developers look beyond traditional tech hubs and into suburbs, small towns, and rural areas, they are finding residents far less welcoming than anticipated.

What were once quiet municipal board meetings are now drawing standing-room-only crowds. Residents argue that data centres bring few local jobs while consuming enormous amounts of electricity and water, generating constant noise, and relying on diesel generators that can affect air quality. In farming communities, the loss of open land and agricultural space has become a significant concern, as homeowners worry about declining property values and potential health risks.

Opposition efforts are becoming more organised and widespread. Community groups increasingly share tactics online, learning from similar struggles in other states. Yard signs, door-to-door campaigns, and legal challenges have become common tools for advocacy. According to industry observers, resistance has reached a level unprecedented in infrastructure development.

Tracking groups report that dozens of proposed data centre projects worth tens of billions of dollars have recently been delayed or blocked due to local opposition and regulatory hurdles. In some US states, more than half of proposed developments are now encountering significant pushback, forcing developers to reconsider timelines, locations, or even entire projects.

Electricity costs are a major concern fuelling public anger. In regions already experiencing rising utility bills, residents fear that large data centres will further strain power grids and push prices even higher.

Water use is another flashpoint, particularly in areas that rely on wells and aquifers. Environmental advocates warn that long-term impacts are still poorly understood, leaving communities to shoulder the risks.

The growing resistance is having tangible consequences for the industry. Developers say uncertainty around zoning approvals and public support is reshaping investment strategies. Some companies are choosing to sell sites once they secure access to power, often the most valuable part of a project, rather than risk prolonged local battles that could ultimately derail construction.

Major technology firms, including Microsoft, Google, Amazon, and Meta, have largely avoided public comment on the mounting opposition. However, Microsoft has acknowledged in regulatory filings that community resistance and local moratoriums now represent a material risk to its infrastructure plans.

Industry representatives argue that misinformation has contributed to public fears, claiming that modern data centres are far cleaner and more efficient than critics suggest. In response, trade groups are urging developers to engage with communities earlier, be more transparent, and highlight the economic benefits, such as tax revenue and infrastructure investment. Promises of water conservation, energy efficiency, and community funding have become central to outreach efforts.

In some communities, frustration has been amplified by revelations that plans were discussed quietly among government agencies and utilities long before residents were informed. Once disclosed, these projects have sparked accusations of secrecy, accelerating public distrust and mobilisation.

Despite concessions and promises of further dialogue, many opponents say their fight is far from over. As demand for data centres continues to grow, the clash between global technology ambitions and local community concerns is shaping up to be one of the defining infrastructure battles of the digital age.


AI sovereignty test in South Korea reaches a critical phase

South Korea’s flagship AI foundation model project has entered a decisive phase after accusations that leading participants relied on foreign open source components instead of building systems entirely independently.

The controversy has reignited debate over how ‘from scratch’ development should be defined within government-backed AI initiatives aimed at strengthening national sovereignty.

Scrutiny has focused on Naver Cloud after developers identified near-identical similarities between its vision encoder and models released by Alibaba, alongside disclosures that audio components drew on OpenAI technology.

The dispute now sits with the Ministry of Science and ICT, which must determine whether independence applies only to a model’s core or extends to all major components.

The outcome is expected to shape South Korea’s AI strategy by balancing deeper self-reliance against the realities of global open-source ecosystems.


X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.


Gmail enters the Gemini era with AI-powered inbox tools

Google is reshaping Gmail around its Gemini AI models, aiming to turn email into a proactive assistant for more than three billion users worldwide.

With inbox volumes continuing to rise, the focus shifts towards managing information flows instead of simply sending and receiving messages.

New AI Overviews allow Gmail to summarise long email threads and answer natural language questions directly from inbox content.

Users can retrieve details from past conversations without complex searches, while conversation summaries roll out globally at no cost, with advanced query features reserved for paid AI subscriptions.

Writing tools are also expanding, with Help Me Write, upgraded Suggested Replies, and Proofread features designed to speed up drafting while preserving individual tone and style.

Deeper personalisation is planned through connections with other Google services, enabling emails to reflect broader user context.

A redesigned AI Inbox further prioritises urgent messages and key tasks by analysing communication patterns and relationships.

Powered by Gemini 3, these features begin rolling out in the US in English, with additional languages and regions scheduled to follow during 2026.


EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act entered into force in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.


Netomi shows how to scale enterprise AI safely

Netomi has developed a blueprint for scaling enterprise AI, utilising GPT-4.1 for rapid tool use and GPT-5.2 for multi-step reasoning. The platform supports complex workflows, policy compliance, and heavy operational loads, serving clients such as United Airlines and DraftKings.

The company emphasises three core lessons. First, systems must handle real-world complexity, orchestrating multiple APIs, databases, and tools to maintain state and situational awareness across multi-step workflows.

Second, parallelised architectures ensure low latency even under extreme demand, keeping response times fast and reliable during spikes in activity.

Third, governance is embedded directly into the runtime, enforcing compliance, protecting sensitive data, and providing deterministic fallbacks when AI confidence is low.
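The third lesson, falling back to deterministic behaviour when AI confidence is low, can be sketched in a few lines of Python. The threshold, function name, and responses below are illustrative assumptions for the general pattern, not Netomi’s actual implementation.

```python
# Sketch of a confidence-gated deterministic fallback (illustrative only,
# not Netomi's API). The runtime checks the model's confidence before
# letting its answer reach the user.

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cut-off; tuned per deployment


def answer_with_fallback(query: str, model_answer: str, confidence: float) -> str:
    """Return the model's answer only when confidence clears the threshold;
    otherwise return a deterministic, auditable fallback response."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return model_answer
    # Deterministic fallback: the same query always produces the same
    # output, keeping low-confidence behaviour predictable and auditable.
    return f"I'm not certain about '{query}'. Routing you to a human agent."
```

Because the fallback depends only on the query, not on the model output, low-confidence behaviour is reproducible in audits regardless of what the model generated.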

Netomi demonstrates how agentic AI can be safely scaled, providing enterprises with a model for auditable, predictable, and resilient intelligent systems. These practices serve as a roadmap for organisations seeking to move AI from experimental tools to production-ready infrastructure.
