Google commits $40 billion to expand Texas AI infrastructure

Google will pour $40 billion into Texas by 2027 to expand its digital infrastructure. The funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.

Three new US data centres are planned: one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will also create a $30 million Energy Impact Fund to support energy efficiency and affordability projects in Texas. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas’s position as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.

New blueprint aims to ensure fair AI in democratic processes

A rights-centred AI blueprint highlights the growing use of AI in analysing citizen submissions during public participation, promising efficiency but raising questions about fairness, transparency and human rights. Experts caution that poorly designed AI could silence minority voices, deepen inequalities and weaken trust in democratic decision-making.

The European Centre for Not-for-Profit Law (ECNL) provides detailed guidance for governments, civil society organisations and technology developers on how to implement AI responsibly. Recommendations include conducting human rights impact assessments, involving marginalised communities from the design stage, testing AI accuracy across demographics, and ensuring meaningful human oversight at every stage.

Transparency and accountability are key pillars of the framework, which provides guidance on publishing assessments, documenting AI decision-making processes and mitigating bias. Experts stress that efficiency gains should never come at the expense of inclusiveness, and that AI tools must be monitored and updated continually to reflect community feedback and rights considerations.

The blueprint also emphasises collaboration and sustainability, urging multistakeholder governance, civil society co-design, and ongoing training for public servants and developers. Implemented deliberately and inclusively, with rights, transparency and community engagement at the centre, AI in public participation can enhance citizen voices rather than undermine them.

AI supports doctors in spotting broken bones

Hospitals in Lincolnshire, UK, are introducing AI to assist doctors in identifying fractures and dislocations, with the aim of speeding up treatment and improving patient care. The Northern Lincolnshire and Goole NHS Foundation Trust will launch a two-year NHS England pilot later this month.

AI software will provide near-instant annotated X-rays alongside standard scans, highlighting potential issues for clinicians to review. Patients under the age of two, as well as those undergoing chest, spine, skull, facial or soft tissue imaging, will not be included in the pilot.

Consultants emphasise that AI is an additional tool, not a replacement, and clinicians will retain the final say on diagnosis and treatment. Early trials in northern Europe suggest the technology can help meet rising demand, and the trust is monitoring its impact closely.

Digital accessibility drives revenue as AI adoption rises

Research highlights that digital accessibility is now viewed as a driver of business growth rather than a compliance requirement.

A survey of more than 1,600 professionals across the US, UK and Europe found that 75% of organisations linked accessibility improvements to revenue gains, while 91% reported an enhanced user experience and 88% noted brand reputation benefits.

AI is playing an increasingly central role in accessibility initiatives. More than 80% of organisations now use AI tools to support accessibility, particularly in mature programmes with formal policies, accountability structures, and dedicated budgets.

Leaders in these organisations view AI as a force multiplier, complementing human expertise rather than replacing it. Despite this progress, many organisations still address accessibility late in the digital development process, with only around 28% considering it during planning and 27% during design.

Leadership support and effective training emerged as key success factors. Organisations with engaged executives and strong accessibility training were far more likely to achieve revenue and operational benefits while reducing perceived legal risk.

As AI adoption accelerates and regulatory frameworks expand, companies treating accessibility strategically are better positioned to gain competitive advantage.

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite new protections under the Online Safety Act coming into force in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to actively monitor their children’s online activity. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.

AI tools help eBay stage a comeback

eBay is deepening its investment in AI as part of a multi-year effort to revive the platform after years of stagnant growth.

The company, which saw renewed momentum during the pandemic, has launched five new AI features this year, including AI-generated shipping estimates, an AI shopping agent and a partnership with OpenAI.

Chief executive Jamie Iannone argues that eBay’s long history gives it an advantage in the AI era, citing decades of product listings, buyer behaviour data and more than two billion active listings. That data underpins tools such as the ‘magical listing’ feature, which automatically produces item descriptions from photos, and an AI assistant that answers buyer questions based on a listing’s details.

These tools are also aimed at unlocking supply: eBay says the average US household holds thousands of dollars’ worth of unused goods.

Analysts note that helping casual sellers overcome the friction of listing and photographing items could lift the company’s gross merchandise volume, which grew 10 percent in the most recent quarter.

AI is also reshaping the buyer experience. Around 70 percent of eBay transactions come from enthusiasts who already know how to navigate the platform. The new ‘eBay.ai’ tool is designed to help less experienced users by recommending products based on natural-language descriptions.

Despite this push, the platform still faces intense competition from Amazon, Google, Shein and emerging AI-shopping agents. Iannone has hinted that eBay may integrate with external systems such as OpenAI’s instant-checkout tools to broaden discovery beyond the platform.

How neurotech is turning science fiction into lived reality

Some experts now say neurotechnology could be as revolutionary as AI, as devices advance rapidly from sci-fi tropes into practical reality. Researchers can already translate thoughts into words through brain implants, and spinal implants are helping people with paralysis regain movement.

King’s College London neuroscientist Anne Vanhoestenberghe told AFP, ‘People do not realise how much we’re already living in science fiction.’

Her lab works on both brain and spinal implants, aiming not just to restore function but to reimagine communication.

At the same time, the technology carries profound ethical risks. There is growing unease about privacy, data ownership and the potential misuse of neural data.

Some even warn that our ‘innermost thoughts are under threat.’ Institutions like UNESCO are already moving to establish global neurotech governance frameworks.

ChatGPT launches group chats in Asia-Pacific pilot

OpenAI has introduced a new group chat feature in its ChatGPT app, currently piloted across Japan, New Zealand, South Korea and Taiwan. The rollout aims to test how users will interact in multi-participant conversations with the AI.

The pilot enables Free, Plus, and Team users on both mobile and web platforms to start or join group chats of up to 20 participants, where ChatGPT can participate as a member.

Human-to-human messages do not count against AI usage quotas; usage is counted only when the AI replies. Group creators remain in charge of membership, invite links are used for access, and additional safeguards are applied when participants under the age of 18 are present.

This development marks a significant pivot from one-on-one AI assistants toward collaborative workflows, messaging and shared decision-making.

From a digital policy and governance perspective, this new feature raises questions around privacy, data handling in group settings, the role of AI in multi-user contexts and how usage quotas or model performance might differ across plans.

Most workers see AI risk but not for themselves

A new survey by YouGov and Udemy reveals that while workers across the US, UK, India and Brazil see AI as a significant economic force, many believe their own jobs are unlikely to be affected.

More than 4,500 adults were polled, and the results highlight a clear gap between concern for the broader economy and concern for one’s own job security.

In the UK, 70% of respondents expressed concern about AI’s impact on the economy, but only 39% worried about its effects on their own occupation.

Similarly, in the US, 72% feared wider economic effects, while only 47% were concerned about losing their own jobs. Experts suggest this reflects a psychological blind spot similar to early reactions to the internet.

The survey also highlighted a perceived AI skills gap, particularly in the UK, where 55% of workers had received no AI training. Many employees said they were aware of AI’s rise but lacked the motivation to develop new skills immediately, a phenomenon researchers describe as an ‘awareness-action gap’.

Salesforce unveils eVerse for dependable enterprise AI

Salesforce, the US cloud-based software company, and its AI research division have unveiled eVerse, a new environment designed to train voice and text agents through synthetic data generation, stress testing and reinforcement learning.

The platform aims to resolve a growing reliability problem known as jagged intelligence, where systems excel at complex reasoning yet falter during simple interactions.

The company views eVerse as a key requirement for creating an Agentic Enterprise, where human staff and digital agents work together smoothly and dependably.

eVerse supports continuous improvement by generating large volumes of simulated interactions, measuring performance and adjusting behaviour over time, rather than waiting for real-world failures.

The platform played a significant role in the development of Agentforce Voice, giving AI agents the capacity to cope with unpredictable calls involving noise, varied accents and weak connections.

Thousands of simulated conversations enabled teams to identify problems early and deliver stronger performance.

The technology is also being tested with UCSF Health, where clinical experts are working with Salesforce to refine agents that support billing services. Only a portion of healthcare queries can typically be handled automatically, as much of the knowledge remains undocumented.

eVerse enhances coverage by enabling agents to adapt to complex cases through reinforcement learning, thereby improving performance across both routine and sophisticated tasks.

Salesforce describes eVerse as a milestone in a broader effort to achieve Enterprise General Intelligence. The goal is a form of AI designed for dependable business use, instead of the more creative outputs that dominate consumer systems.

It also argues that trust and consistency will shape the next stage of enterprise adoption and that real-world complexity must be mirrored during development to guarantee reliable deployment.
