AI reshapes India IT services outlook

India’s $300bn outsourcing industry is facing mounting pressure as AI tools threaten to disrupt traditional business models. A recent sell-off in technology stocks reflects investor concern over automation replacing labour-intensive services.

Fears intensified after new AI tools demonstrated the ability to automate legal, compliance and data processes. Analysts warn such advances could reduce demand for routine IT services and reshape client engagements.

Industry leaders in India argue AI will also create opportunities, particularly in consulting and system modernisation. Firms expect partnerships with AI developers to drive new areas of growth despite near-term disruption.

Revenue growth may slow, and hiring could remain subdued as the sector adapts. Analysts in India expect a gradual shift towards outcome-based services while companies invest in new AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Advanced AI education unlocks powerful opportunities across Africa

Advanced AI education is expanding across Africa. Google DeepMind has launched new courses to support the next generation of technical learners and reduce the gap between AI talent and opportunities on the continent.

At the same time, the initiative is supported by targeted funding. Google.org is providing $4 million to train lecturers and develop educational toolkits, aiming to strengthen local capacity and scale AI education.

Moreover, the curriculum focuses on practical and technical skills. Learners gain hands-on experience with generative AI models and transformers, including building and fine-tuning language models, moving beyond basic AI literacy.

In addition, the programme is adapted to African contexts. Developed with input from local experts and institutions, such as the African Institute for Mathematical Sciences, the courses include real-world use cases relevant to the continent.

Furthermore, the initiative aims to address Africa’s underrepresentation in AI research. By expanding access to advanced training, it seeks to increase participation and ensure more inclusive global AI development.

Finally, the programme is designed to scale through educators and institutions. Universities and NGOs can integrate the curriculum, supported by training programmes that equip educators to deliver AI courses effectively.

AI in filmmaking raises job fears as creative roles face pressure

Growing concern over AI in filmmaking emerged at a major conference, where veteran director Steven Spielberg rejected its use as a replacement for human creativity. He emphasised that storytelling should remain in human hands rather than being driven by automation.

Rapid advances in AI video tools have unsettled the industry, raising fears among editors and visual effects workers. Joshua Davies, chief innovation officer at a video platform, pointed to concerns over jobs, copyright and future production methods.

Current tools remain limited, particularly when handling complex camera movements or maintaining consistency across scenes. AI is instead being used to support production by filling gaps where footage cannot be filmed due to time or budget limits.

Studios are already exploring how AI can be integrated into production pipelines following recent disruptions. A fast and low-cost Super Bowl advert highlighted its potential, although human creative input remained essential.

Lower production costs are expected, but full automation is still unlikely in the near term. AI could help independent creators compete, while strong storytelling continues to define success.

New Microsoft Purview tools target data oversharing and AI governance

Microsoft has announced new integrations between Microsoft Purview and Microsoft Fabric, aimed at helping organisations identify AI-driven data risks, prevent sensitive data from being overshared, and strengthen governance across their data estates.

The updates come as enterprises accelerate AI adoption and face growing pressure to ensure that the data powering those systems is both protected and trustworthy.

Key new capabilities include Data Loss Prevention policies for Fabric workloads such as Warehouse and databases, and Insider Risk Management tools that can detect risky actions such as unauthorised data exports from Fabric lakehouses. New preview features for managing AI data exposure include the ability to identify sensitive data appearing in Copilot prompts and responses.

Data Security Posture Management tools provide risk assessments to surface unprotected assets and recommend corrective action.

On the governance side, updates to the Microsoft Purview Unified Catalog introduce centralised workflows for data owners to control the publication of data products and run quality checks on unmanaged assets, enabling faster validation at scale.

Microsoft describes the combined offering as an ‘integrated and unified foundation’ that allows organisations to innovate with AI whilst keeping their data protected, governed, and trusted.

AI agents test limits of EU rules

AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.

Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.

Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.

Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.

Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.

Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus is preparing workers and students for labour market changes. Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.

xAI faces lawsuit over alleged misuse of AI image generation

Legal action has been filed against xAI in a US federal court, with plaintiffs alleging that its AI system Grok was used to generate harmful and explicitly manipulated images of minors.

The lawsuit claims that xAI failed to implement adequate safeguards to prevent the creation of such content, despite similar protections adopted by other AI developers.

According to the filing, the technology enabled the transformation of real images into explicit material without sufficient restrictions.

Plaintiffs seek to establish a class action, arguing that the company should be held accountable for both direct and third-party uses of its models. Legal arguments focus on whether responsibility extends to external applications built using the same underlying AI systems.

The case also highlights broader regulatory challenges surrounding AI-generated content, particularly the difficulty of preventing misuse when systems can modify real images. Questions around platform liability, safety standards, and enforcement are likely to shape future policy discussions.

Growing scrutiny of AI developers reflects increasing concern over how generative systems are deployed, especially in contexts involving sensitive or harmful content.

Green light for massive UK AI tech park

North Lincolnshire Council has granted outline planning permission for the Elsham Tech Park, a proposed AI data centre campus that would rank among the largest of its kind in the UK.

At full build-out, the site would include up to 15 hyperscale data centre buildings across 176 hectares, delivering more than 1.5 million square metres of floorspace and up to 1GW of computing capacity.

The development is expected to cost between £5.5 billion and £7.5 billion to build and could attract up to £10 billion in private investment over its lifetime.

Developer Greystoke plans to begin construction in 2027, with the first phase due to open in 2029, and the full campus to be delivered in phases over approximately a decade.

The project is also required to source at least 30% of build costs from businesses within a 30-mile radius, injecting an estimated £1.65 billion to £2.25 billion into the local economy.

The scheme received over 380 letters of objection from residents and environmental groups. Critics raised concerns including loss of privacy for neighbouring properties, round-the-clock noise and light, and the scale of carbon emissions, with one campaign group estimating that woodland twice the area of Wales would be needed to offset the development’s environmental impact.

Permission was nonetheless granted unanimously by councillors.

NVIDIA expands physical AI ecosystem to accelerate real-world robotics

Partnerships across the robotics sector are positioning NVIDIA at the centre of what is increasingly described as ‘physical AI’, a shift towards intelligent machines capable of perceiving, reasoning and acting in real environments.

A new generation of tools, including NVIDIA Cosmos world models and updated NVIDIA Isaac simulation frameworks, aims to support developers in training and validating robots before deployment.

These systems enable companies to simulate complex environments, reducing the risks and costs of real-world testing.

Industrial robotics leaders such as ABB Robotics, KUKA, and FANUC are integrating NVIDIA technologies into digital twin environments, enabling more accurate modelling of production lines and automation systems.

Advances are also extending into humanoid robotics, where companies are using AI models to develop machines capable of more flexible and adaptive behaviour.

New foundation models, including GR00T systems, are designed to give robots general-purpose capabilities instead of limiting them to specific tasks.

Healthcare and logistics represent additional areas of deployment, with robotics platforms being tested in surgical systems, warehouse automation and manufacturing environments. These applications highlight how physical AI could reshape industries requiring precision, safety and scalability.

Growing collaboration across cloud providers, manufacturers and AI developers suggests that robotics is moving toward a more integrated ecosystem, where simulation, data generation and deployment are increasingly interconnected.
