India has unveiled a plan offering foreign cloud providers a tax holiday until 2047: zero tax on revenues from services sold abroad, provided the workloads run from Indian data centres. The move aims to attract AI investment despite the country’s power and water shortages.
Major US tech companies, including Google, Microsoft and Amazon, have pledged billions of dollars to expand AI-focused data centres in India. Domestic operators are also increasing capacity, with large projects announced in Andhra Pradesh and other states.
The government has boosted incentives for electronics and semiconductor manufacturing, critical minerals, and cross-border e-commerce. These measures aim to integrate India more deeply into global technology supply chains.
Analysts warn that execution risks remain, including energy shortages, land access and regulatory hurdles. Observers say the tax holiday and incentives reflect a strategic bet on establishing India as a global hub for AI and cloud computing.
SpaceX has filed a proposal with the US Federal Communications Commission seeking approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for AI computing.
The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.
In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.
Regulators are unlikely to approve the full constellation immediately; analysts view the one-million figure as a negotiating position. The FCC recently authorised thousands of additional Starlink satellites while delaying approval of a larger proposed expansion.
Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.
Social media platforms are increasingly filled with AI-generated ‘slop’, low-quality content produced to maximise engagement. Its rapid spread has been fuelled by easy access to generative tools and algorithm-driven promotion.
Users across major platforms are pushing back, frequently calling out fake or misleading posts in comment sections. In many cases, criticism of AI slop draws more attention than the original content.
Technology companies acknowledge concerns about low-quality AI media but remain reluctant to impose strict limits. Platform leaders argue that new formats are often criticised before gaining wider acceptance.
Researchers warn that repeated exposure to AI slop may contribute to what they describe as ‘brain rot’, reducing attention and discouraging content verification. The risk becomes more serious when fabricated visuals shape public opinion or circulate as news.
Doha Debates, an initiative of Qatar Foundation, hosted a town hall examining the ethical, political, and social implications of rapidly advancing AI. The discussion reflected growing concern that AI capabilities could outpace human control and existing governance frameworks.
Held at Multaqa in Education City, the forum gathered students, researchers, and international experts to assess readiness for rapid technological change. Speakers offered contrasting views, highlighting both opportunity and risk as AI systems grow more powerful.
Philosopher and transhumanist thinker Max More argued for continued innovation guided by reason and proportionate safeguards, warning against fear-driven stagnation.
By contrast, computer scientist Roman Yampolskiy questioned whether meaningful control over superintelligent systems is realistic, cautioning that widening intelligence gaps could undermine governance entirely.
Nabiha Syed, executive director of the Mozilla Foundation, focused on accountability and social impact. She urged broader public participation and transparency, particularly as AI deployment risks reinforcing existing inequalities across societies.
OpenAI has confirmed that several legacy AI models will be removed from ChatGPT, with GPT-4o scheduled for retirement on 13 February. The decision follows months of debate after the company reinstated the model amid strong user backlash.
Alongside GPT-4o, the models being withdrawn include GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini. The changes apply only to ChatGPT, while developers will continue to access the models through OpenAI’s API.
GPT-4o had built a loyal following for its natural writing style and emotional awareness, with many users arguing newer models felt less expressive. When OpenAI first attempted to phase it out in 2025, widespread criticism prompted a temporary reversal.
Company data now suggests that only around 0.1% of daily users still actively use GPT-4o. OpenAI says features associated with the model have since been integrated into GPT-5.2, including personality tuning and creative response controls.
Despite this, criticism has resurfaced across social platforms, with users disputing the usage figures and pointing out that GPT-4o was no longer prominently accessible in the interface. Comments from OpenAI leadership acknowledging recent declines in writing quality have further fuelled concerns about the model’s removal.
AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.
Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.
Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.
Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.
Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.
SpaceX has acquired Elon Musk’s AI company xAI, bringing xAI’s Grok chatbot and the X social platform under the SpaceX umbrella in a deal that further consolidates Musk’s privately held businesses. Investor and media accounts of the transaction put the combined valuation at around $1.25 trillion, reflecting SpaceX’s scale in launch services and Starlink, alongside xAI’s rapid growth in the AI market.
The tie-up is pitched as a way to integrate AI development with SpaceX’s communications infrastructure and space hardware, including ambitions to push computing beyond Earth. The companies argue that the power and cooling demands of AI, if met mainly through terrestrial data centres, will strain electricity supply and local environments, and that space-based systems could become part of a longer-term answer.
At the same time, Grok and X have faced mounting scrutiny over AI-generated harms, including non-consensual sexualised deepfakes, prompting investigations and renewed pressure on safeguards and enforcement. That backdrop adds regulatory and reputational risk to a structure that now ties AI tooling to a mass-distribution platform and to a company with major government and national-security-adjacent business lines.
A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.
The Institute for Public Policy Research (IPPR) said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.
The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.
IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while outlets that restrict scraping, such as the BBC, appear far less often.
The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.
The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.
The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.
The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.
Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.
She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.
Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.
In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.
Massachusetts Institute of Technology researchers have developed a compact ultrasound system designed to make breast cancer screening more accessible and frequent, particularly for people at higher risk.
The portable device could be used in doctors’ offices or at home, helping detect tumours earlier than current screening schedules allow.
The system pairs a small ultrasound probe with a lightweight processing unit to deliver real-time 3D images via a laptop. Researchers say its portability and low power use could improve access in rural areas where traditional ultrasound machines are impractical.
Frequent monitoring is critical, as aggressive interval cancers can develop between routine mammograms and account for up to 30% of breast cancer cases.
By enabling regular ultrasound scans without specialised technicians or bulky equipment, the technology could catch more tumours at an early stage, when survival outcomes are significantly higher.
Initial testing successfully produced clear, gap-free 3D images of breast tissue, and larger clinical trials are now underway at partner hospitals. The team is developing a smaller version that could connect to a smartphone and be integrated into a wearable device for home use.