Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.
The initiative, based in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.
The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.
By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support the more responsible development of digital technologies rather than approaches that overlook societal impact.
HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.
An expert panel considered how generative AI can improve daily life while also increasing dependency, and encouraged users to move towards a more intentional and balanced relationship with AI systems.
UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.
The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Toronto’s traffic congestion, among the worst in North America, continues to frustrate commuters, with drivers spending an average of about 100 hours in traffic each year.
Experts and city officials are now considering artificial intelligence-driven traffic signal optimisation as a key tool to improve traffic flows by dynamically adjusting signal timing across the city’s roughly 2,500 intersections.
AI systems could analyse real-time traffic patterns faster and more efficiently than manual control, helping reduce idle time, clear bottlenecks and support transit modes like the Finch West LRT by prioritising movement where needed.
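The core idea behind dynamic signal optimisation can be illustrated with a toy sketch (an illustrative simplification, not Toronto’s actual system): each cycle, the controller gives the green phase to the approach with the longest queue, reducing idle time on busy approaches.

```python
# Toy adaptive signal control: one intersection, queue-based green selection.
# Real deployments use richer inputs (sensor feeds, coordination across
# intersections, transit priority); this only shows the basic feedback loop.

def pick_green_phase(queues):
    """Return the approach with the most waiting vehicles."""
    return max(queues, key=queues.get)

def step(queues, arrivals, discharge_rate=5):
    """Advance one signal cycle: new arrivals join, the green approach drains."""
    green = pick_green_phase(queues)
    for approach, n in arrivals.items():
        queues[approach] = queues.get(approach, 0) + n
    queues[green] = max(0, queues[green] - discharge_rate)
    return green

queues = {"north": 8, "south": 3, "east": 12, "west": 5}
green = step(queues, arrivals={"north": 2, "east": 1})
print(green, queues)  # the busiest approach ("east") gets the green
```

Scaling this feedback loop to roughly 2,500 coordinated intersections, with real-time sensor data replacing the hand-set queues, is where AI-driven optimisation is expected to outperform manual control.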
While details of Toronto’s broader congestion management plan are still being finalised, this high-tech approach is being positioned as one of the most promising ways to address chronic gridlock and improve overall mobility.
Scientists are increasingly applying generative AI models to address complex problems in materials science, such as predicting structures, simulating properties, and guiding the discovery of advanced materials with novel functions.
Traditional computational methods, such as density functional theory, can be slow and resource-intensive, whereas AI-based tools can learn from existing data and propose candidate materials more efficiently.
Early applications of these generative approaches include designing materials for energy storage, catalysis, and electronic applications, speeding up workflows that previously involved large amounts of trial and error.
Researchers emphasise that while AI does not yet replace physics-based modelling, it can complement it by narrowing the search space and suggesting promising leads for experimental validation.
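The propose-and-filter workflow described above can be sketched in a few lines (a toy illustration under assumed stand-ins: the “generator” and “surrogate” below are random placeholders, not real materials models):

```python
import random

# Toy generative-discovery loop: a generator proposes candidate
# compositions, a cheap surrogate score narrows the search space,
# and only the shortlist would go on to expensive physics-based
# validation (e.g. density functional theory).

random.seed(0)
ELEMENTS = ["Li", "Co", "Ni", "Mn", "O"]  # hypothetical element pool

def propose_candidate():
    """Stand-in for a generative model: sample a 3-element composition."""
    return tuple(sorted(random.sample(ELEMENTS, 3)))

def surrogate_score(candidate):
    """Stand-in for a learned property predictor (cheap to evaluate)."""
    return sum(len(e) for e in candidate) + random.random()

candidates = {propose_candidate() for _ in range(50)}
shortlist = sorted(candidates, key=surrogate_score, reverse=True)[:5]
print(shortlist)  # top candidates forwarded to experimental validation
```

The design point is the division of labour: the generator and surrogate are fast and approximate, so slow physics-based methods are spent only on the few candidates that survive the filter.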
The work reflects a broader trend of AI-augmented science, where machine learning and generative models act as accelerators for discovery across disciplines such as chemistry, physics and bioengineering.
India has unveiled a plan to offer foreign cloud providers a tax holiday until 2047 on revenues from services sold abroad, provided the workloads are run from Indian data centres. The move aims to attract AI investment despite power and water shortages.
Major US tech companies, including Google, Microsoft and Amazon, have pledged billions of dollars to expand AI-focused data centres in India. Domestic operators are also increasing capacity, with large projects announced in Andhra Pradesh and other states.
The government has boosted incentives for electronics and semiconductor manufacturing, critical minerals, and cross-border e-commerce. These measures aim to integrate India more deeply into global technology supply chains.
Analysts warn that execution risks remain, including energy shortages, land access and regulatory hurdles. Observers say the tax holiday and incentives reflect a strategic bet on establishing India as a global hub for AI and cloud computing.
A proposal filed with the US Federal Communications Commission seeks approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for artificial intelligence computing, according to documents submitted by SpaceX.
The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.
In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.
Regulators are unlikely to approve the full scale immediately, with analysts viewing the figure as a negotiating position. The US FCC recently authorised thousands of additional Starlink satellites while delaying approval for a larger proposed expansion.
Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.
Social media platforms are increasingly filled with AI-generated slop created to maximise engagement. The rapid spread has been fuelled by easy access to generative tools and algorithm-driven promotion.
Users across major platforms are pushing back, frequently calling out fake or misleading posts in comment sections. In many cases, criticism of AI slop draws more attention than the original content.
Technology companies acknowledge concerns about low-quality AI media but remain reluctant to impose strict limits. Platform leaders argue that new formats are often criticised before gaining wider acceptance.
Researchers warn that repeated exposure to AI slop may contribute to what they describe as ‘brain rot’, reducing attention and discouraging content verification. The risk becomes more serious when fabricated visuals shape public opinion or circulate as news.
Doha Debates, an initiative of Qatar Foundation, hosted a town hall examining the ethical, political, and social implications of rapidly advancing AI. The discussion reflected growing concern that AI capabilities could outpace human control and existing governance frameworks.
Held at Multaqa in Education City, the forum gathered students, researchers, and international experts to assess readiness for rapid technological change. Speakers offered contrasting views, highlighting both opportunity and risk as AI systems grow more powerful.
Philosopher and transhumanist thinker Max More argued for continued innovation guided by reason and proportionate safeguards, warning against fear-driven stagnation.
By contrast, computer scientist Roman Yampolskiy questioned whether meaningful control over superintelligent systems is realistic, cautioning that widening intelligence gaps could undermine governance entirely.
Nabiha Syed, executive director of the Mozilla Foundation, focused on accountability and social impact. She urged broader public participation and transparency, particularly as AI deployment risks reinforcing existing inequalities across societies.
OpenAI has confirmed that several legacy AI models will be removed from ChatGPT, with GPT-4o scheduled for retirement on 13 February. The decision follows months of debate after the company reinstated the model amid strong user backlash.
Alongside GPT-4o, the models being withdrawn include GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini. The changes apply only to ChatGPT; developers will retain access to the models through OpenAI’s API.
GPT-4o had built a loyal following for its natural writing style and emotional awareness, with many users arguing newer models felt less expressive. When OpenAI first attempted to phase it out in 2025, widespread criticism prompted a temporary reversal.
Company data now suggests active use of GPT-4o has dropped to around 0.1% of daily users. OpenAI says features associated with the model have since been integrated into GPT-5.2, including personality tuning and creative response controls.
Despite this, criticism has resurfaced across social platforms, with users questioning usage metrics and highlighting that GPT-4o was no longer prominently accessible. Comments from OpenAI leadership acknowledging recent declines in writing quality have further fuelled concerns about the model’s removal.
AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.
Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.
Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.
Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.
Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.
SpaceX has acquired Elon Musk’s AI company xAI, bringing xAI’s Grok chatbot and the X social platform under the SpaceX umbrella in a deal that further consolidates Musk’s privately held businesses. Investor and media accounts of the transaction put the combined valuation around $1.25 trillion, reflecting SpaceX’s scale in launch services and Starlink, alongside xAI’s rapid growth in the AI market.
The tie-up is pitched as a way to integrate AI development with SpaceX’s communications infrastructure and space hardware, including ambitions to push computing beyond Earth. The companies argue that the power and cooling demands of AI, if met mainly through terrestrial data centres, will strain electricity supply and local environments, and that space-based systems could become part of a longer-term answer.
At the same time, Grok and X have faced mounting scrutiny over AI-generated harms, including non-consensual sexualised deepfakes, prompting investigations and renewed pressure on safeguards and enforcement. That backdrop adds regulatory and reputational risk to a structure that now ties AI tooling to a mass-distribution platform and to a company with major government and national-security-adjacent business lines.