OpenAI is preparing to build a significant new data centre in India as part of its Stargate AI infrastructure initiative. The move will expand the company’s presence in Asia and strengthen its operations in its second-largest market by user base.
OpenAI has already registered as a legal entity in India and begun assembling a local team.
The company plans to open its first office in New Delhi later this year. Details regarding the exact location and timeline of the proposed data centre remain unclear, though CEO Sam Altman may provide further information during his upcoming visit to India.
The project represents a strategic step to support the company’s growing regional AI ambitions.
OpenAI’s Stargate initiative, announced by US President Donald Trump in January, involves private sector investment of up to $500 billion for AI infrastructure, backed by SoftBank, OpenAI, and Oracle.
The initiative seeks to develop large-scale AI capabilities across major markets worldwide, with the India data centre potentially playing a key role in these efforts.
The expansion highlights OpenAI’s focus on scaling its AI infrastructure while meeting regional demand. The company intends to strengthen operational efficiency, improve service reliability, and support its long-term growth in Asia by establishing local offices and a significant data centre.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft has unveiled two new AI models, marking a major step in its efforts to build its own technology rather than rely solely on OpenAI.
The first model, MAI-Voice-1, generates high-fidelity audio and supports both single and multi-speaker scenarios. Microsoft said the system can create a full minute of expressive audio in under a second on a single GPU, making it one of the fastest of its kind.
MAI-Voice-1 is already available in Copilot Daily and Podcasts, while Copilot Labs allows users to experiment with storytelling and speech demos. Microsoft sees voice as a vital interface for future AI companions.
The second model, MAI-1 Preview, is currently undergoing community testing on LMArena and will soon be integrated into selected Copilot use cases. Microsoft said it plans to expand its family of specialised models, aiming to orchestrate different systems to serve diverse user needs.
On 26 August 2025, the UN General Assembly (UNGA) adopted a resolution establishing two new mechanisms for global AI governance: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. The 40-member Panel will provide annual, evidence-based assessments of AI’s opportunities, risks, and impacts, while the Global Dialogue will serve as a platform for governments and relevant stakeholders to discuss international cooperation, exchange best practices, and foster inclusive discussions on AI governance.
The Dialogue will be launched during UNGA’s 80th session in September 2025 and will convene annually, alternating between Geneva and New York, alongside existing UN events. These mechanisms also aim to contribute to capacity development efforts on AI. The resolution also invites states and stakeholders to contribute resources, particularly to ensure participation from developing countries, and foresees that a review of both initiatives may happen at UNGA’s 82nd session.
Alphabet’s Google has announced a $9 billion investment in Virginia by 2026, reinforcing the state’s status as a key US data infrastructure hub. The plans include a new facility in Chesterfield County and expansions in Loudoun and Prince William counties to boost AI and cloud computing capacity. The investment, supported by Dominion Energy and expected to take up to seven years to become fully operational, follows a broader trend in which giants such as Microsoft, Amazon, Meta, and Alphabet are pouring hundreds of billions into AI projects. It also raises concerns about energy demand, which Google aims to address through efficiency measures and community funding.
INTERPOL’s ‘Serengeti 2.0’ operation across Africa led to over 1,200 arrests between June and August 2025, targeting ransomware, online fraud, and business email compromise schemes, and recovering nearly USD 100 million stolen from tens of thousands of victims. Authorities shut down illicit cryptocurrency mining sites in Angola, dismantled a massive crypto fraud scheme in Zambia, and uncovered a human trafficking network with forged passports in Lusaka.
OpenAI announced new safety measures for ChatGPT after a lawsuit accused the chatbot of contributing to a teenager’s suicide. The company plans to enhance detection of mental distress, improve safeguards in suicide-related conversations, add parental controls, and provide links to emergency services while addressing content filtering flaws. Regulators and mental health experts are intensifying scrutiny, warning that growing reliance on chatbots instead of professional care could endanger vulnerable users, especially children.
For the main updates, reflections and events, consult the RADAR, the READING CORNER and the UPCOMING EVENTS section below.
Join us as we connect the dots, from daily updates to main weekly developments, to bring you a clear, engaging monthly snapshot of worldwide digital trends.
AI’s rapid rise is reshaping how nations think about energy, opening the door to new partnerships that could redefine the path toward a cleaner and smarter future.
A new wave of Android malware deployed through fake utilities on the Play Store infected millions, using overlay attacks to harvest financial credentials and deploy adware.
By experimenting with AI edits without approval, YouTube has angered creators and renewed debates about trust, regulation and control in the age of AI.
Wheels, wagons, and metal turned herders into mobile nomads. With speed on their side, raiding – and empire-building – became possible. Aldo Matteucci writes.
AI is emerging as both a driver of environmental strain and a potential force for sustainable solutions, raising urgent questions about whether innovation and ecological responsibility can truly advance together.
ISOC Brazil webinar on the responsibility of intermediaries and changes in the US policy landscape. The webinar will promote an in-depth discussion about the
Declaring Independence in Cyberspace: Book Discussion Diplo’s Director of Digital Trade and Economic Security, Marilia Maciel, will provide comments and
The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.
Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid giving self-harm instructions and to redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise the reliability of these safeguards, underscoring the need for stronger protections.
The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.
OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.
The parents allege the chatbot isolated their son and contributed to his death earlier in the year.
The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.
It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.
OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.
The company added that content filtering errors, where serious risks are underestimated, will also be addressed.
The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.
Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.
Elon Musk’s xAI has filed a lawsuit in Texas accusing Apple and OpenAI of colluding to stifle competition in the AI sector.
The case alleges that both companies locked up markets to maintain monopolies, making it harder for rivals like X and xAI to compete.
The dispute follows Apple’s 2024 deal with OpenAI to integrate ChatGPT into Siri and other apps on its devices. According to the lawsuit, Apple’s exclusive partnership with OpenAI has prevented fair treatment of Musk’s products within the App Store, including the X app and xAI’s Grok app.
Musk previously threatened legal action against Apple over antitrust concerns, citing the company’s alleged preference for ChatGPT.
Musk, whose xAI acquired his social media platform X in an all-stock deal valuing X at $45 billion earlier in the year, is seeking billions of dollars in damages and a jury trial. The legal action highlights Musk’s ongoing feud with OpenAI’s CEO, Sam Altman.
Musk, a co-founder of OpenAI who left in 2018 after disagreements with Altman, has repeatedly criticised the company’s shift to a profit-driven model. He is also pursuing separate litigation against OpenAI and Altman over that transition in California.
OpenAI has announced plans to open its first office in India later this year, selecting New Delhi as the location. India is now ChatGPT’s second-largest market after the US and continues to experience rapid growth in user activity.
Weekly active users of ChatGPT in India have more than quadrupled over the past year, with students in India making up the service’s largest student user segment globally. CEO Sam Altman praised India’s talent pool and government support, stating the new office is key to building AI with and for India.
Union IT Minister Ashwini Vaishnaw welcomed the move, citing India’s AI mission and expanding digital infrastructure as a natural foundation for the partnership. OpenAI will also hold its first Education Summit in India later this month, aiming to further engage with students and educators nationwide.
OpenAI’s chief people officer, Julia Villagra, has left the company, marking the latest leadership change at the AI pioneer. Villagra, who joined the San Francisco firm in early 2024 and was promoted in March, previously led its human resources operations.
Her responsibilities will temporarily be overseen by chief strategy officer Jason Kwon, while chief applications officer Fidji Simo will lead the search for her successor.
OpenAI said Villagra is stepping away to pursue her personal interest in art, music and storytelling as tools to help people understand the shift towards artificial general intelligence, a stage when machines surpass human performance in most forms of work.
The departure comes as OpenAI navigates a period of intense competition for AI expertise. Microsoft-backed OpenAI is valued at about $300 billion, with a potential share sale set to raise that figure to $500 billion.
The company faces growing rivalry from Meta, where Mark Zuckerberg has reportedly offered $100 million signing bonuses to attract OpenAI talent.
While OpenAI expands, public concerns over the impact of AI on employment continue. A Reuters/Ipsos poll found 71% of Americans fear AI could permanently displace too many workers, despite the unemployment rate standing at 4.2% in July.
According to sworn interrogatory responses cited by OpenAI, Musk discussed possible financing arrangements with Zuckerberg as part of his takeover bid for OpenAI. Musk’s AI startup xAI, a competitor to OpenAI, did not respond to requests for comment.
In the filing, OpenAI asked a federal judge to order Meta to provide documents related to any bid for OpenAI, including internal communications about restructuring or recapitalisation. The firm argued these records could clarify motivations behind the bid.
Meta countered that such documents were irrelevant and suggested OpenAI seek them directly from Musk or xAI. A US judge ruled that Musk must face OpenAI’s claims of attempting to harm the company through public remarks and what it described as a sham takeover attempt.
The legal dispute follows Musk’s lawsuit against OpenAI and Sam Altman over its for-profit transition, with OpenAI filing a countersuit in April. A jury trial is scheduled for spring 2026.
Last Monday, Google agreed to pay a A$55 million (US$35.8 million) fine in Australia after regulators found it restricted competition by striking revenue-sharing deals with Telstra and Optus to pre-install its search app on Android phones, sidelining rival platforms. The Australian Competition and Consumer Commission (ACCC) said the arrangements, in place from 2019 to 2021, limited consumer choice and blocked competitors’ visibility. Google admitted that the deals harmed competition and pledged to drop similar practices, while Telstra and Optus confirmed that they no longer pursue such agreements. The settlement, which still requires court approval, comes amid wider legal and regulatory challenges for Google in Australia, including a recent loss in a case brought by Epic Games and growing scrutiny over its role in app distribution and social media access.
The United States and the European Union have agreed on a new Framework Agreement on Reciprocal, Fair, and Balanced Trade, aiming to reset one of the world’s largest trade relationships. The deal includes EU commitments to eliminate tariffs on US industrial goods, expand access for American agricultural and seafood products, and procure $750 billion in US energy exports and $40 billion in AI chips by 2028. In return, the US will cap tariffs on key EU goods, ease automobile tariffs, and pursue cooperation on steel, aluminium, and supply chain security. Both sides pledged deeper collaboration on defence procurement, digital trade, cybersecurity, sustainability rules, and standards harmonisation, while also working to resolve disputes over deforestation, carbon border taxes, and non-tariff barriers.
OpenAI CEO Sam Altman warned that the US risks underestimating China’s rapid AI progress, arguing that export controls on advanced semiconductors are an unreliable long-term solution. Speaking in San Francisco, he said chip restrictions and policy-driven approaches often fail due to workarounds, while China is quickly expanding its AI capacity and accelerating domestic alternatives through firms like Huawei.
On the same front, Nvidia is quietly developing a new AI chip for China, the B30A, based on its advanced Blackwell architecture, just as Washington debates how much US technology Beijing should be allowed to access. Positioned between the weaker H20 and the flagship B300, the B30A retains key features such as high-bandwidth memory and NVLink, making it more powerful than the scaled-down H20 currently approved for export to China while still staying within export limits. The move follows President Trump’s recent openness to allowing scaled-down chip sales to China, though bipartisan lawmakers remain wary of boosting Beijing’s AI capabilities. Nvidia, which relies on China for 13% of its revenue, also plans to release the lower-end RTX6000D for AI inference in September. Both steps reflect efforts to comply with US-China export policy while fending off rising domestic rivals such as Huawei, whose chips are improving but still lag in software and memory. Meanwhile, Chinese regulators have warned firms about potential security risks in Nvidia’s products, underscoring the political tensions shaping the company’s commercial strategy.
Private conversations with xAI’s chatbot Grok were unintentionally exposed online after its ‘share’ button generated public URLs that were indexed by Google and other search engines, raising serious concerns about user privacy and AI safety. The leaked chats included sensitive and dangerous content, from hacking crypto wallets to drug-making instructions, despite xAI’s ban on harmful use. The flaw, reminiscent of earlier issues on platforms such as ChatGPT, has damaged trust in xAI and highlighted the urgent need for stronger privacy safeguards, such as blocking search engines from indexing shared content and adopting privacy-by-design measures; without them, users may hesitate to engage with chatbots at all.
Meta is launching a new research lab dedicated to superintelligence, led by Scale AI founder Alexandr Wang, as part of its push to regain momentum in the global AI race after mixed results with its Llama models and ongoing talent losses. Mark Zuckerberg is reportedly considering a multibillion-dollar investment in Scale, signalling strong confidence in Wang’s approach, while the lab’s creation, separate from Meta’s FAIR division, underscores Meta’s shift toward partnerships with top AI players, mirroring strategies used by Microsoft, Amazon, and Google.
Japanese technology giant SoftBank has announced plans to buy a $2 billion stake in Intel, signalling a stronger push into the American semiconductor industry. The investment comes as Washington debates greater government involvement in the sector, with reports suggesting President Donald Trump is weighing a US government stake in the chipmaker. SoftBank will purchase Intel’s common stock at $23 per share. Its chairman, Masayoshi Son, said semiconductors remain the backbone of every industry and expressed confidence that advanced chip manufacturing will expand in the US, with Intel playing a central role.
The Frontier Stable Token marks the first government-backed stablecoin in the US, with Wyoming positioning itself as a leader in digital finance innovation.
With the V3.1 upgrade now live and the R1 label missing, observers are debating whether DeepSeek has postponed or abandoned its R2 reasoning model entirely.
The former Twitter chief executive argues that AI agents, rather than humans, will soon dominate the internet, with individuals likely to deploy dozens of them to manage daily online activity.
Strangeworks acquires German firm Quantagonia to expand European operations and bring AI-powered, quantum-ready planning technology to more organisations.
Regulators urge safeguards for AI toys as children gain interactive companions that teach and engage instead of relying solely on human interaction or screens.
The platform allows users to conduct market research, plan products, design prototypes, check regulations, and find distributors in minutes rather than weeks.
HTC’s entry into this market is significant as it competes with established players like Meta, Google, and Samsung, each developing or already offering advanced smart glasses technology.
Key concerns include the potential for widespread job displacement as AI systems replace human workers, significant environmental harm due to the substantial energy usage of AI models, and privacy erosion…
A $45 million Bitcoin donation accepted without checks has turned into a major Czech political scandal, now focused on money laundering and drug trafficking.
As the Trump-Putin summit brought Alaska into focus, its overlooked telegraph cables reveal a fascinating history: in the late 19th century, Alaska was on the brink of becoming a telecommunication hub connecting the US to Europe via Siberia.
Can AI replace the transmission of wisdom? The world of education is changing radically and rapidly. Generative AI tools are now capable of writing essays, solving math problems, summarising textbooks, and even personalising learning experiences at scale.
English dominates the AI landscape, but this hegemony can hinder our understanding of AI’s deeper, non-technical aspects. The recent explosion of AI jargon often obscures meaning and can lead to cognitive confusion. Embracing our native languages allows us to deflate this jargon, fostering clearer, common-sense comprehension of AI concepts.
AI offers tools to expand access to justice globally, but without transparency, oversight, and human-rights safeguards, it risks deepening bias, exclusion, and eroding public trust.
How is money shaping foreign policy? Learn how countries use sovereign wealth funds and strategic investments as powerful tools for foreign policy and soft power.