People-First AI Fund awards support to 208 US nonprofits

OpenAI Foundation has named the first recipients of the People-First AI Fund, awarding $40.5 million to 208 community groups across the United States. The grants will be disbursed by the end of the year, with a further $9.5 million in Board-directed funding to follow.

The application process was shaped by nationwide listening sessions and recommendations from an independent Nonprofit Commission. Nearly 3,000 organisations applied, underscoring strong demand for support across US communities. Final selections were made after a multi-stage human review involving external experts.

Grantees span digital literacy programmes, rural health initiatives and Indigenous media networks. Many operate with limited exposure to AI, reflecting the fund’s commitment to trusted, community-centred groups. California features prominently, consistent with the Foundation’s ties to its home state.

Funded projects include primary care, youth training in agricultural areas, and Tribal AI literacy work. Groups are also applying AI to food networks, disability education, arts and local business support. The flexible grants allow each organisation to set its own priorities.

The programme focuses on AI literacy, community innovation and economic opportunity, with further grants targeting sector-level transformation. OpenAI Foundation says it will continue learning alongside grantees and supporting efforts that broaden opportunity while grounding AI adoption in local US needs.

FCA begins live AI testing with UK financial firms

The UK’s Financial Conduct Authority has started a live testing programme for AI with major financial firms. The initiative aims to explore AI’s benefits and risks in retail financial services while ensuring safe and responsible deployment.

Participating firms, including NatWest, Monzo, Santander and Scottish Widows, receive guidance from FCA regulators and technical partner Advai. Use cases being trialled range from debt resolution and financial advice to customer engagement and smarter spending tools.

Insights from the testing will help the FCA shape future regulations and governance frameworks for AI in financial markets. The programme complements the regulator’s Supercharged Sandbox, with a second cohort of firms due to begin testing in April 2026.

Sega cautiously adopts AI in game development

Sega has begun to incorporate AI selectively into game development. The Japanese company aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

Governments urged to build learning systems for the AI era

Governments are facing increased pressure to govern AI effectively, prompting calls for continuous institutional learning. Researchers argue that the public sector must develop adaptive capacity to keep pace with rapid technological change.

Past digital reforms often stalled because administrations focused on minor upgrades rather than redesigning core services. Slow adaptation now carries greater risks, as AI transforms decisions, systems and expectations across government.

Experts emphasise the need for a learning infrastructure that facilitates the reliable flow of knowledge across institutions. Singapore and the UAE have already invested heavily in large-scale capability-building programmes.

Public servants require stronger technical and institutional literacy, supported through ongoing training and open collaboration with research communities. Advocates say states that embed learning deeply will govern AI more effectively and maintain public trust.

Japan plans large-scale investment to boost AI capability

Japan plans to increase generative AI usage to 80 percent as officials push for nationwide adoption. Current uptake remains far lower than in the United States and China.

The government intends to lift usage to 50 percent in the near term and stimulate private investment. A trillion-yen target underscores efforts to expand infrastructure and accelerate deployment across Japanese sectors.

Guidelines stress risk reduction and stronger oversight through an enhanced AI Safety Institute. Critics argue that measures lack detail and fail to address misuse with sufficient clarity.

Authorities expect broader AI use in health care, finance and agriculture through coordinated public-private work. Annual updates will monitor progress as Japan seeks to enhance its competitiveness and strategic capabilities.

Honolulu in the US pushes for transparency in government AI use

Growing pressure from Honolulu residents in the US is prompting city leaders to consider stricter safeguards surrounding the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.

Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.

Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.

AI growth threatens millions of jobs across Asia

UN economists warned millions of jobs in Asia could be at risk as AI widens the gap between digitally advanced nations and those lacking basic access and skills. The report compared the AI revolution to 19th-century industrialisation, which created a wealthy few and left many behind.

Women and young adults face the most significant threat from AI in the workplace, while the benefits in health, education, and income are unevenly distributed.

Countries such as China, Singapore, and South Korea have invested heavily in AI and reaped significant benefits. Still, entry-level workers in many South Asian nations remain highly vulnerable to automation and technological advancements.

The UN Development Programme urged governments to consider ethical deployment and inclusivity when implementing AI. Countries such as Cambodia, Papua New Guinea, and Vietnam are focusing on developing simple digital tools to help health workers and farmers who lack reliable internet access.

AI could generate nearly $1 trillion in economic gains across Asia over the next decade, boosting regional GDP growth by about two percentage points. However, income disparities mean the benefits remain concentrated in wealthier countries, leaving poorer nations at a disadvantage.

Australia stands firm on under-16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Safran and UAE institute join forces on AI geospatial intelligence

Safran.AI, the AI division of Safran Electronics & Defence, and the UAE’s Technology Innovation Institute have formed a strategic partnership to develop a next-generation agentic AI geospatial intelligence platform.

The collaboration aims to transform high-resolution satellite imagery into actionable intelligence for defence operations.

The platform will combine human oversight with advanced geospatial reasoning, enabling operators to interpret and respond to emerging situations faster and with greater precision.

Key initiatives include agentic reasoning systems powered by large language models, a mission-specific AI detector factory, and an autonomous multimodal fusion engine for persistent, all-weather monitoring.

Under the agreement, a joint team operating across France and the UAE will accelerate innovation within a unified operational structure.

Leaders from both organisations emphasise that the alliance strengthens sovereign geospatial intelligence capabilities and lays the foundations for decision intelligence in national security.

Cairo Forum examines MENA’s path in the AI era

The Second Cairo Forum brought together experts to assess how AI, global shifts, and economic pressures are shaping MENA. Speakers said the region faces a critical moment as new technologies accelerate. The discussion asked whether MENA will help shape AI or simply adopt it.

Participants highlighted global divides, warning that data misuse and concentrated control remain major risks. They argued that middle-income countries can collaborate to build shared standards. Several speakers urged innovation-friendly regulation supported by clear safety rules.

Officials from Egypt outlined national efforts to embed AI across health, agriculture, and justice. They described progress through applied projects and new governance structures. Limited data access and talent retention were identified as continuing obstacles.

Industry voices stressed that trust, transparency, and skills must underpin the use of AI. They emphasised co-creation that fits regional languages and contexts. Training and governance frameworks were seen as essential for responsible deployment.

Closing remarks warned that rapid advances demand urgent decisions. Speakers said safety investment lags behind development, and global competition is intensifying. They agreed that today’s choices will shape the region’s AI future.
