OpenAI has launched OpenAI for Australia, a nationwide initiative to unlock the economic and societal benefits of AI. The program aims to support sovereign AI infrastructure, upskill Australians, and accelerate the country’s local AI ecosystem.
CEO Sam Altman highlighted Australia’s deep technical talent and strong institutions as key factors in becoming a global leader in AI.
A significant partnership with NEXTDC will see the development of a next-generation hyperscale AI campus and large GPU supercluster at Sydney’s Eastern Creek S7 site.
The project is expected to create thousands of jobs, boost local supplier opportunities, strengthen STEM and AI skills, and provide sovereign compute capacity for critical workloads.
OpenAI will also upskill more than 1.2 million Australians in collaboration with CommBank, Coles and Wesfarmers. OpenAI Academy will provide tailored modules to give workers and small business owners practical AI skills for confident daily use.
The nationwide rollout of courses is scheduled to begin in 2026.
OpenAI is launching its first Australian start-up program with local venture capital firms Blackbird, Square Peg, and AirTree to support home-grown innovation. Start-ups will receive API credits, mentorship, workshops, and access to Founder Day to accelerate product development and scale AI solutions locally.
Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.
Meta says it is deleting accounts it reasonably believes belong to underage teenagers while allowing them to download their data. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13- to 15-year-olds.
Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.
Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.
Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central in decision-making as AI and automation expand across society. Collaborative intelligence, combining AI experts, domain specialists and human judgement, is seen as essential for responsible adoption.
Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.
Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.
OpenAI Foundation has named the first recipients of the People-First AI Fund, awarding $40.5 million to 208 community groups across the United States. The grants will be disbursed by the end of the year, with a further $9.5 million in Board-directed funding to follow.
Nationwide listening sessions and recommendations from an independent Nonprofit Commission shaped the application process. Nearly 3,000 organisations applied, underscoring strong demand for support across US communities. Final selections were made following a multi-stage human review involving external experts.
Grantees span digital literacy programmes, rural health initiatives and Indigenous media networks. Many operate with limited exposure to AI, reflecting the fund’s commitment to trusted, community-centred groups. California features prominently, consistent with the Foundation’s ties to its home state.
Funded projects cover primary care, youth training in agricultural areas, and Tribal AI literacy work. Groups are also applying AI to food networks, disability education, arts and local business support. Each organisation sets its own priorities under the flexible grants.
The programme focuses on AI literacy, community innovation and economic opportunity, with further grants targeting sector-level transformation. OpenAI Foundation says it will continue learning alongside grantees and supporting efforts that broaden opportunity while grounding AI adoption in local US needs.
The UK’s Financial Conduct Authority has started a live testing programme for AI with major financial firms. The initiative aims to explore AI’s benefits and risks in retail financial services while ensuring safe and responsible deployment.
Participating firms, including NatWest, Monzo, Santander and Scottish Widows, receive guidance from FCA regulators and technical partner Advai. Use cases being trialled range from debt resolution and financial advice to customer engagement and smarter spending tools.
Insights from the testing will help the FCA shape future regulations and governance frameworks for AI in financial markets. The programme complements the regulator’s Supercharged Sandbox, with a second cohort of firms due to begin testing in April 2026.
Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese company aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.
Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.
The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.
Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.
Governments are facing increased pressure to govern AI effectively, prompting calls for continuous institutional learning. Researchers argue that the public sector must develop adaptive capacity to keep pace with rapid technological change.
Past digital reforms often stalled because administrations focused on minor upgrades rather than redesigning core services. Slow adaptation now carries greater risks, as AI transforms decisions, systems and expectations across government.
Experts emphasise the need for a learning infrastructure that facilitates the reliable flow of knowledge across institutions. Singapore and the UAE have already invested heavily in large-scale capability-building programmes.
Public servants require stronger technical and institutional literacy, supported through ongoing training and open collaboration with research communities. Advocates say that states that embed learning deeply will govern AI more effectively and maintain public trust.
Japan plans to increase generative AI usage to 80 percent as officials push national adoption. Current uptake remains far lower than in the United States and China.
The government intends first to raise usage to 50 percent and to stimulate private investment. A trillion-yen target underscores efforts to expand infrastructure and rapidly accelerate deployment across Japanese sectors.
Guidelines stress risk reduction and stronger oversight through an enhanced AI Safety Institute. Critics argue that measures lack detail and fail to address misuse with sufficient clarity.
Authorities expect broader AI use in health care, finance and agriculture through coordinated public-private work. Annual updates will monitor progress as Japan seeks to enhance its competitiveness and strategic capabilities.
Growing pressure from Honolulu residents in the US is prompting city leaders to consider stricter safeguards surrounding the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.
Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.
Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.
UN economists warned that millions of jobs in Asia could be at risk as AI widens the gap between digitally advanced nations and those lacking basic access and skills. The report compared the AI revolution to 19th-century industrialisation, which enriched a wealthy few and left many behind.
Women and young adults face the most significant threat from AI in the workplace, while the benefits in health, education, and income are unevenly distributed.
Countries such as China, Singapore, and South Korea have invested heavily in AI and reaped significant benefits. Still, entry-level workers in many South Asian nations remain highly vulnerable to automation and technological advancements.
The UN Development Programme urged governments to consider ethical deployment and inclusivity when implementing AI. Countries such as Cambodia, Papua New Guinea, and Vietnam are focusing on developing simple digital tools to help health workers and farmers who lack reliable internet access.
AI could generate nearly $1 trillion in economic gains across Asia over the next decade, boosting regional GDP growth by about two percentage points. Yet income disparities mean the benefits remain concentrated in wealthier countries, leaving poorer nations at a disadvantage.