Tech giants fund teacher AI training amid classroom chatbot push

Major technology companies are shifting strategic emphasis toward education by funding teacher training in artificial intelligence. Companies such as Microsoft, OpenAI and Anthropic have pledged millions of dollars to train educators and bring chatbots into classrooms.

Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.

At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’

However, the initiative raises difficult questions. Educators voiced concerns about being replaced by AI, while unions insist that teachers must shape the training content and keep control over how AI is used in learning. Technology companies see the deal as a way to expand into education, but they also face scrutiny over their influence and the implications for teaching practice.

As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.

AI transforms Japanese education while raising ethical questions

AI is reshaping Japanese education, from predicting truancy risks to teaching English and preserving survivor memories. Schools and universities nationwide are experimenting with systems designed to support teachers and engage students more effectively.

In Saitama’s Toda City, AI analysed attendance, health records, and bullying data to identify pupils at risk of skipping school. During a 2023 pilot, it flagged more than a thousand students and helped teachers prioritise support for those most vulnerable.
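
The article does not say how Toda City’s system works internally. As a rough illustration only, early-warning tools of this kind are often simple classifiers trained on attendance and welfare indicators; the sketch below is hypothetical (the features, data and use of scikit-learn are assumptions, not details from the pilot).

```python
# Hypothetical sketch of a truancy early-warning model.
# Features and data are illustrative; this is NOT Toda City's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: [absence_rate, nurse_visits, bullying_reports] per pupil
X = np.array([
    [0.02, 0, 0],
    [0.05, 1, 0],
    [0.20, 3, 1],
    [0.35, 2, 2],
    [0.01, 0, 0],
    [0.28, 4, 1],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = pupil later became a long-term absentee

model = LogisticRegression().fit(X, y)

# Score the current cohort and flag pupils for teacher-led follow-up;
# the algorithm only prioritises, it does not decide.
cohort = np.array([[0.15, 2, 1], [0.03, 0, 0]])
risk = model.predict_proba(cohort)[:, 1]
for pupil, score in enumerate(risk):
    print(f"pupil {pupil}: risk {score:.2f}, flag={score > 0.5}")
```

In practice, any such pipeline would also need the privacy safeguards and human judgement that experts cite below.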

Experts praised the system’s potential but warned against excessive dependence on algorithms. Keio University’s Professor Makiko Nakamuro said educators must balance data-driven insights with privacy safeguards and human judgment. Toda City has already banned discriminatory use of AI results.

AI’s role is also expanding in language learning. Universities such as Waseda and Kyushu now use a Tokyo-developed conversation AI that assesses grammar, pronunciation, and confidence. Students say they feel more comfortable practising with a machine than in front of classmates.

EU expands AI reach through new antenna network

The European Commission has launched new ‘AI Antennas’ across 13 European countries to strengthen AI infrastructure. Seven EU states, including Belgium, Ireland, and Malta, will gain access to high-performance computing through the EuroHPC network.

Six non-EU partners, such as the UK and Switzerland, have also joined the initiative. Their inclusion reflects the EU’s growing cooperation on digital innovation with neighbouring countries despite Brexit and other trade tensions.

Each AI Antenna will serve as a local gateway to the bloc’s supercomputing hubs, providing technical support, training, and algorithmic resources. Countries without an AI Factory of their own can now connect remotely to major systems like Jupiter.

The Commission says the network aims to spread AI skills and research capabilities across Europe, narrowing regional gaps in digital development. However, smaller nations hosting only antennas are unlikely to house the bloc’s future ‘AI Gigafactories’, which will be up to four times more powerful than today’s AI Factories.

Startup raises $9m to orchestrate Gulf digital infrastructure

Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm has raised a $9m seed round from CIV, General Catalyst and Lux Capital, with angel investors including Chris Ré, Amjad Masad and Amira Sajwani.

Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.
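
1001 AI has not published technical details of this engine. Purely as a generic illustration of the sense-decide-act pattern the description implies, a minimal sketch might look like the following; every event type, rule and name here is hypothetical, not the company’s actual design.

```python
# Generic sense-decide-act orchestration loop.
# All event types, rules and names are hypothetical illustrations,
# not 1001 AI's actual system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    kind: str       # e.g. "truck_delay", "crane_idle"
    asset: str
    minutes: int

@dataclass
class Directive:
    action: str
    target: str

def decide(event: Event) -> Optional[Directive]:
    # Placeholder policy: a real engine would apply learned or
    # optimisation-based models to live operational data.
    if event.kind == "truck_delay" and event.minutes > 15:
        return Directive("reroute", event.asset)
    if event.kind == "crane_idle" and event.minutes > 10:
        return Directive("reassign_crew", event.asset)
    return None

# Example: a small stream of incoming telemetry events
for ev in (Event("truck_delay", "TRK-7", 22), Event("crane_idle", "CRN-2", 4)):
    directive = decide(ev)
    if directive:
        print(f"issue directive: {directive.action} -> {directive.target}")
```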

Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.

Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.

First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.

SMEs underinsured as Canada’s cyber landscape shifts

Canada’s cyber insurance market is stabilising, with stronger underwriting, steadier loss trends, and more product choice, the Insurance Bureau of Canada says. But the threat landscape is accelerating as attackers weaponise AI, leaving many small and medium-sized enterprises exposed and underinsured.

Rapid market growth brought painful losses during the ransomware surge: from 2019 to 2023, combined loss ratios averaged about 155%, forcing tighter pricing and coverage. Insurers have since recalibrated, yet rising AI-enabled phishing and deepfake impersonation are increasing the complexity and potential severity of claims.
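
For readers unfamiliar with the metric: the combined loss ratio is claims plus expenses divided by earned premiums, so anything above 100% means underwriting losses. A quick worked example follows; the dollar figures are hypothetical, and only the roughly 155% average comes from the Bureau.

```python
# Combined loss ratio = (incurred claims + expenses) / earned premiums.
# Dollar figures are hypothetical; only the ~155% ratio is from the report.
claims = 120_000_000    # incurred cyber claims
expenses = 35_000_000   # underwriting and administration costs
premiums = 100_000_000  # earned cyber premiums

combined_ratio = (claims + expenses) / premiums
print(f"{combined_ratio:.0%}")  # 155% -> $1.55 paid out per $1 of premium earned
```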

Policy is catching up unevenly. In Canada, Bill C-8 would revive proposed cybersecurity standards for critical infrastructure, with stronger oversight and baseline rules for risk management and incident reporting. Public–private programmes signal progress but need sustained execution.

SMEs remain the pressure point. Low uptake means even minor breaches can cost tens or hundreds of thousands of dollars, while severe incidents can be fatal to a business. Underinsurance shifts the shock onto the wider economy, challenging insurers to balance affordability with long-term viability.

The Bureau urges practical resilience: clearer governance, employee training, incident playbooks, and fit-for-purpose cover. Education campaigns and free guidance aim to demystify coverage, boost readiness, and help SMEs recover faster when attacks hit, supporting a more durable digital economy.

Public consultation flaws risk undermining Digital Fairness Act debate

As the European Commission’s public consultation on the Digital Fairness Act enters its final phase, growing criticism points to flaws in how citizen feedback is collected.

Critics say the survey’s structure favours those who support additional regulation while restricting opportunities for dissenting voices to explain their reasoning. The issue raises concerns over how such results may influence the forthcoming impact assessment.

The Call for Evidence and Public Consultation, hosted on the Have Your Say portal, allows only supporters of the Commission’s initiative to provide detailed responses. Those who oppose new regulation are reportedly limited to choosing a single option with no open field for justification.

Such an approach risks producing a partial view of European opinion rather than a balanced reflection of stakeholders’ perspectives.

Experts argue that this design contradicts the EU’s Better Regulation principles, which emphasise inclusivity and objectivity.

They urge the Commission to raise its methodological standards, ensuring surveys are neutral, questions are not loaded, and all respondents can present argument-based reasoning. Without these safeguards, consultations may become instruments of validation instead of genuine democratic participation.

Advocates for reform believe the Commission’s influence could set a positive precedent for the entire policy ecosystem. By promoting fairer consultation practices, the EU could encourage both public and private bodies to engage more transparently with Europe’s diverse digital community.

Privacy laws block cross-border crypto regulation progress

Regulators continue to face hurdles in overseeing global crypto markets as privacy laws block effective cross-border data sharing, the Financial Stability Board warned. Sixteen years after Bitcoin’s launch, regulation remains inconsistent, with differing national approaches causing data gaps and fragmented oversight.

The FSB, hosted by the Bank for International Settlements, said secrecy laws hamper authorities’ ability to monitor risks and share information. Some jurisdictions block data sharing with foreign regulators, while others delay cooperation over privacy and reciprocity concerns.

According to the report, addressing these legal and institutional barriers is essential to improving cross-border collaboration and ensuring more effective global oversight of crypto markets.

However, the FSB noted that reliable data on digital assets remain scarce, as regulators rely heavily on incomplete or inconsistent sources from commercial data providers.

Despite the growing urgency to monitor financial stability risks, little progress has been made since similar concerns were raised nearly four years ago. The FSB has yet to outline concrete solutions for bridging the gap between data privacy protection and effective crypto regulation.

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA’s CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.

Harvard’s health division supports AI-powered medical learning

Harvard Health Publishing has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.

HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.

The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.

Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.

Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to let parents set time limits on teens’ use of AI characters. The company also says it is working to detect and discourage attempts by users to falsify their age to bypass restrictions.
