Egypt launches AI readiness report with EU support

Egypt has released its first AI Readiness Assessment Report, developed by the Ministry of Communications and Information Technology with UNESCO Cairo and supported by EU funding.

The report reviews Egypt’s legal, policy, institutional and technical environment, highlighting strengths and gaps in the country’s digital transformation journey. It emphasises that AI development should be human-centred and responsibly governed.

EU officials praised Egypt’s growing leadership in ethical AI governance and reiterated their support for an inclusive digital transition. Cooperation between Egypt and the EU is expected to deepen in digital policy and capacity-building areas.

The assessment aims to guide future investments and reforms, ensuring that AI strengthens sustainable development and benefits all segments of Egyptian society.

California moves to regulate AI companion chatbots to protect minors

The California State Assembly passed SB 243, advancing legislation making the state the first in the USA to regulate AI companion chatbots. The bill, which aims to safeguard minors and vulnerable users, passed with bipartisan support and now heads to the state Senate for a final vote on Friday.

If signed into law by Governor Gavin Newsom, SB 243 would take effect on 1 January 2026. It would require companies like OpenAI, Replika, and Character.AI to implement safety protocols for AI systems that simulate human companionship.

The law would prohibit such chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. For minors, platforms must display recurring alerts every three hours, reminding them that they are talking to an AI and encouraging them to take breaks.

The bill also introduces annual transparency and reporting requirements, effective 1 July 2027. Users harmed by violations could seek damages of up to $1,000 per incident, injunctive relief and attorney’s fees.

The legislation follows the suicide of teenager Adam Raine after troubling conversations with ChatGPT and comes amid mounting scrutiny of AI’s impact on children. Lawmakers nationwide and the Federal Trade Commission (FTC) are increasing pressure on AI companies in the USA to bolster safeguards.

Though earlier versions of the bill included stricter requirements, like banning addictive engagement tactics, those provisions were removed. Still, backers say the final bill strikes a necessary balance between innovation and public safety.

Broadcom lands $10bn AI chip order

Broadcom has secured a $10 billion agreement to supply custom AI chips, with analysts pointing to OpenAI as the likely customer.

The US semiconductor firm announced the deal alongside better-than-expected third-quarter earnings, driven by growing demand for its application-specific integrated circuits (ASICs). It forecast a strong fourth quarter as cloud providers seek alternatives to Nvidia, whose GPUs remain costly and supply-constrained.

Chief executive Hock Tan said Broadcom is collaborating with four potential new clients on chip development, adding to existing partnerships with major players such as Google and Meta.

The company recently introduced the Tomahawk Ultra and next-generation Jericho networking chips, further strengthening its position in the AI computing sector.

AI and cyber priorities headline massive US defence budget bill

The US House of Representatives has passed an $848 billion defence policy bill with new provisions for cybersecurity and AI. Lawmakers voted 231 to 196 to approve the chamber’s version of the National Defence Authorisation Act (NDAA).

The bill mandates that the National Security Agency brief Congress on plans for its Cybersecurity Coordination Centre and requires annual reports from combatant commands on the levels of support provided by US Cyber Command.

It also calls for a software bill of materials for AI-enabled technology used by the Department of Defence. The Pentagon will be authorised to create up to 12 generative AI projects to improve cybersecurity and intelligence operations.

An adopted amendment allows the NSA to share threat intelligence with the private sector to protect US telecommunications networks. Another requirement is that the Pentagon study the National Guard’s role in cyber response at the federal and state levels.

Proposals to renew the Cybersecurity Information Sharing Act and the State and Local Cybersecurity Grant Program were excluded from the final text. The Senate is expected to approve its version of the NDAA next week.

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only works created by humans are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as co-author of a painting, a credit that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Claude AI gains powerful file editing tools for documents and spreadsheets

Anthropic’s Claude has expanded its role as a leading AI assistant by adding advanced tools for creating and editing files. Instead of manually working with different programs, users can now describe their needs in plain language and let the AI produce or update Word, Excel, PowerPoint, and PDF files.

The feature supports uploads of CSV and TSV data and can generate charts, graphs, or images where needed, with a 30MB size limit applying to both uploads and downloads.

The real breakthrough lies in editing. Instead of opening a document or spreadsheet, users can simply type instructions such as replacing text, changing currencies, or updating job titles. Claude processes the prompt and makes all the changes in one pass, preserving the original formatting.
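
To make the ‘one pass’ idea concrete, the sketch below shows the kind of manual scripting such a conversational edit replaces; it is an illustrative example using pandas with hypothetical file and column names, not a depiction of how Claude itself performs the change.

```python
# Illustrative only: the sort of one-pass spreadsheet edit that a plain-language
# prompt now replaces. File and column names here are hypothetical.
import pandas as pd

df = pd.read_excel("salaries.xlsx")  # load the original workbook

# "Change currencies" -> convert a euro column to dollars at an assumed rate
df["salary_usd"] = df["salary_eur"] * 1.08

# "Update job titles" -> bulk text replacement across a column
df["job_title"] = df["job_title"].str.replace("Sr.", "Senior", regex=False)

# Write the edited copy, keeping the tabular structure intact
df.to_excel("salaries_updated.xlsx", index=False)
```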

This positions Claude as more efficient than rivals such as Gemini, which can export reports but cannot directly modify existing files.

The feature preview is available on web and desktop for subscribers on Max, Team, or Enterprise plans. Analysts suggest the update could reshape productivity tools, especially after reports that Microsoft has partnered with Anthropic to explore using Claude for Office 365 functions.

By removing repetitive tasks and making file handling conversational, Claude is pushing productivity software into a new phase of automation.

NotebookLM turns notes into flashcards, podcasts and quizzes

Google’s learning-focused AI tool NotebookLM has gained a major update, making studying and teaching more interactive.

Instead of offering only static summaries, it now generates flashcards that condense key information into easy-to-remember notes, helping users recall knowledge more effectively.

Reports can also be transformed into quizzes with customisable topics and difficulty, which can then be shared with friends or colleagues through a simple link.

The update extends to audio learning, where NotebookLM’s podcast-style Audio Overviews are evolving with new formats. Instead of a single style, users can now create Brief, Debate, or Critique episodes, giving greater flexibility in how material is explained or discussed.

Google is also strengthening its teaching tools. A new Blog Post format offers contextual suggestions such as strategy papers or explainers, while the ability to create custom report formats allows users to design study resources tailored to their needs.

The most significant addition, however, is the Learning Guide. Acting like a personal tutor, it promotes deeper understanding by asking open-ended questions, breaking problems into smaller steps, and adapting explanations to suit each learner.

With these features, NotebookLM is moving closer to becoming a comprehensive learning assistant, offering a mix of interactive study aids and adaptable teaching methods that go beyond simple note-taking.

Photonic chips open the path to sustainable AI by training with light

A team of international researchers has shown how training neural networks directly with light on photonic chips could make AI faster and more sustainable.

A breakthrough study, published in Nature, involved collaboration between the Politecnico di Milano, EPFL Lausanne, Stanford University, the University of Cambridge, and the Max Planck Institute.

The research highlights how physical neural networks, which use analogue circuits that exploit the laws of physics, can process information in new ways.

Photonic chips developed at the Politecnico di Milano perform mathematical operations such as addition and multiplication through light interference on silicon microchips only a few millimetres in size.

By eliminating the need to digitise information, these chips dramatically cut both processing time and energy use. Researchers have also pioneered an ‘in-situ’ training technique that enables photonic neural networks to learn tasks entirely through light signals, instead of relying on digital models.
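
As a rough illustration of the principle, the sketch below shows how a single fixed mixing matrix acting on input amplitudes carries out the multiplications and additions of a neural-network layer in one pass; it is a toy numerical analogy in Python, not the photonic hardware or the in-situ training method itself.

```python
# Toy analogy of the principle, not the photonic chip: a fixed "mixing" matrix
# applied to input amplitudes computes all the weighted sums of a layer at once.
import numpy as np

rng = np.random.default_rng(0)

# Input amplitudes (in hardware, light signals injected into the chip)
x = rng.normal(size=4)

# Mixing matrix standing in for an interferometer mesh (the layer's weights)
W = rng.normal(size=(4, 4))

# One pass through the mesh = one matrix-vector product (multiply and add)
y = W @ x

# A simple nonlinearity, which a real system would apply after detection
print(np.tanh(y))
```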

The result is a training process that is faster, more efficient and more robust.

Such advances could lead to more powerful AI models capable of running directly on devices instead of being dependent on energy-hungry data centres.

The approach paves the way for technologies such as autonomous vehicles, portable intelligent sensors and real-time data processing systems that are both greener and quicker.

Oracle and OpenAI drive record $300B investment in cloud for AI

OpenAI has finalised a record $300 billion deal with Oracle to secure vast computing infrastructure over five years, marking one of the most significant cloud contracts in history. The agreement is part of Project Stargate, OpenAI’s plan to build massive data centre capacity in the US and abroad.

The two companies will develop 4.5 gigawatts of computing capacity, roughly the power drawn by millions of homes.

Backed by SoftBank and other partners, the Stargate initiative aims to surpass $500 billion in investment, with construction already underway in Texas. Additional plans include a large-scale data centre project in the United Arab Emirates, supported by Emirati firm G42.

The scale of the deal highlights the fierce race among tech giants to dominate AI infrastructure. Amazon, Microsoft, Google and Meta are also pledging hundreds of billions of dollars towards data centres, while OpenAI faces mounting financial pressure.

The company currently generates around $10 billion in revenue but is expected to spend far more than that annually to support its expansion.

Oracle is betting heavily on OpenAI as a future growth driver, although the risk is high given OpenAI’s lack of profitability and Oracle’s growing debt burden.

It is a gamble that rests on the assumption that ChatGPT and related AI technologies will continue to grow at an unprecedented pace, despite intense competition from Google, Anthropic and others.

Growing concern over AI fatigue among students and teachers

Experts say growing exposure to AI is leaving many people exhausted, a phenomenon increasingly described as ‘AI fatigue’.

Educators and policymakers note that AI adoption surged before society had time to thoroughly weigh its ethical or social effects. The technology now underpins tasks from homework writing to digital art, leaving some feeling overwhelmed or displaced.

University students are among those most affected, with many relying heavily on AI for assignments. Teachers say it has become challenging to identify AI-generated work, as detection tools often produce inconsistent results.

Some educators are experimenting with low-tech classrooms, banning phones and requiring handwritten work. They report deeper conversations and stronger engagement when distractions are removed.
