IBM unveils AI-powered mainframe z17

IBM has announced the launch of its most advanced mainframe yet, the z17, powered by the new Telum II processor. Designed to handle more AI operations, the system delivers up to 50% more daily inference tasks than its predecessor.

The z17 features a second-generation on-chip AI accelerator and introduces new tools for managing and securing enterprise data. A Spyre Accelerator add-on, expected later this year, will enable generative AI features such as large language models.

More than 100 clients contributed to the development of the z17, which also supports a forthcoming operating system, z/OS 3.2. The OS update is set to enable hybrid cloud data processing and enhanced NoSQL support.

IBM says the z17 brings AI to the core of enterprise infrastructure, enabling organisations to tap into large data sets securely and efficiently, with strong performance across both traditional and AI workloads.

For more information on these topics, visit diplomacy.edu.

Bank appoints head of AI enablement

Standard Chartered has appointed David Hardoon as its global head of AI enablement, further embedding AI across its operations.

Based in Singapore, he will report to group chief data officer Mohammed Rahim.

Hardoon will lead AI governance and identify areas where AI can enhance productivity, efficiency, and client experiences. His appointment follows the bank’s recent rollout of a generative AI tool to over 70,000 employees across 41 markets.

The bank has been steadily introducing AI-driven tools, including a smart video column to provide insights for clients in Asia. It plans further expansion of its internal AI systems across additional regions.

With more than 20 years of experience in data and AI, including with Singapore’s central bank, Hardoon is expected to guide the responsible and strategic use of AI technologies across Standard Chartered’s global footprint.

Dutch researchers to face new security screenings

The Dutch government has proposed new legislation requiring background checks for thousands of researchers working with sensitive technologies. The plan, announced by Education Minister Eppo Bruins, aims to block foreign intelligence from accessing high-risk scientific work.

Around 8,000 people a year, including Dutch citizens, would undergo screenings involving criminal records, work history, and possible links to hostile regimes.

Intelligence services would support the process, which targets sectors like AI, quantum computing, and biotech.

Universities worry the checks may deter global talent due to delays and bureaucracy. Critics also highlight a loophole: screenings occur only once, meaning researchers could still be approached by foreign governments after being cleared.

While other countries are introducing similar measures, the Netherlands will attempt to avoid unnecessary delays. Officials admit, however, that no system can eliminate all risks.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.

Anthropic grows its presence in Europe

Anthropic is expanding its operations across Europe, with plans to add over 100 new roles in sales, engineering, research, and business operations. Most of these positions will be based in Dublin and London.

The company has also appointed Guillaume Princen, a former Stripe executive, as its head for Europe, the Middle East, and Africa. This move signals Anthropic’s ambition to strengthen its global presence, particularly in Europe where the demand for enterprise-ready AI tools is rising.

The company’s hiring strategy also reflects a wider trend within the AI industry, with firms like Anthropic competing for global market share after securing significant funding.

The recent $3.5 billion funding round bolsters Anthropic’s position as it seeks to lead the AI race across multiple regions, including the Americas, Europe, and Asia.

Instead of focusing solely on the US, Anthropic’s European push is designed to comply with local AI governance and regulatory standards, which are increasingly important to businesses operating in the region.

Anthropic’s expansion comes at a time when AI firms are facing growing competition from companies like Cohere, which has been positioning itself as a European-compliant alternative.

As the EU continues to shape global AI regulations, Anthropic’s focus on safety and localisation could position it favourably in these highly regulated markets. Analysts suggest that while the US may remain a less regulated environment for AI, the EU is likely to lead global AI policy development in the near future.

Meta faces backlash over Llama 4 release

Over the weekend, Meta unveiled two new Llama 4 models—Scout, a smaller version, and Maverick, a mid-sized variant it claims outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across multiple benchmarks.

Maverick quickly climbed to second place on LMArena, an AI benchmarking platform where human evaluators compare and vote on model outputs. Meta proudly pointed to Maverick’s Elo score of 1417, which placed it just beneath Gemini 2.5 Pro and ahead of the usual leaders.
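For context, leaderboards like LMArena derive such scores from pairwise human votes using Elo-style ratings, where the gap between two ratings maps to an expected win probability. A minimal sketch of the standard Elo expected-score formula (the 1400 opponent rating below is a hypothetical for illustration, not any model’s actual score):

```python
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected probability that model A's output is preferred over model B's,
    under the standard Elo expected-score formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 17-point gap (e.g. 1417 vs a hypothetical opponent at 1400) implies
# only a slight edge: roughly a 52% chance of being preferred head-to-head.
p = elo_win_probability(1417, 1400)
print(f"{p:.3f}")
```

This is why small Elo differences near the top of a leaderboard reflect only marginal preference margins in head-to-head votes.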

However, AI researchers noticed a critical detail buried in Meta’s documentation: the version of Maverick that ranked so highly wasn’t the one released to the public. Instead of using the standard model, Meta had submitted an ‘experimental’ version specifically optimised for conversations.

LMArena later criticised this move, saying Meta failed to clearly indicate the model was customised, prompting the platform to update its policies to ensure future evaluations remain fair and reproducible.

Meta’s spokesperson acknowledged the use of experimental variants, insisting the company frequently tests different configurations.

While this wasn’t a violation of LMArena’s existing rules, the episode raised concerns about the credibility of benchmark rankings when companies submit fine-tuned models instead of the ones accessible to the wider community.

Independent AI researcher Simon Willison expressed frustration, saying the impressive ranking lost all meaning once it became clear the public couldn’t even use the same version.

The controversy unfolded against a backdrop of mounting competition in open-weight AI, with Meta under pressure following high-profile releases like China’s DeepSeek model.

Instead of offering a smooth rollout, Meta released Llama 4 on a Saturday—an unusual move—which CEO Mark Zuckerberg explained simply as ‘that’s when it was ready.’ But for many in the AI space, the launch has only deepened confusion around what these models can genuinely deliver.

Southampton Airport launches AI assistant to support passengers

Southampton Airport has launched an advanced AI-powered digital assistant to enhance passenger experience and accessibility throughout its terminal. The technology, developed in collaboration with Hello Lamp Post, offers real-time flight updates, personalised navigation assistance, and tailored support, especially for those requiring special assistance.

In a successful trial at Glasgow Airport, run with Connected Places Catapult, the AI platform demonstrated a 50% reduction in customer service queries and supported over 12,000 additional passengers annually. Passenger satisfaction during the pilot reached 86%, prompting Southampton to expand the tool to all travellers. The assistant is accessible via QR codes placed throughout the terminal, effectively acting as a virtual concierge.

The initiative forms part of the airport’s broader commitment to inclusive and efficient travel. Southampton Airport recently received the Civil Aviation Authority’s top ‘Very Good’ rating for accessibility. Airport Managing Director Gavin Williams praised the new tool’s ability to enhance customer journeys, while Hello Lamp Post’s CEO, Tiernan Mines, highlighted the value in easing pressure on staff by handling routine queries.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.

AI tool boosts accuracy of cancer treatment predictions

A Slovenian-US biotech company, Genialis, is harnessing AI to revolutionise cancer treatment by tackling a major obstacle: the lack of reliable biomarkers to predict how patients will respond to therapy. Using an AI-driven model developed from over a million global samples, the company aims to personalise treatment with far greater accuracy.

Founded nine years ago as a spin-off from the University of Ljubljana, Genialis is now headquartered in Boston but maintains strong ties to Slovenia, employing 22 local experts. Initially focused on tools for biologists, the firm shifted towards personalised medicine six years ago, now offering diagnostic insights that predict whether a patient is likely to respond to a specific cancer drug or treatment.

Genialis’ proprietary ‘Supermodel’ analyses RNA data from a diverse range of patients using machine learning, boosting the likelihood of treatment success from 20–30% to as high as 65% when paired with their biomarkers. While the software is already used in research settings, the ultimate goal is to integrate it into routine clinical care. Despite the promise, challenges remain, including securing quality data and investment. Co-founders Rafael Rosengarten and Miha Štajdohar remain optimistic, believing AI-powered precision medicine is the future of effective cancer therapy.

Trump administration pushes for pro-AI shift in US federal agencies

The White House announced on Monday a shift in how US federal agencies will approach AI, prioritising innovation over the stricter regulatory framework previously established under President Biden. 

A new memorandum from the Office of Management and Budget instructs agencies to appoint chief AI officers and craft policies to expand the use of AI technologies across government operations.

This pivot includes repealing two Biden-era directives emphasising transparency and safeguards against AI misuse. 

The earlier rules required federal agencies to implement protective measures for civil rights and limit unchecked acquisition of AI tools. 

These protections have now been replaced with a call for a more ‘forward-leaning and pro-innovation’ stance, removing what the current administration views as excessive bureaucratic constraints.

Federal agencies are now expected to develop AI strategies within six months. These plans must identify barriers to responsible AI implementation and improve how the technology is used enterprise-wide. 

The administration also encouraged the development of specific policies for generative AI, emphasising maximising the use of American-made solutions and enhancing interoperability between systems.

The policy change is part of President Trump’s broader rollback of previous AI governance, including his earlier revocation of a 2023 executive order signed by Biden that required developers to disclose sensitive training data. 

The new framework aims to streamline AI procurement processes and eliminate what the administration labels unnecessary reporting burdens while still maintaining basic privacy protections.

Federal agencies have already begun integrating AI into their operations. The Federal Aviation Administration, for example, has applied machine learning to analyse safety reports and identify emerging aviation risks. 

Under the new guidelines, such initiatives are expected to accelerate, signalling a broader federal embrace of AI across sectors.
