OpenAI has launched AgentKit, a new suite of developer tools designed to simplify the creation, deployment, and optimisation of AI-powered agents. The platform unifies workflows that previously required multiple systems, offering a faster and more visual way to build intelligent applications.
AgentKit includes Agent Builder, Connector Registry, ChatKit, and advanced evaluation tools. Developers can now design multi-agent workflows on a visual canvas, manage data connections across workspaces, and integrate chat-based agents directly into apps and websites.
Early users such as Ramp and LY Corporation built working agents in just a few hours, cutting development cycles by up to 70%. Companies including Canva and HubSpot have used ChatKit to embed conversational support agents, transforming customer experience and developer engagement.
New evaluation features and reinforcement fine-tuning allow users to test, grade, and improve agents’ reasoning abilities. AgentKit is now available to developers and enterprises through OpenAI’s API and ChatGPT Enterprise, with a wider rollout expected later this year.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Bulgaria is considering building an AI gigafactory in partnership with IBM and the European Commission, Prime Minister Rosen Zhelyazkov announced after meeting with IBM executives in Sofia. The project aims to attract large-scale high-tech investment and strengthen Europe’s AI infrastructure.
The proposed facility would feature over 100,000 advanced GPU chips and require up to 500 megawatts of power. The initial phase alone is expected to need around 70 megawatts, highlighting the scale of the planned operation.
Funding could come through a public-private partnership, with the European Commission covering up to 17 percent of capital costs and EU member states contributing additional support.
IBM is considered a strategic technology partner, bringing expertise in cloud computing, cybersecurity, and AI systems. The first gigafactories across Europe are expected to begin operations between 2027 and 2028, aligning with the EU’s plan to mobilise €200 billion for AI development.
A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and Asia-Pacific report rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.
Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.
The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.
The financial stakes are high. Over 80% of respondents reported spending at least six figures during just one hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.
11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.
Consumer watchdogs have flagged a wave of online stores that use AI-generated branding to pose as small local businesses. Among them is C’est La Vie, which presented itself as a Birmingham jeweller run by a couple called Eileen and Patrick. The supposed owners appeared in highly convincing AI-generated photos, while customers later discovered their purchases were shipped from China.
Victims described feeling cheated after receiving poor-quality jewellery and clothes that bore no resemblance to the advertised items. More than 500 complaints on Trustpilot accuse such companies of exploiting fabricated stories to appear authentic.
Consumer experts at Which? warn that AI tools now enable scammers to create fake brands at an unprecedented scale. The ASA has called on social media platforms to act, as many victims were targeted through Facebook ads.
Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.
Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.
Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.
Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.
Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.
Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.
The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.
For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.
Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.
He warns that without enforcement, oversight, and accountability, human-centred AI policies risk becoming slogans rather than safeguards.
Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.
The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.
Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.
Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.
The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.
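One of the validation techniques named above, differential testing, can be illustrated with a minimal sketch (not Google's actual tooling; the functions and the harness are hypothetical): the original and patched versions of a routine are fed the same fuzzed inputs, and any divergence in output flags the patch as a behavioural regression.

```python
import random

# Hypothetical routine with a flaw an automated agent might patch.
def original(x: int) -> int:
    return abs(x)

# Hypothetical candidate patch, which should preserve behaviour.
def patched(x: int) -> int:
    return x if x >= 0 else -x

def differential_test(f, g, trials: int = 1000, seed: int = 0) -> bool:
    """Run both versions on identical fuzzed inputs; any divergence
    means the patch changed observable behaviour (a regression)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-10**9, 10**9)
        if f(x) != g(x):
            return False
    return True

print(differential_test(original, patched))  # True when behaviour is preserved
```

In a real pipeline this check would run alongside static analysis and regression suites before any human review, as the article describes.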
According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.
The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.
Deloitte has agreed to refund the Australian government $440,000 after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes attributed to a Federal Court judgment on the Robodebt scheme and fictitious academic references.
The incident underscores the risks of deploying AI in high-stakes government consultancy work without sufficient human oversight, and raises questions about the credibility of policy decisions influenced by such flawed reports.
In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.
The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis, and it is reviewing oversight mechanisms to prevent a recurrence. The report's inaccuracies had already shaped discussions on welfare compliance, shaking public trust in the consultancy services employed for critical government policymaking.
The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.
Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.
AI and machine learning are transforming ophthalmology, making retinal scans powerful tools for predicting health risks. Retinal images offer a non-invasive view of blood vessels and nerve fibres, revealing risk markers for high blood pressure, kidney disease, heart disease, and stroke.
With lifestyle-related illnesses on the rise, early detection through eye scans has become increasingly important.
Technologies like fundus photography and optical coherence tomography angiography (OCT-A) now enable detailed imaging of retinal vessels. Researchers use AI to analyse these images, identifying microvascular biomarkers linked to systemic diseases.
Novel approaches such as ‘oculomics’ allow clinicians to predict surgical outcomes for macular hole treatment and assess patients’ risk levels for multiple conditions in one scan.
AI is also applied to diabetes screening, particularly in countries with significant at-risk populations. Deep learning frameworks can estimate average blood sugar levels (HbA1c) from retinal images, offering a non-invasive, cost-effective alternative to blood tests.
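The underlying task is regression: mapping image-derived features to a continuous HbA1c value. A toy sketch of that idea follows; it is not any published framework, and the synthetic "retinal features" and a simple least-squares fit stand in for the convolutional networks and real fundus photographs such systems actually use.

```python
import numpy as np

# Synthetic stand-in for features extracted from retinal images
# (hypothetical; real pipelines learn features with CNNs).
rng = np.random.default_rng(0)
n_patients, n_features = 200, 16
X = rng.normal(size=(n_patients, n_features))

# Simulated HbA1c targets (%), centred near a typical value of 5.5.
true_w = rng.normal(size=n_features)
y = X @ true_w + 5.5 + rng.normal(scale=0.1, size=n_patients)

# Fit a linear model with an intercept and report mean absolute error.
Xb = np.hstack([X, np.ones((n_patients, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
mae = float(np.mean(np.abs(Xb @ w - y)))
print(round(mae, 3))
```

The appeal described in the article is exactly this: once trained, such a model needs only an image, not a blood draw, to produce an estimate.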
Despite its promise, AI in ophthalmology faces challenges. Limited and non-diverse datasets can reduce accuracy, and the ‘black box’ nature of AI decision-making can make doctors hesitant.
Collaborative efforts to share anonymised patient data and develop more transparent AI models are helping to overcome these hurdles, paving the way for safer and more reliable applications.
Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.
The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.
Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.
The collaboration will focus on developing AI-driven tools for compliance, automation, and data analysis, especially in highly regulated industries such as finance and healthcare.
The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.