A new generation of interactive apps is arriving in ChatGPT, allowing users to engage with tools like Canva, Spotify, and Booking.com directly through conversation. The apps appear naturally during chats, enabling users to create, learn, and explore within the same interface.
Developers can now build their own ChatGPT apps using the newly launched Apps SDK, released in preview as an open standard based on the Model Context Protocol. The SDK includes documentation, examples, and testing tools, with app submissions and monetisation to follow later this year.
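The Apps SDK builds on the Model Context Protocol, in which an app exposes named tools that the model can invoke with JSON arguments. A minimal, hypothetical sketch of that pattern follows; the registry, tool name, and dispatch function here are illustrative assumptions, not the actual SDK API:

```python
import json

# Illustrative tool registry mimicking the Model Context Protocol pattern:
# an app declares tools with JSON-schema inputs; the model calls them by name.
TOOLS = {}

def tool(name, schema):
    """Register a function as a callable tool alongside its input schema."""
    def register(fn):
        TOOLS[name] = {"schema": schema, "fn": fn}
        return fn
    return register

@tool("search_playlists", {"type": "object",
                           "properties": {"mood": {"type": "string"}},
                           "required": ["mood"]})
def search_playlists(mood):
    # A real app would query its own backend here.
    return {"playlists": [f"{mood} mix 1", f"{mood} mix 2"]}

def handle_tool_call(message):
    """Dispatch a model-issued tool call, as an SDK runtime would."""
    call = json.loads(message)
    entry = TOOLS[call["name"]]
    return entry["fn"](**call["arguments"])

result = handle_tool_call(json.dumps(
    {"name": "search_playlists", "arguments": {"mood": "party"}}))
print(result)
```

The point of the open standard is that the same declared tools work with any protocol-compliant client, not only ChatGPT.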
Over 800 million ChatGPT users can now access these apps on Free, Go, Plus and Pro plans, excluding EU regions for the moment. Early partners include Booking.com, Coursera, Canva, Figma, Expedia, Spotify, and Zillow, with more to follow later in the year.
Apps respond to natural language and integrate interactive features such as maps, playlists, and slides directly in chat. ChatGPT can even suggest relevant apps during conversations: for instance, showing Zillow listings when a user discusses buying a home, or prompting Spotify for a party playlist.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.
The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.
For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.
Blaszczyk suggests that nominal humanism is used as a rhetorical shield: humans are placed formally at the centre of laws and frameworks on copyright, free speech, and democratic values, but real influence, rights protection, and liability often remain minimal.
He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.
Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.
The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.
Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.
Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.
The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.
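Differential testing, one of the validation techniques mentioned, can be sketched as running the unpatched and patched versions of a function on the same generated inputs and checking that they diverge only where the patch is supposed to change behaviour. The vulnerable parser and patch below are invented for illustration, not CodeMender's actual output:

```python
import random

def original_read(length, data):
    # Unpatched version: silently accepts an oversized length field.
    return data[:length]

def patched_read(length, data):
    # Patched version: rejects lengths beyond the buffer, as a bounds check would.
    if length > len(data):
        raise ValueError("length exceeds buffer")
    return data[:length]

def run(fn, length, data):
    """Capture either the return value or the raised error for comparison."""
    try:
        return ("ok", fn(length, data))
    except ValueError as exc:
        return ("error", str(exc))

def differential_test(f, g, trials=1000, seed=0):
    """Run both versions on random inputs and collect any divergences."""
    rng = random.Random(seed)
    divergences = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        length = rng.randrange(32)
        if run(f, length, data) != run(g, length, data):
            divergences.append((length, len(data)))
    return divergences

# Every divergence should involve an out-of-bounds length: the patch changes
# behaviour only on malicious inputs, i.e. it introduces no regressions.
divs = differential_test(original_read, patched_read)
assert divs and all(length > size for length, size in divs)
```

In a real pipeline this check runs alongside static analysis and fuzzing before a human reviewer ever sees the patch.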
According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.
The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.
OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.
The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.
Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.
Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.
The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.
Deloitte has agreed to refund the Australian government the full amount of $440,000 after acknowledging major errors in a consultancy report concerning welfare mutual obligations. These errors were the result of using AI tools, which led to fabricated content, including false quotes related to the Federal Court case on the Robodebt scheme and fictitious academic references.
That incident underscores the challenges of deploying AI in crucial government consultancy projects without sufficient human oversight, raising questions about the credibility of government policy decisions influenced by such flawed reports.
In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.
The situation has prompted the government of Australia to reassess its reliance on AI-generated content for policy analysis, and it is reviewing oversight mechanisms to prevent a recurrence. The inaccuracies in the report had previously swayed discussions on welfare compliance, shaking public trust in the consultancy services employed for critical government policymaking.
The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.
Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.
AI and machine learning are transforming ophthalmology, making retinal scans powerful tools for predicting health risks. Retinal images offer a non-invasive view of blood vessels and nerve fibres, revealing risk indicators for high blood pressure and for kidney, heart, and stroke-related conditions.
With lifestyle-related illnesses on the rise, early detection through eye scans has become increasingly important.
Technologies like fundus photography and optical coherence tomography-angiography (OCT-A) now enable detailed imaging of retinal vessels. Researchers use AI to analyse these images, identifying microvascular biomarkers linked to systemic diseases.
Novel approaches such as ‘oculomics’ allow clinicians to predict surgical outcomes for macular hole treatment and assess patients’ risk levels for multiple conditions in one scan.
AI is also applied to diabetes screening, particularly in countries with significant at-risk populations. Deep learning frameworks can estimate average blood sugar levels (HbA1c) from retinal images, offering a non-invasive, cost-effective alternative to blood tests.
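At its core, the kind of framework described reduces to regression: mapping image-derived features to a continuous HbA1c value. The toy sketch below illustrates that idea with a linear model on synthetic features; the feature names, weights, and data are entirely invented stand-ins, not a clinical model (a deep network would replace the fitting step on real images):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for image-derived features (e.g. vessel calibre,
# tortuosity, lesion counts) extracted from retinal scans.
n_patients, n_features = 200, 5
X = rng.normal(size=(n_patients, n_features))

# Assumed ground truth for the toy dataset: HbA1c around 5.5%, with each
# feature contributing a small effect, plus measurement noise.
true_w = np.array([0.8, -0.3, 0.5, 0.1, -0.6])
y = 5.5 + X @ true_w + rng.normal(scale=0.2, size=n_patients)

# Fit ordinary least squares with an intercept column.
Xb = np.hstack([np.ones((n_patients, 1)), X])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ coef
mae = np.abs(pred - y).mean()
print(f"intercept={coef[0]:.2f}, MAE={mae:.3f}")
```

The clinical appeal is exactly this shape of pipeline: a single scan yields features, and a fitted model returns a continuous estimate without a blood draw.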
Despite its promise, AI in ophthalmology faces challenges. Limited and non-diverse datasets can reduce accuracy, and the ‘black box’ nature of AI decision-making can make doctors hesitant.
Collaborative efforts to share anonymised patient data and develop more transparent AI models are helping to overcome these hurdles, paving the way for safer and more reliable applications.
Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.
The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.
Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.
The collaboration will focus on developing AI-driven tools for compliance, automation and data analysis, especially in highly regulated industries such as finance and healthcare.
The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.
OpenAI CEO Sam Altman has announced that ChatGPT now reaches 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises and governments.
The figure marks another milestone for the company, which reported 700 million weekly users in August and 500 million at the end of March.
Altman shared the news during OpenAI’s Dev Day keynote, noting that four million developers are now building with OpenAI tools. He said ChatGPT processes more than six billion tokens per minute through its API, signalling how deeply integrated it has become across digital ecosystems.
The event also introduced new tools for building apps directly within ChatGPT and creating more advanced agentic systems. Altman said these will support a new generation of interactive and personalised applications.
OpenAI, still legally a nonprofit, was recently valued at $500 billion following a private stock sale worth $6.6 billion.
Its growing portfolio now includes the Sora video-generation tool, a new social platform, and a commerce partnership with Stripe, consolidating its status as the world’s most valuable private company.
OpenAI has acquired the personal investing startup Roi, which promises AI-driven insights, education, and guidance for individual investors. The Verge reports that the acquisition marks OpenAI’s official entry into the personal finance space.
Following the deal, Roi will shut down its service on October 15 and delete all user data; the company cited the transition in its shutdown announcement. Its offerings included traditional investing options alongside crypto and NFTs.
OpenAI did not publicly disclose the purchase price. With this move, OpenAI takes a step beyond content, tools and agents, toward embedding financial services into its AI ecosystem. It raises the question of how AI platforms might one day offer personalised wealth management or advisory services.
The acquisition also draws regulatory, ethical and trust considerations. Mixing AI with finance means issues like explainability, bias, fiduciary responsibility, data privacy and risk management become immediately relevant. Whether users will embrace AI financial advice depends as much on trust and governance as algorithmic accuracy.
Mercedes-Benz is integrating ChatGPT into its MBUX infotainment system, enhancing voice control for US customers. The AI upgrade allows the Hey Mercedes assistant to understand natural language more effectively, providing detailed responses and conversational interactions.
An optional beta programme is available for over 900,000 vehicles, accessible via the Mercedes me app or through a voice command.
Microsoft’s Azure OpenAI Service powers ChatGPT with enterprise-grade security and reliability. Mercedes-Benz retains full control over IT processes, with voice command data anonymised and stored in the Mercedes-Benz Intelligent Cloud.
Data privacy remains a top priority, ensuring customers are aware of what information is collected and how it is used.
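Anonymising command data before cloud storage typically means dropping unneeded fields and irreversibly hashing direct identifiers. The minimal illustration below assumes hypothetical field names and a salted-hash scheme; it is a generic sketch of the technique, not Mercedes-Benz's actual pipeline:

```python
import hashlib

def anonymise(record, salt):
    """Pseudonymise direct identifiers with a salted one-way hash and
    drop fields that are not needed for the stored record at all."""
    out = dict(record)
    for field in ("vin", "driver_id"):            # hypothetical identifiers
        if field in out:
            digest = hashlib.sha256((salt + out[field]).encode()).hexdigest()
            out[field] = digest[:16]              # pseudonymous, not reversible
    out.pop("gps", None)                          # location dropped entirely
    return out

raw = {"vin": "WDB1234567", "driver_id": "u42",
       "utterance": "Hey Mercedes, find a recipe", "gps": (48.1, 11.6)}
stored = anonymise(raw, salt="rotate-me-regularly")
print(stored)
```

Strictly speaking, salted hashing yields pseudonymised rather than fully anonymous data; real deployments layer this with retention limits and access controls.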
ChatGPT complements the existing capabilities of Hey Mercedes, extending the assistant’s range beyond predefined tasks. Drivers and passengers can now receive detailed information about destinations, recipes, sports, weather, or other queries while keeping their hands on the wheel.
The three-month beta programme will help Mercedes-Benz refine the assistant and guide future rollouts across markets and languages.
Mercedes-Benz emphasises responsible AI integration, aligning ChatGPT with its AI principles. The system is continuously monitored to mitigate potential risks, ensuring innovative features are delivered safely and effectively to customers.