Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.
The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.
For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk reducing humans to mere rubber stamps.
Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.
He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman has announced that ChatGPT now reaches 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises and governments.
The figure marks another milestone for the company, which reported 700 million weekly users in August and 500 million at the end of March.
Altman shared the news during OpenAI’s Dev Day keynote, noting that four million developers are now building with OpenAI tools. He said ChatGPT processes more than six billion tokens per minute through its API, signalling how deeply integrated it has become across digital ecosystems.
The event also introduced new tools for building apps directly within ChatGPT and creating more advanced agentic systems. Altman said these will support a new generation of interactive and personalised applications.
OpenAI, still legally a nonprofit, was recently valued at $500 billion following a private stock sale worth $6.6 billion.
Its growing portfolio now includes the Sora video-generation tool, a new social platform, and a commerce partnership with Stripe, consolidating its status as the world’s most valuable private company.
Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.
An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.
Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.
Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.
The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.
A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.
Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.
The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.
While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.
Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.
The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.
A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.
The pace of disruption is not significantly faster than historical benchmarks.
Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.
Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older graduates, suggesting that AI’s impact on early careers remains modest and difficult to isolate.
Exposure, automation, and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest stability in the share of workers most affected by AI, including in unemployment among that group.
Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.
The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.
The actors’ union responded swiftly, warning that Tilly was trained on the work of countless performers without their consent or compensation. It also reminded producers that hiring her would involve dealing with the union.
The episode highlights two key lessons for business leaders in any industry. First, never assume a technology’s current limitations are permanent. Some commentators, including Whoopi Goldberg, have argued that AI actors pose little threat because their physical movements still appear noticeably artificial.
The second lesson concerns human behaviour. People are often irrational; their preferences can upend even the most carefully planned strategies. Producers avoided publicising actors’ names in Hollywood’s early years to maintain control.
Audiences, however, demanded to know everything about the stars they admired, forcing studios to adapt. This human attachment created the star system that shaped the industry. Whether audiences will embrace AI performers like Tilly remains uncertain, but cultural and emotional factors will play a decisive role.
Hollywood offers a high-profile glimpse of the challenges and opportunities of advanced AI. As other sectors face similar disruptions, business leaders may find that technology alone does not determine outcomes.
The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.
Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.
The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.
Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.
The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.
The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.
The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.
The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.
AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.
In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.
It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.
Perplexity has made its Comet AI browser available to everyone for free, widening access beyond its paid user base. The browser, launched three months ago for Max subscribers, introduces new tools designed to turn web browsing into an AI-driven task assistant.
The company describes Comet as a ‘browser for agentic search’, referring to autonomous software agents capable of handling multi-step tasks for users.
Free users can access the sidecar assistant alongside tools for shopping comparisons, travel planning, budgeting, sports updates, project management, and personalised recommendations.
Max subscribers gain early access to more advanced features, including a background assistant likened to a personal mission control dashboard. The tool can draft emails, book tickets, find flights, and integrate with apps on a user’s computer, running tasks in the background with minimal intervention.
Pro users also retain access to advanced AI models and media generation tools.
Perplexity is further introducing Comet Plus, a $5-per-month standalone subscription service that acts as an AI-powered alternative to Apple News. Current Pro and Max subscribers will receive the service automatically.
The move signals Perplexity’s ambition to expand its ecosystem while balancing free accessibility with premium AI features.
A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.
The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.
Although a chronological feed is already available, it is hidden and does not persist. The court said Meta must make the setting accessible on the homepage and in the Reels section and ensure it stays in place when the apps are restarted.
If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.
Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.
The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.
Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.