Sanders warns AI could erase 100 million US jobs

Senator Bernie Sanders has warned that AI and automation could eliminate nearly 100 million US jobs within the next decade unless stronger worker protections are introduced.

The warning draws on a report titled The Big Tech Oligarchs’ War Against Workers, which claims that companies such as Amazon, Walmart, JPMorgan Chase, and UnitedHealth already use AI to reduce their workforces while rewarding executives with multimillion-dollar pay packages.

According to the findings, nearly 90% of US fast-food workers, two-thirds of accountants, and almost half of truck drivers could see their jobs replaced by automation. Sanders argues that technological progress should enhance people’s lives rather than displace workers.

His proposals include introducing a 32-hour workweek without loss of pay, a ‘robot tax’ for companies that replace human labour, and giving workers a share of profits and board representation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic launches Bengaluru office to drive responsible AI in India

AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as its base.

The move follows OpenAI’s recent expansion into New Delhi, underlining India’s growing importance as a hub for AI development and adoption.

CEO Dario Amodei said India’s combination of vast technical talent and the government’s commitment to equitable AI progress makes it an ideal location.

The Bengaluru office will focus on developing AI solutions tailored to India’s needs in the education, healthcare, and agriculture sectors.

Amodei is visiting India to strengthen ties with enterprises, nonprofits, and startups and promote responsible AI use that is aligned with India’s digital growth strategy.

Following its Tokyo launch, Anthropic plans further expansion in the Indo-Pacific region later in the year.

Chief Commercial Officer Paul Smith noted the rising demand among Indian companies for trustworthy, scalable AI systems. Anthropic’s Claude models are already accessible in India through its API, Amazon Bedrock, and Google Cloud Vertex AI.

The company serves more than 300,000 businesses worldwide, with nearly 80 percent of usage outside the US.

India has become the second-largest market for Claude, with developers using it for tasks such as mobile UI design and web app debugging.
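
For developers experimenting with such tasks, access goes through Anthropic's Messages API (the same interface surfaced via Amazon Bedrock and Vertex AI). The sketch below assembles the JSON body for a single-turn request; the model name and the debugging prompt are illustrative placeholders, not values from the article.

```python
import json

# Anthropic's Messages API endpoint (the hosted variants on Bedrock and
# Vertex AI accept the same message structure).
API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-sonnet-4-5") -> dict:
    """Assemble the JSON body for a single-turn Messages API call.
    The model name here is a placeholder; pick one from Anthropic's docs."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# Example: a web-app debugging question of the kind the article mentions.
body = build_request("Why does my web app's login form submit twice?")
print(json.dumps(body, indent=2))
```

Sending this body as a POST to the endpoint (with an API key header) returns the model's reply; the payload shape is the part worth noting.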

Anthropic is also enhancing Claude’s multilingual capabilities in major Indic languages, including Hindi, Bengali, and Tamil, to support education and public sector projects.

OpenAI unveils AgentKit for faster AI agent creation

OpenAI has launched AgentKit, a new suite of developer tools designed to simplify the creation, deployment, and optimisation of AI-powered agents. The platform unifies workflows that previously required multiple systems, offering a faster and more visual way to build intelligent applications.

AgentKit includes Agent Builder, Connector Registry, ChatKit, and advanced evaluation tools. Developers can now design multi-agent workflows on a visual canvas, manage data connections across workspaces, and integrate chat-based agents directly into apps and websites.

Early users such as Ramp and LY Corporation built working agents in just a few hours, cutting development cycles by up to 70%. Companies including Canva and HubSpot have used ChatKit to embed conversational support agents, transforming customer experience and developer engagement.

New evaluation features and reinforcement fine-tuning allow users to test, grade, and improve agents’ reasoning abilities. AgentKit is now available to developers and enterprises through OpenAI’s API and ChatGPT Enterprise, with a wider rollout expected later this year.
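
The grading half of that workflow boils down to a familiar loop, sketched conceptually below. This is not the AgentKit API: `toy_agent`, the test cases, and the exact-match scoring are all illustrative stand-ins for a real agent call and a real rubric.

```python
def toy_agent(question: str) -> str:
    """Stand-in for a real agent call; answers from a tiny canned table."""
    canned = {"capital of France?": "Paris", "2 + 2?": "4"}
    return canned.get(question, "unknown")

def grade(agent, cases: list[tuple[str, str]]) -> float:
    """Run the agent over (question, expected) pairs and return the
    fraction that match exactly -- the simplest possible grader."""
    passed = sum(1 for q, expected in cases if agent(q) == expected)
    return passed / len(cases)

score = grade(toy_agent, [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("3 + 3?", "6"),   # the toy agent fails this one
])
print(f"accuracy: {score:.2f}")
```

Production evaluation tools replace the exact-match check with richer graders (model-based scoring, rubric checks) and feed the failures back into fine-tuning, but the test-grade-improve cycle has this shape.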

Bulgaria eyes AI gigafactory partnership with IBM

Bulgaria is considering building an AI gigafactory in partnership with IBM and the European Commission, Prime Minister Rosen Zhelyazkov announced after meeting with IBM executives in Sofia. The project aims to attract large-scale high-tech investment and strengthen Europe’s AI infrastructure.

The proposed facility would feature over 100,000 advanced GPU chips and require up to 500 megawatts of power. The initial phase alone is expected to need around 70 megawatts, highlighting the scale of the planned operation.

Funding could come through a public-private partnership, with the European Commission covering up to 17 percent of capital costs and EU member states contributing additional support for this Bulgarian project.

IBM is considered a strategic technology partner, bringing expertise in cloud computing, cybersecurity, and AI systems. The first gigafactories across Europe are expected to begin operations between 2027 and 2028, aligning with the EU’s plan to mobilise €200 billion for AI development.

New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and Asia-Pacific reported rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported spending at least six figures during just one hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.

Scammers use AI to fake British boutiques

Fraudsters are using AI-generated images and back stories to pose as British family businesses, luring shoppers into buying cheap goods from Asia. Websites claiming to be long-standing local boutiques have been linked to warehouses in China and Hong Kong.

Among them is C’est La Vie, which presented itself as a Birmingham jeweller run by a couple called Eileen and Patrick. The supposed owners appeared in highly convincing AI-generated photos, while customers later discovered their purchases were shipped from China.

Victims described feeling cheated after receiving poor-quality jewellery and clothes that bore no resemblance to the advertised items. More than 500 complaints on Trustpilot accuse such companies of exploiting fabricated stories to appear authentic.

Consumer experts at Which? warn that AI tools now enable scammers to create fake brands at an unprecedented scale. The Advertising Standards Authority (ASA) has called on social media platforms to act, as many victims were targeted through Facebook ads.

AI tools reshape how Gen Z approaches buying cars

Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.

Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.

Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.

Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.

Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk exploring how human-centred and humanistic language in AI policy is widespread, but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.

For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
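
Differential testing, one of the techniques named above, can be illustrated in miniature: run a patched function against the original on many inputs and flag any behavioural divergence. This is a toy sketch, not CodeMender's implementation; the midpoint functions and input ranges are invented for illustration.

```python
def original_avg(a: int, b: int) -> int:
    """Original midpoint: (a + b) can overflow in fixed-width languages."""
    return (a + b) // 2

def patched_avg(a: int, b: int) -> int:
    """Patched, overflow-safe formulation of the same midpoint."""
    return a + (b - a) // 2

def differential_test(f, g, inputs) -> list:
    """Return every input on which the two implementations disagree;
    an empty list means the patch introduced no behavioural regression."""
    return [x for x in inputs if f(*x) != g(*x)]

# Exhaustively compare the two versions on a grid of ordered pairs.
pairs = [(a, b) for a in range(-50, 50) for b in range(a, 50)]
disagreements = differential_test(original_avg, patched_avg, pairs)
print(f"{len(disagreements)} disagreements across {len(pairs)} input pairs")
```

Real systems pair this with fuzzing to generate the inputs and with static analysis to pick which functions to compare, but the regression check itself is this simple equivalence sweep.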

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Deloitte’s AI blunder: A costly lesson in consultancy business

Deloitte has agreed to refund the Australian government the full amount of $440,000 after acknowledging major errors in a consultancy report concerning welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes attributed to a Federal Court case on the Robodebt scheme and fictitious academic references.

The incident underscores the challenges of deploying AI in critical government consultancy work without sufficient human oversight, and raises questions about the credibility of policy decisions influenced by such flawed reports.

In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.

The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis and to review oversight mechanisms to prevent future occurrences. The report’s inaccuracies had previously swayed discussions on welfare compliance, shaking public trust in the consultancy services used for critical policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.
