US judge says Social Security unlawfully shared data with Musk’s aides

A federal judge has ruled that the Social Security Administration (SSA) likely violated privacy laws by granting Elon Musk’s Department of Government Efficiency (DOGE) unrestricted access to millions of Americans’ personal data.

The ruling halts further data sharing and requires DOGE to delete unlawfully accessed records. United States District Judge Ellen Lipton Hollander stated that while tackling fraud is important, government agencies must not ignore privacy laws to achieve their goals.

The case has drawn attention to the extent of DOGE’s access to sensitive government databases, including Numident, which contains detailed personal information on Social Security applicants.

The SSA’s leadership allowed DOGE staffers to review vast amounts of data in an effort to identify fraudulent payments. Critics, including advocacy groups and labour unions, argue that the process lacked proper oversight and risked compromising individuals’ privacy.

The ruling marks a major legal setback for DOGE, which has been expanding its influence across multiple federal agencies. The White House condemned the decision, calling it judicial overreach, while SSA officials indicated they would comply with the order.

The controversy highlights growing concerns over government data security and the limits of executive power in managing public records.

ChatGPT wrongly accuses man of murder

A Norwegian man has lodged a complaint against OpenAI after ChatGPT falsely claimed he had murdered his two sons and was serving a 21-year prison sentence.

Arve Hjalmar Holmen, who has never been accused of any crime, says the chatbot’s response was deeply damaging, leading him to seek action from the Norwegian Data Protection Authority.

Digital rights group Noyb, representing Holmen, argues the incident violates European data protection laws regarding the accuracy of personal data.

The error highlights a growing concern over AI ‘hallucinations,’ where chatbots generate false information and present it as fact.

Holmen received the incorrect response when searching for his own name, with ChatGPT fabricating a detailed and defamatory account of a crime that never occurred. Although the chatbot carries a disclaimer about potential inaccuracies, Noyb insists this is not enough, arguing that spreading false information cannot be justified by a simple warning label.

AI hallucinations have plagued systems from multiple companies, including Apple and Google; some errors have been merely bizarre, while others have caused real harm.

Experts remain uncertain about the underlying causes of these inaccuracies in large language models, making them a key focus of ongoing research.

While OpenAI has since updated ChatGPT’s model to incorporate current news sources, the case raises questions about accountability and the transparency of AI-generated content.

OpenAI and Google face lawsuits while advocating for AI copyright exceptions

OpenAI and Google have urged the US government to allow AI models to be trained on copyrighted material under fair use.

The companies submitted feedback on the White House’s ‘AI Action Plan,’ arguing that restrictions could slow AI progress and give countries like China a competitive edge. Google stressed the importance of copyright and privacy exceptions, stating that text and data mining provisions are critical for innovation.

Anthropic also responded to the White House’s request but focused more on AI risks to national security and infrastructure rather than copyright concerns.

Meanwhile, OpenAI and Google are facing multiple lawsuits from news organisations and content creators, including Sarah Silverman and George R.R. Martin, who allege their works were used without permission for AI training.

Other companies, including Apple and Nvidia, have also been accused of improperly using copyrighted material, such as YouTube subtitles, to train AI models.

As legal challenges continue, major tech firms remain committed to pushing for regulations that support AI development while navigating the complexities of intellectual property rights.

New AI model by Stability AI creates 3D videos from images

Stability AI has unveiled its latest AI model, Stable Virtual Camera, designed to convert 2D images into dynamic 3D video scenes. Announced in a company blog post, the model enables users to create immersive videos with realistic depth and perspective using up to 32 input images. It generates ‘novel views’ of a scene, offering various preset camera movements, including Spiral, Dolly Zoom, Move, and Pan.

The tool is currently available as a research preview and allows users to generate videos in square (1:1), portrait (9:16), and landscape (16:9) formats, with a maximum length of 1,000 frames. However, Stability AI warns that certain images, such as those with people, animals, or complex textures like water, may produce lower-quality results. Highly ambiguous or irregularly shaped objects may also lead to visual artefacts.

Stable Virtual Camera is available for research use under a non-commercial licence and can be downloaded from AI development platform Hugging Face. The launch follows a turbulent period for Stability AI, which has recently undergone leadership changes, secured new investments, and expanded into new AI applications, including generative audio. With this latest innovation, the company aims to solidify its position in the competitive AI market.
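
For researchers who want to experiment, the weights can be fetched with the standard huggingface_hub client, as in the sketch below; the repository name is an assumption based on the product name, and gated non-commercial models typically require accepting the licence and authenticating with a token first.

```python
# Minimal sketch of downloading the model weights via huggingface_hub
# (pip install huggingface_hub). The repo id is an assumption; gated models
# require `huggingface-cli login` and licence acceptance beforehand.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="stabilityai/stable-virtual-camera",  # assumed repository name
    local_dir="./stable-virtual-camera",
)
print(f"Model files saved to {local_path}")
```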

California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by mandating businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to continue defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

AI innovation in the UK advances with new Google initiatives

Google is intensifying its investment in the UK’s AI sector, with plans to expand its data residency offerings and launch new tools for businesses.

At an event in London, Google DeepMind CEO Demis Hassabis and Google Cloud CEO Thomas Kurian unveiled plans to add Agentspace, Google’s platform for AI agents, to the UK’s data residency region.

The move will allow enterprises to host their AI agents locally, giving them full control over their data.

In addition to the data residency expansion, Google announced new incentives for AI startups in the UK, offering up to £280,000 in Google Cloud credits for those participating in its accelerator programme.

These efforts come as part of a broader strategy to encourage businesses to adopt Google’s AI services over those of competitors. The company is also focusing on expanding AI skills training to help businesses better leverage these advanced technologies.

Google’s efforts align with the UK government’s push to strengthen its position in the global AI landscape. The government has been actively working to promote AI development, with a particular focus on building services that reduce reliance on big tech companies.

By bringing its latest AI offerings to the UK, Google is positioning itself as a key player in the country’s AI future.

UK Technology Secretary uses ChatGPT for advice on media and AI

Technology Secretary Peter Kyle has been using ChatGPT to seek advice on media appearances and to define technical terms related to his role.

His records, obtained by New Scientist through freedom of information laws, reveal that he asked the AI tool for recommendations on which podcasts to feature and for explanations of terms like ‘digital inclusion’ and ‘anti-matter.’

ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists due to their broad reach and scientific focus.

Kyle also inquired why small and medium-sized businesses in the UK have been slow to adopt AI. The chatbot pointed to factors such as a lack of awareness about government initiatives, funding limitations, and concerns over data protection regulations like GDPR.

While AI adoption remains a challenge, Prime Minister Sir Keir Starmer has praised its potential, arguing that the UK government should make greater use of AI to improve efficiency.

Despite Kyle’s enthusiasm for AI, he has faced criticism for allegedly prioritising the interests of Big Tech over Britain’s creative industries. Concerns have been raised over a proposed policy that could allow tech firms to train AI on copyrighted material without permission unless creators opt out.

His department defended his use of AI, stating that while he utilises the tool, it does not replace expert advice from officials.

EU draft AI code faces industry pushback

The tech industry remains concerned about a newly released draft of the Code of Practice on General-Purpose Artificial Intelligence (GPAI), which aims to help AI providers comply with the EU’s AI Act.

The proposed rules, which cover transparency, copyright, risk assessment, and mitigation, have sparked significant debate, especially among copyright holders and publishers.

Industry representatives argue that the draft still presents serious issues, particularly regarding copyright obligations and external risk assessments, which they believe could hinder innovation.

Tech lobby groups, such as the CCIA and DOT Europe, have expressed dissatisfaction with the latest draft, highlighting that it continues to impose burdensome requirements beyond the scope of the original AI Act.

Notably, the mandatory third-party risk assessments both before and after deployment remain a point of contention. Despite some improvements in the new version, these provisions are seen as unnecessary and potentially damaging to the industry.

Copyright concerns remain central, with organisations like News Media Europe warning that the draft still fails to respect copyright law. They argue that requiring AI companies merely to make ‘best efforts’ to avoid using content without proper authorisation does not go far enough.

Additionally, the draft is criticised for failing to fully address fundamental rights risks, which, according to experts, should be a primary concern for AI model providers.

The draft is open for feedback until 30 March, with the final version expected to be released in May. However, the European Commission’s ability to formalise the Code under the AI Act, which comes into full effect in 2027, remains uncertain.

Meanwhile, the issue of copyright and AI is also being closely examined by the European Parliament.

Google enhances Gemini AI with smarter personalisation

Google has announced an update to its Gemini AI assistant, enhancing personalisation to better anticipate user needs and deliver responses that feel more like those of a personal assistant.

The feature, initially available on desktop before rolling out to mobile, allows Gemini to offer tailored recommendations, such as travel ideas, based on search history and, in the future, data from apps like Photos and YouTube.

Users can opt in to the new personalisation features, sharing details like dietary preferences or past conversations to refine responses further.

Google says users must explicitly grant permission before Gemini can access their search history and other services, and that they can disconnect it at any time.

At the same time, this level of contextual awareness could give Google an advantage over rivals such as OpenAI’s ChatGPT by leveraging its vast ecosystem of user data.

The update signals a shift in how users interact with AI, bringing it closer to traditional search while raising questions for publishers and SEO professionals.

As Gemini increasingly provides direct, personalised answers, it may reduce the need for users to visit external websites. While currently experimental, the potential for Google to push broader adoption of AI-driven personalisation could reshape digital content discovery and search behaviour in the future.

OpenAI launches Responses API for AI agent development

OpenAI has unveiled new tools to help developers and businesses build AI agents, which are automated systems that can independently perform tasks. These tools are part of OpenAI’s new Responses API, allowing enterprises to create custom AI agents that can search the web, navigate websites, and scan company files, similar to OpenAI’s existing Operator product. The company plans to phase out its older Assistants API by 2026, replacing it with the new capabilities.

The Responses API gives developers access to search-enabled models, such as GPT-4o search and GPT-4o mini search, which are designed for high factual accuracy. OpenAI claims these models offer more reliable answers than previous versions, with GPT-4o search reportedly achieving 90% accuracy on OpenAI’s SimpleQA benchmark. Additionally, the platform includes a file search feature to help companies quickly retrieve information from their own files. A Computer-Using Agent (CUA) model, which automates tasks like data entry, is also available, allowing developers to automate workflows with greater precision.
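
As a rough illustration, a web search request through the Responses API might look like the Python sketch below; the model name, tool identifier, and output_text accessor follow OpenAI’s launch materials, but treat the details as assumptions and check the current API reference.

```python
# Hypothetical sketch of a Responses API call with the hosted web search
# tool, using the official openai Python package (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",                          # assumed model choice
    tools=[{"type": "web_search_preview"}],  # hosted web search tool
    input="Summarise this week's AI policy news in two sentences.",
)

# output_text aggregates the text segments of the model's reply
print(response.output_text)
```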

Despite its promise, OpenAI acknowledges that there are still challenges to address, such as AI hallucinations and occasional errors in task automation. However, the company continues to improve its models, and the introduction of the Agents SDK gives developers the tools they need to build, debug, and optimise AI agents. OpenAI’s goal is to move beyond demos and create impactful tools that will shape the future of AI in enterprise applications.
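
The Agents SDK itself ships as a separate open-source package. A minimal agent, based on the quickstart pattern OpenAI published at launch (treat the exact names as assumptions if your SDK version differs), looks roughly like this:

```python
# Minimal agent built with the open-source Agents SDK
# (pip install openai-agents).
from agents import Agent, Runner

agent = Agent(
    name="NewsSummariser",
    instructions="You summarise technology news accurately and briefly.",
)

# run_sync drives the agent loop to completion and returns the final result
result = Runner.run_sync(agent, "Explain in one sentence what an AI agent is.")
print(result.final_output)
```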

For more information on these topics, visit diplomacy.edu.