California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by requiring businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to continue defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

For more information on these topics, visit diplomacy.edu.

AI innovation in the UK advances with new Google initiatives

Google is intensifying its investment in the UK’s AI sector, with plans to expand its data residency offerings and launch new tools for businesses.

At an event in London, Google DeepMind CEO Demis Hassabis and Google Cloud CEO Thomas Kurian unveiled plans to add Agentspace, Google’s platform for AI agents, to the UK’s data residency region.

The move will allow enterprises to host their AI agents locally, ensuring full control over their data.

In addition to the data residency expansion, Google announced new incentives for AI startups in the UK, offering up to £280,000 in Google Cloud credits for those participating in its accelerator programme.

These efforts come as part of a broader strategy to encourage businesses to adopt Google’s AI services over those of competitors. The company is also focusing on expanding AI skills training to help businesses better leverage these advanced technologies.

Google’s efforts align with the UK government’s push to strengthen its position in the global AI landscape. The government has been actively working to promote AI development, with a particular focus on building services that reduce reliance on big tech companies.

By bringing its latest AI offerings to the UK, Google is positioning itself as a key player in the country’s AI future.

For more information on these topics, visit diplomacy.edu.

UK Technology Secretary uses ChatGPT for advice on media and AI

Technology Secretary Peter Kyle has been using ChatGPT to seek advice on media appearances and to define technical terms related to his role.

His records, obtained by New Scientist through freedom of information laws, reveal that he asked the AI tool for recommendations on which podcasts to feature and for explanations of terms like ‘digital inclusion’ and ‘anti-matter.’

ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists due to their broad reach and scientific focus.

Kyle also inquired why small and medium-sized businesses in the UK have been slow to adopt AI. The chatbot pointed to factors such as a lack of awareness about government initiatives, funding limitations, and concerns over data protection regulations like GDPR.

While AI adoption remains a challenge, Prime Minister Sir Keir Starmer has praised its potential, arguing that the UK government should embrace AI more widely to improve efficiency.

Despite Kyle’s enthusiasm for AI, he has faced criticism for allegedly prioritising the interests of Big Tech over Britain’s creative industries. Concerns have been raised over a proposed policy that could allow tech firms to train AI on copyrighted material without permission unless creators opt out.

His department defended his use of AI, stating that while he utilises the tool, it does not replace expert advice from officials.

For more information on these topics, visit diplomacy.edu.

EU draft AI code faces industry pushback

The tech industry remains concerned about a newly released draft of the Code of Practice on General-Purpose Artificial Intelligence (GPAI), which aims to help AI providers comply with the EU’s AI Act.

The proposed rules, which cover transparency, copyright, risk assessment, and mitigation, have sparked significant debate, especially among copyright holders and publishers.

Industry representatives argue that the draft still presents serious issues, particularly regarding copyright obligations and external risk assessments, which they believe could hinder innovation.

Tech lobby groups, such as the CCIA and DOT Europe, have expressed dissatisfaction with the latest draft, highlighting that it continues to impose burdensome requirements beyond the scope of the original AI Act.

Notably, the mandatory third-party risk assessments both before and after deployment remain a point of contention. Despite some improvements in the new version, these provisions are seen as unnecessary and potentially damaging to the industry.

Copyright concerns remain central, with organisations like News Media Europe warning that the draft still fails to respect copyright law. They argue that AI companies should be held to more than merely making ‘best efforts’ to avoid using content without proper authorisation.

Additionally, the draft is criticised for failing to fully address fundamental rights risks, which, according to experts, should be a primary concern for AI model providers.

The draft is open for feedback until 30 March, with the final version expected to be released in May. However, the European Commission’s ability to formalise the Code under the AI Act, which comes into full effect in 2027, remains uncertain.

Meanwhile, the issue of copyright and AI is also being closely examined by the European Parliament.

For more information on these topics, visit diplomacy.edu.

Google enhances Gemini AI with smarter personalisation

Google has announced an update to its Gemini AI assistant, enhancing personalisation to better anticipate user needs and deliver responses that feel more like those of a personal assistant.

The feature, initially available on desktop before rolling out to mobile, allows Gemini to offer tailored recommendations, such as travel ideas, based on search history and, in the future, data from apps like Photos and YouTube.

Users can opt in to the new personalisation features, sharing details like dietary preferences or past conversations to refine responses further.

Google says users must explicitly grant permission for Gemini to access search history and other services, and that they can disconnect it at any time.

At the same time, this level of contextual awareness could give Google an advantage over competitors like ChatGPT by leveraging its vast ecosystem of user data.

The update signals a shift in how users interact with AI, bringing it closer to traditional search while raising questions for publishers and SEO professionals.

As Gemini increasingly provides direct, personalised answers, it may reduce the need for users to visit external websites. While currently experimental, the potential for Google to push broader adoption of AI-driven personalisation could reshape digital content discovery and search behaviour in the future.

For more information on these topics, visit diplomacy.edu.

OpenAI launches Responses API for AI agent development

OpenAI has unveiled new tools to help developers and businesses build AI agents, which are automated systems that can independently perform tasks. These tools are part of OpenAI’s new Responses API, allowing enterprises to create custom AI agents that can search the web, navigate websites, and scan company files, similar to OpenAI’s existing Operator product. The company plans to phase out its older Assistants API by 2026, replacing it with the new capabilities.

The Responses API provides developers with access to powerful AI models, such as GPT-4o search and GPT-4o mini search, which are designed for high factual accuracy. OpenAI claims these models can offer more reliable answers than previous versions, with GPT-4o search achieving a 90% accuracy rate. Additionally, the platform includes a file search feature to help companies quickly retrieve information from their databases. The Computer-Using Agent (CUA) model, which automates tasks like data entry, is also available, allowing developers to automate workflows with more precision.
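To illustrate what building on the Responses API can look like, below is a minimal sketch in Python using the openai SDK. The model name, tool type, and prompt are illustrative assumptions for this example rather than details confirmed here.

```python
# Minimal sketch: a web-search-enabled request through OpenAI's Responses API.
# Assumes the openai Python SDK is installed and OPENAI_API_KEY is set in the
# environment. The model name and tool type are illustrative and may differ
# from the exact identifiers OpenAI ships.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",                          # hypothetical model choice
    tools=[{"type": "web_search_preview"}],  # allow the agent to search the web
    input="Summarise this week's AI policy headlines in two sentences.",
)

# The SDK collects the model's text output into a single convenience field.
print(response.output_text)
```

In a fuller agent, the same request could also attach a file search tool pointed at a company document store, the pattern described above for scanning company files.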

Despite its promise, OpenAI acknowledges that there are still challenges to address, such as AI hallucinations and occasional errors in task automation. However, the company continues to improve its models, and the introduction of the Agents SDK gives developers the tools they need to build, debug, and optimise AI agents. OpenAI’s goal is to move beyond demos and create impactful tools that will shape the future of AI in enterprise applications.

For more information on these topics, visit diplomacy.edu.

Migrants urged to use new app to self-deport under Trump policy

The Trump administration has introduced a new app that allows undocumented migrants in the US to self-deport rather than risk arrest and detention.

The United States Customs and Border Protection (CBP) app, called CBP Home, includes an option for individuals to signal their ‘intent to depart.’ Homeland Security Secretary Kristi Noem said the app gives migrants a chance to leave voluntarily and potentially return legally in the future.

Noem warned that those who do not leave will face deportation and a lifetime ban from re-entering the country. The administration has stepped up pressure on undocumented migrants, with new regulations set to take effect in April requiring them to register with the government or face fines and jail time.

The launch of CBP Home follows Trump’s decision to shut down CBP One, a Biden-era app that allowed migrants in Mexico to schedule asylum appointments. The move left thousands of migrants stranded at the border with uncertain prospects.

Trump has pledged to carry out record deportations, although his administration’s current removal numbers lag behind those recorded under President Joe Biden.

The CBP Home app marks a shift in immigration policy, aiming to encourage voluntary departures while tightening enforcement measures against those who remain illegally.

For more information on these topics, visit diplomacy.edu.

New digital health file system revolutionises medical data management in Greece

A new electronic health file system is launching on Tuesday in a preliminary form, aiming to provide doctors with an easier, safer, and more reliable way to access Greek patients’ medical histories.

The platform, expected to be fully operational by the end of the year, will store comprehensive records for every patient with a social security number (AMKA).

Once completed, the system will compile detailed medical histories, including hospital admissions, surgeries, diagnostic tests, prescriptions, vaccinations, allergies, and treatment protocols.

An upgrade like this will significantly streamline healthcare access for both doctors and patients.

The enhanced MyHealth app will eliminate the need for patients to carry test results or verbally summarise their medical history.

It is particularly expected to benefit people with disabilities, as the entire process of claiming benefits will be handled electronically, removing the need for in-person evaluations by specialist committees.

For more information on these topics, visit diplomacy.edu.

Authors challenge Meta’s use of their books in AI training

A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Meta has taken a significant step forward, with a federal judge ruling that the case will continue.

The authors allege that Meta used their books to train its Llama AI models without consent, violating their intellectual property rights.

They further claim that Meta intentionally removed copyright management information (CMI) from the works to conceal the alleged infringement.

Meta, however, defends its actions, arguing that the training of AI models qualifies as fair use and that the authors lack standing to sue.

Despite this, the judge allowed the lawsuit to move ahead, acknowledging that the authors’ claims suggest concrete injury, specifically regarding the removal of CMI to hide the use of copyrighted works.

While the lawsuit touches on several legal points, the judge dismissed claims related to the California Comprehensive Computer Data Access and Fraud Act, stating that there was no evidence of Meta accessing the authors’ computers or servers.

Meta’s defence team has continued to assert that the AI training practices were legally sound, though the ongoing case will likely provide more insight into the company’s stance on copyright.

The ruling adds to the growing list of copyright-related lawsuits involving AI models, including one filed by The New York Times against OpenAI. As the debate around AI and intellectual property rights intensifies, this case could set important precedents.

For more information on these topics, visit diplomacy.edu.

China expands university enrolment to boost AI talent

China’s top universities are set to expand undergraduate enrolment to develop talent in key strategic fields, particularly AI.

The move follows the rapid rise of AI startup DeepSeek, which has drawn global attention for producing advanced AI models at a fraction of the usual cost.

The company’s success, largely driven by researchers from elite institutions in China, is seen as a major step in Beijing’s efforts to boost its homegrown STEM workforce.

Peking University announced it would add 150 undergraduate spots in 2025 to focus on national strategic needs, particularly in information science, engineering, and clinical medicine.

Renmin University will expand enrolment by over 100 places, aiming to foster innovation in AI. Meanwhile, Shanghai Jiao Tong University plans to add 150 spots dedicated to emerging technologies such as integrated circuits, biomedicine, and new energy.

This expansion aligns with China’s broader strategy to strengthen its education system and technological capabilities. In January, the government introduced a national action plan to enhance education efficiency and innovation by 2035.

Additionally, authorities plan to introduce AI education in primary and secondary schools to nurture digital skills and scientific curiosity from an early age.

For more information on these topics, visit diplomacy.edu.