Migrants urged to use new app to self-deport under Trump policy

The Trump administration has introduced a new app that allows undocumented migrants in the US to self-deport rather than risk arrest and detention.

The United States Customs and Border Protection (CBP) app, called CBP Home, includes an option for individuals to signal their ‘intent to depart’. Homeland Security Secretary Kristi Noem said the app gives migrants a chance to leave voluntarily and potentially return legally in the future.

Noem warned that those who do not leave will face deportation and a lifetime ban from re-entering the country. The administration has stepped up pressure on undocumented migrants, with new regulations set to take effect in April requiring them to register with the government or face fines and jail time.

The launch of CBP Home follows Trump’s decision to shut down CBP One, a Biden-era app that allowed migrants in Mexico to schedule asylum appointments. The move left thousands of migrants stranded at the border with uncertain prospects.

Trump has pledged to carry out record deportations, although his administration’s current removal numbers lag behind those recorded under President Joe Biden.

The CBP Home app marks a shift in immigration policy, aiming to encourage voluntary departures while tightening enforcement measures against those who remain illegally.

For more information on these topics, visit diplomacy.edu.

New digital health file system revolutionises medical data management in Greece

A new electronic health file system is launching on Tuesday in a preliminary form, aiming to provide doctors with an easier, safer, and more reliable way to access Greek patients’ medical histories.

The platform, expected to be fully operational by the end of the year, will store comprehensive records for every patient with a social security number (AMKA).

Once completed, the system will compile detailed medical histories, including hospital admissions, surgeries, diagnostic tests, prescriptions, vaccinations, allergies, and treatment protocols.

Upgrades like this one will significantly streamline healthcare access for both doctors and patients.

The enhanced MyHealth app will eliminate the need for patients to carry test results or verbally summarise their medical history.

It is particularly expected to benefit people with disabilities, as the entire process of claiming benefits will be handled electronically, removing the need for in-person evaluations by specialist committees.

Authors challenge Meta’s use of their books in AI training

A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Meta has taken a significant step forward as a federal judge has ruled that the case will continue.

The authors allege that Meta used their books to train its Llama AI models without consent, violating their intellectual property rights.

They further claim that Meta intentionally removed copyright management information (CMI) from the works to conceal the alleged infringement.

Meta, however, defends its actions, arguing that the training of AI models qualifies as fair use and that the authors lack standing to sue.

Despite this, the judge allowed the lawsuit to move ahead, acknowledging that the authors’ claims suggest concrete injury, specifically regarding the removal of CMI to hide the use of copyrighted works.

While the lawsuit touches on several legal points, the judge dismissed claims related to the California Comprehensive Computer Data Access and Fraud Act, stating that there was no evidence of Meta accessing the authors’ computers or servers.

Meta’s defence team has continued to assert that the AI training practices were legally sound, though the ongoing case will likely provide more insight into the company’s stance on copyright.

The ruling adds to the growing list of copyright-related lawsuits involving AI models, including one filed by The New York Times against OpenAI. As the debate around AI and intellectual property rights intensifies, this case could set important precedents.

China expands university enrolment to boost AI talent

China’s top universities are set to expand undergraduate enrolment to develop talent in key strategic fields, particularly AI.

The move follows the rapid rise of AI startup DeepSeek, which has drawn global attention for producing advanced AI models at a fraction of the usual cost.

The company’s success, largely driven by researchers from elite institutions in China, is seen as a major step in Beijing’s efforts to boost its homegrown STEM workforce.

Peking University announced it would add 150 undergraduate spots in 2025 to focus on national strategic needs, particularly in information science, engineering, and clinical medicine.

Renmin University will expand enrolment by over 100 places, aiming to foster innovation in AI. Meanwhile, Shanghai Jiao Tong University plans to add 150 spots dedicated to emerging technologies such as integrated circuits, biomedicine, and new energy.

This expansion aligns with China’s broader strategy to strengthen its education system and technological capabilities. In January, the government introduced a national action plan to enhance education efficiency and innovation by 2035.

Additionally, authorities plan to introduce AI education in primary and secondary schools to nurture digital skills and scientific curiosity from an early age.

Reddit launches new tools to improve user engagement

Reddit has introduced new tools to help users follow community rules and track content performance, aiming to boost engagement on the platform. The update comes after a slowdown in user growth due to Google’s algorithm changes, though traffic from the search engine has since recovered.

Among the new features is a ‘rules check’ tool, currently being tested on smartphones, which helps users comply with subreddit guidelines. Additionally, a post-recovery option allows users to repost content in alternative subreddits if their original submission is removed. Reddit will also suggest subreddits based on post content and clarify posting requirements for specific communities.

The company has enhanced its post insights feature, offering detailed engagement metrics to help users refine their content. This follows Reddit’s December launch of Reddit Answers, an AI-powered search tool designed to provide curated summaries of community discussions, which is still in beta testing.

Zalando challenges EU tech rules, seeks exemption

Zalando, Europe’s leading online fashion retailer, has filed a legal challenge against the European Commission’s classification of the company under the Digital Services Act (DSA). The company argues that, unlike platforms such as Amazon and AliExpress, its business model does not fit into the ‘very large online platform’ (VLOP) category, and it should not face the same stringent regulations.

The DSA, which came into force in 2022, requires VLOPs to take additional measures to manage harmful and illegal content or face significant fines. Zalando’s lawyer, Robert Briske, pointed out that the company operates a hybrid model, offering both its own products and those from third-party partners, making it distinct from other online platforms that purely function as marketplaces.

The European Commission contends that Zalando’s business model is similar to those of Amazon and AliExpress. The Commission’s lawyer, Liane Wildpanner, argued that Zalando is seeking to benefit from the flexibility of a hybrid model without bearing the regulatory burden of platforms like Amazon.

Zalando’s case is supported by Germany’s e-commerce association, BEVH, while other EU bodies, including the European Parliament, have sided with the Commission. The General Court is expected to deliver a ruling in the coming months.

US national security threatened by large-scale federal workforce reductions

A former top National Security Agency official has warned that widespread federal job cuts could severely undermine US cybersecurity and national security.

Rob Joyce, former NSA director of cybersecurity, told a congressional committee that eliminating probationary employees would weaken the government’s ability to combat cyber threats, particularly those from China.

The remarks were made during a House Select Committee hearing on China’s cyber operations targeting critical US infrastructure and telecommunications.

More than 100,000 federal workers have left their jobs through early retirement or layoffs as part of President Donald Trump’s efforts to shrink government agencies, with support from billionaire advisor Elon Musk.

While national security roles were supposed to be exempt, some cybersecurity positions have still been affected.

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has already cut over 130 positions, raising concerns about the government’s ability to protect critical systems.

The White House and NSA declined to comment on the impact of the job reductions.

A DHS spokesperson confirmed that the cuts are expected to save $50 million and that further reductions in ‘wasteful positions’ are being considered.

However, critics argue that the loss of skilled personnel in cybersecurity roles could leave the country more vulnerable to foreign threats.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

Meanwhile, the regulator has previously fined platforms like X (formerly Twitter) and Telegram for failing to meet reporting requirements, with both companies planning to appeal.

Google unveils experimental AI search for premium users

Google has introduced an experimental version of its search engine that removes the traditional 10 blue links in favour of AI-generated summaries.

The new ‘AI Mode’ is available to subscribers of Google One AI Premium, a $19.99 per month plan, and can be accessed through a tab alongside existing options like Images and Maps.

Instead of standard search results, users will see a detailed AI summary with hyperlinks to cited sources, along with a search bar for follow-up questions.

The feature is powered by a customised version of Google’s Gemini 2.0 model, designed to handle complex queries more effectively.

AI Overviews, which provide summaries atop search results, are already available in over 100 countries, with advertisements integrated into them since last May. Google says the new AI-driven approach responds to demand from ‘power users’ seeking more AI-generated responses.

As Google pushes deeper into AI-powered search, it faces competition from Microsoft-backed OpenAI, which introduced search capabilities to ChatGPT last October.

The shift has raised concerns among content creators, with edtech company Chegg suing Google in February, alleging that AI previews are reducing demand for original content and hurting publishers’ ability to compete.

Antitrust probe into Microsoft and OpenAI ends in the UK

The UK Competition and Markets Authority (CMA) has concluded its investigation into Microsoft’s partnership with OpenAI, deciding not to move forward with a merger probe.

The decision comes after the CMA found that Microsoft does not hold enough control over OpenAI, a key factor in triggering a merger review. The companies’ collaboration began in 2019, when Microsoft invested $1 billion in the AI startup.

The CMA concluded that, despite Microsoft’s substantial investment, its influence had not evolved to the level of de facto control required for further scrutiny.

This marks the end of the UK’s formal investigation into the deal, although the CMA clarified that its decision should not be interpreted as a dismissal of potential competition concerns related to the partnership.

While the investigation is closed, the CMA has been increasingly active in examining major tech company acquisitions, particularly those involving AI startups.

Microsoft welcomed the CMA’s decision, asserting that its ongoing partnership with OpenAI fosters innovation and competition in AI development.

Meanwhile, the CMA continues to monitor the tech sector, with broader powers to investigate companies deemed to hold ‘strategic market status’.
