Gender imbalance in EU’s tech industry

A new report has revealed significant gender imbalances across the EU’s tech ecosystem, from education to executive positions. The GENDEX index, funded by the European Innovation Council, found that women remain underrepresented in STEM fields, with only 42% of graduates in 2022 being women.

The imbalance is particularly evident in the information and communication technology (ICT) sector, where just 24% of graduates are women.

This discrepancy feeds through into fewer women founders in deep tech startups: over the past decade, only one in five European tech companies has been led by women.

Women’s representation in academia is also limited, comprising just 31% of researchers and scientists in deep tech. Furthermore, only 24% of patent applications are submitted by women.

The report suggests that this narrowing funnel of opportunities harms the entire tech sector, as talented women are lost along the way. Men continue to dominate leadership positions, with women holding only about 30% of leadership roles in European companies.

The gender gap is most evident at the board level, particularly in male-founded companies.

The study also highlighted the challenges female entrepreneurs face in securing funding. Female-led teams receive just 1% of venture capital funding, and when they do secure investments, they often face less favourable terms and longer waits compared to male-led teams.

The report recommends that investors require gender diversity reporting before providing funding and prioritise women-led companies to address these disparities.

Additionally, experts argue that structural changes are necessary to create a more balanced and effective tech ecosystem, pointing out that gender diversity can lead to better results for companies and the industry as a whole.

New York MTA partners with Google to detect track problems

The Metropolitan Transportation Authority (MTA) in New York City has partnered with Google Public Sector on a pilot program designed to detect track defects before they cause significant disruptions. Using Google Pixel smartphones retrofitted onto subway cars, the system captured millions of sensor readings, GPS locations, and hours of audio to identify potential problems. The project aimed to improve the efficiency of the MTA’s response to track issues, potentially saving time and money while reducing delays for passengers.

The AI-powered program, called TrackInspect, analyses the sounds and vibrations from the subway to pinpoint areas that could signal defects, such as loose rails or worn joints. Data collected during the pilot, which ran from September 2024 to January 2025, showed that the AI system successfully identified 92% of defect locations found by human inspectors. The system was trained using feedback from MTA inspectors, helping refine its ability to predict track issues.
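
The report does not describe TrackInspect's models in detail, but the underlying idea, flagging track locations whose vibration readings deviate sharply from a rolling baseline, can be illustrated with a minimal sketch. The function, window size, threshold, and synthetic data below are assumptions made for illustration, not Google's or the MTA's actual code.

```python
import numpy as np

def flag_anomalous_positions(vibration, positions, window=200, z_threshold=4.0):
    """Return track positions whose vibration reading spikes well above the
    rolling baseline of the preceding readings (illustrative only)."""
    vibration = np.asarray(vibration, dtype=float)
    flagged = []
    for i in range(window, len(vibration)):
        baseline = vibration[i - window:i]           # readings just before this point
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and (vibration[i] - mu) / sigma > z_threshold:
            flagged.append(positions[i])             # e.g. a GPS point or track chainage
    return flagged

# Synthetic example: steady readings with one simulated defect spike at index 500
rng = np.random.default_rng(0)
readings = np.concatenate([rng.normal(1.0, 0.1, 500), [3.5], rng.normal(1.0, 0.1, 100)])
print(flag_anomalous_positions(readings, positions=list(range(len(readings)))))  # flags the spike at 500
```

A deployed system would work with audio spectra and learned models rather than a simple z-score, and would fuse the GPS and sensor streams mentioned above; the sketch only shows the shape of the problem.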

While the pilot was considered a success, the program's future remains uncertain due to financial concerns at the MTA. Even so, the results have sparked interest from other transit systems looking to adopt similar AI-driven technologies to improve infrastructure maintenance and reduce delays. The MTA is now exploring other technological partnerships to enhance its track monitoring and maintenance efforts.

Nagasaki University launches AI program for medical student training

Nagasaki University in southwestern Japan, in collaboration with a local systems development company, has unveiled a new AI program aimed at enhancing medical student training.

The innovative program allows students to practice interviews with virtual patients on a screen, addressing the growing difficulty of securing simulated patients for training, especially in regional areas facing population declines.

In a demonstration earlier this month, an AI-powered virtual patient exhibited symptoms such as fever and cough, responding appropriately to questions from a medical student.

Scheduled for introduction by March 2026, the technology will allow students to interact with virtual patients of different ages, genders, and symptoms, enhancing their learning experience.

The university plans to enhance the program with scoring and feedback functions to make the training more efficient and improve the quality of learning.

Shinya Kawashiri, an associate professor at the university’s School of Medicine, expressed hope that the system would lead to more effective study methods.

Toru Kobayashi, a professor at the university’s School of Information and Data Sciences, highlighted the program as a groundbreaking initiative in Japan’s medical education landscape.

NHS looks into Medefer data flaw after security concerns

The NHS is investigating allegations that a software flaw at private medical services company Medefer left patient data vulnerable to hacking.

The flaw, discovered in November, affected Medefer’s internal patient record system in the UK, which handles 1,500 NHS referrals monthly.

A software engineer who found the issue believes the vulnerability may have existed for six years, but Medefer denies this claim, stating no data has been compromised.

The engineer discovered that unprotected application programming interfaces (APIs) could have allowed outsiders to access sensitive patient information.
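
Medefer's systems have not been described publicly, but the class of flaw reported (an API endpoint reachable without any credential check) and a common mitigation can be sketched in a few lines. The framework (Flask), route, and key store below are hypothetical and chosen purely for illustration.

```python
from functools import wraps
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
VALID_API_KEYS = {"key-issued-to-a-known-client"}  # hypothetical credential store

def require_api_key(view):
    """Reject any request that does not present a recognised API key."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        if request.headers.get("X-API-Key") not in VALID_API_KEYS:
            abort(401)  # unauthenticated callers never reach patient data
        return view(*args, **kwargs)
    return wrapper

@app.route("/referrals/<referral_id>")
@require_api_key  # without a guard like this, the route is open to anyone who finds the URL
def get_referral(referral_id):
    # Placeholder payload; a real system would also check that the caller is
    # authorised for this specific record and would log the access.
    return jsonify({"id": referral_id, "status": "redacted"})

if __name__ == "__main__":
    app.run()
```

The point is simply that every route returning patient data needs an authentication and authorisation check; an endpoint deployed without one is the kind of gap the engineer describes.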

While Medefer has insisted that there is no evidence of any breach, it has commissioned an external security agency to review its systems. The agency confirmed that no breach was found, and the company asserts that the flaw was fixed within 48 hours of being discovered.

Cybersecurity experts have raised concerns about the potential risks posed by the flaw, emphasising that a proper investigation should have been conducted immediately.

Medefer reported the issue to the Information Commissioner’s Office (ICO) and the Care Quality Commission (CQC), both of which found no further action necessary. However, experts suggest that a more thorough response could have been beneficial given the sensitive nature of the data involved.

China expands university enrolment to boost AI talent

China’s top universities are set to expand undergraduate enrolment to develop talent in key strategic fields, particularly AI.

The move follows the rapid rise of AI startup DeepSeek, which has drawn global attention for producing advanced AI models at a fraction of the usual cost.

The company’s success, largely driven by researchers from elite institutions in China, is seen as a major step in Beijing’s efforts to boost its homegrown STEM workforce.

Peking University announced it would add 150 undergraduate spots in 2025 to focus on national strategic needs, particularly in information science, engineering, and clinical medicine.

Renmin University will expand enrolment by over 100 places, aiming to foster innovation in AI. Meanwhile, Shanghai Jiao Tong University plans to add 150 spots dedicated to emerging technologies such as integrated circuits, biomedicine, and new energy.

This expansion aligns with China’s broader strategy to strengthen its education system and technological capabilities. In January, the government introduced a national action plan to enhance education efficiency and innovation by 2035.

Additionally, authorities plan to introduce AI education in primary and secondary schools to nurture digital skills and scientific curiosity from an early age.

Indonesia approves Apple’s local content certificates

Indonesia has granted local content certificates for 20 Apple products, including the iPhone 16, after the company met requirements for locally made components.

Apple still needs further approvals from the communications and trade ministries before it can officially sell the devices in the country.

The certification follows Apple’s recent pledge to invest over $300 million in Indonesia, including funding component manufacturing plants and a research and development centre.

Last year, the country banned iPhone 16 sales due to non-compliance with local content rules.

Industry ministry spokesperson Febri Hendri Antoni Arief confirmed that Apple received certificates for 11 phone models and nine tablets.

However, negotiations had been ‘tricky’, according to Indonesia’s industry minister. Apple remains outside the top five smartphone brands in Indonesia, according to research firm Canalys.

Reddit launches new tools to improve user engagement

Reddit has introduced new tools to help users follow community rules and track content performance, aiming to boost engagement on the platform. The update comes after a slowdown in user growth due to Google’s algorithm changes, though traffic from the search engine has since recovered.

Among the new features is a ‘rules check’ tool, currently being tested on smartphones, which helps users comply with subreddit guidelines. Additionally, a post-recovery option allows users to repost content in alternative subreddits if their original submission is removed. Reddit will also suggest subreddits based on post content and clarify posting requirements for specific communities.

The company has enhanced its post insights feature, offering detailed engagement metrics to help users refine their content. This follows Reddit’s December launch of Reddit Answers, an AI-powered search tool designed to provide curated summaries of community discussions, which is still in beta testing.

US House subpoenas Alphabet over content moderation

The US House Judiciary Committee subpoenaed Alphabet on Thursday, demanding information on its communications with the Biden administration regarding content moderation policies. The committee, led by Republican Jim Jordan, also requested similar communications with external companies and groups.

The subpoena specifically seeks details on discussions about restricting or banning content related to US President Donald Trump, Elon Musk, COVID-19, and other topics of interest to conservatives. Republicans have accused Big Tech companies of suppressing conservative viewpoints, with the Federal Trade Commission warning that coordinating policies or misleading users could breach the law.

Last year, Meta Platforms acknowledged pressure from the Biden administration to censor content, but Alphabet has not publicly distanced itself from similar claims. A Google spokesperson stated the company will demonstrate its independent approach to policy enforcement.

Italy debates Starlink for secure communications

Italy’s ruling League party is urging the government to choose Elon Musk’s Starlink over French-led Eutelsat for secure satellite communications, arguing that Starlink’s technology is more advanced.

Prime Minister Giorgia Meloni’s government is looking for an encrypted communication system for officials operating in high-risk areas, with both Starlink and Eutelsat in talks for the contract.

League leader Matteo Salvini, a strong supporter of US President Donald Trump, has emphasised the need to prioritise US technology over a French alternative.

Meanwhile, Eutelsat’s CEO confirmed discussions with Italy as the country seeks an interim solution before the EU’s delayed IRIS² satellite system becomes operational.

Meloni’s office has stated that no formal negotiations have taken place and that any decision will be made transparently.

However, opposition parties have raised concerns over Starlink’s involvement, given recent speculation that Musk could cut off Ukraine from its service, potentially affecting national security interests.

Musk responded positively to the League’s endorsement, calling it ‘much appreciated’ on his social media platform X.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

Meanwhile, the regulator has previously fined platforms like X (formerly Twitter) and Telegram for failing to meet reporting requirements, with both companies planning to appeal.

For more information on these topics, visit diplomacy.edu.