Hackers exploit AI: The hidden dangers of open-source models

As AI adoption grows, security experts warn that malicious actors are finding new ways to exploit vulnerabilities in open-source models.

Yuval Fernbach, CTO of machine learning operations at JFrog, notes that hackers are increasingly embedding harmful code within AI models, making it easier to steal information, manipulate outputs, or disrupt services.
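One well-known vector for this kind of attack, widely documented in model-security research, is Python’s pickle serialisation, which many model file formats are built on: deserialising a pickle file can execute arbitrary code. A minimal, harmless sketch of the mechanism (the payload here just evaluates an expression; a real attack would run shell commands or exfiltrate data):

```python
import pickle

class TaintedModel:
    # pickle invokes __reduce__ during deserialisation, letting an attacker
    # smuggle an arbitrary callable into what looks like a weights file
    def __reduce__(self):
        # harmless stand-in: a real payload would call os.system or similar
        return (eval, ("2 + 2",))

blob = pickle.dumps(TaintedModel())

# merely loading the bytes runs the attacker's code; no method is ever called
result = pickle.loads(blob)
```

This is why scanners flag pickle-based model files for inspection, and why tensor-only formats such as safetensors, which store raw weight data without executable objects, are generally considered safer to load.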

A recent study by JFrog and Hugging Face found that of over one million AI models analysed, around 400 contained malicious code, a rate of roughly 0.04%.

However, the risk has escalated: while the number of available AI models has tripled, attacks have increased sevenfold.

The widespread use of open-source models, often chosen over costly proprietary alternatives, exacerbates security concerns.

Many companies lack proper oversight, with 58% of surveyed firms admitting to having no formal policy for vetting AI models. Meanwhile, banks and other industries worry that AI’s rapid evolution outpaces their ability to implement safeguards.

With agentic AI poised to automate decision-making, businesses face an urgent need to strengthen AI security measures before vulnerabilities lead to significant financial and operational consequences.

For more information on these topics, visit diplomacy.edu.

Trump’s last TikTok call

As the clock ticks toward a 5 April deadline, President Donald Trump is preparing to review a final proposal that could decide the fate of TikTok’s US operations.

A high-stakes Oval Office meeting is set for Wednesday, gathering Vice President JD Vance, Commerce Secretary Howard Lutnick, National Security Adviser Mike Waltz, and Director of National Intelligence Tulsi Gabbard.

The urgency stems from a 2024 law mandating that TikTok divest from Chinese ownership or face a ban on national security grounds.

According to recent reports, a deal may be on the horizon. Trump announced on Sunday that he expects an agreement to be finalised before the deadline.

Central to the negotiations is a group of prominent American investors, including Oracle, private equity firm Blackstone, and venture capital firm Andreessen Horowitz, who are exploring ways to take over TikTok’s US business from Chinese parent company ByteDance.

The strategy appears to centre on consolidating the stakes of ByteDance’s existing non-Chinese investors, such as Susquehanna International Group and General Atlantic, with an infusion of fresh capital.

The involvement of Andreessen Horowitz, one of Silicon Valley’s most influential firms, underscores the political and financial stakes.

Co-founder Marc Andreessen, a Trump ally, is reportedly coordinating efforts to buy out TikTok’s Chinese stakeholders and reshape the platform’s governance under American leadership.

The Financial Times noted that Oracle and other US-based investors are spearheading this initiative, further blurring the lines between political oversight and market acquisition.

Reuters also confirmed that Blackstone is weighing a minority stake in the deal, adding another heavyweight to the potential investor roster.

However, both TikTok and Andreessen Horowitz have declined to comment on the ongoing talks.

Behind the scenes, Trump and his advisors effectively act as intermediaries, with JD Vance reportedly overseeing the auction-like process, a rare move that places the executive branch in a quasi-financial role.

With over 170 million American users, TikTok’s fate is more than just a business matter; it’s a flashpoint in the wider conversation about data sovereignty, tech influence, and US-China digital rivalry.

As negotiations intensify, the Biden-era regulatory stance on tech mergers appears to be giving way to a more deal-oriented, ‘America First’ strategy under Trump.

MetaAI launches in Europe amid data concerns

Meta has resumed the roll-out of its MetaAI assistant across Europe after halting the launch last year due to regulatory uncertainty.

The Irish Data Protection Commission (DPC) still has questions regarding Meta’s AI tool, particularly in relation to its use of personal data from Facebook and Instagram users to train large language models.

The company has been in discussions with the DPC, but no agreement has been reached; the tool remains under review even as the roll-out continues.

MetaAI was first introduced in the US in September 2023, followed by India in June 2024, and the UK in October. It enables users to interact with a chat function across Facebook, Instagram, Messenger, and WhatsApp.

However, its expansion in Europe faced delays last summer due to concerns raised by the Irish privacy watchdog.

The company has expressed confidence in its compliance with the EU’s data protection laws and has been transparent with the DPC about its launch. However, failure to comply with the General Data Protection Regulation (GDPR) could lead to significant fines.

Additionally, certain aspects of MetaAI fall under the scope of Europe’s Digital Services Act (DSA), which requires the company to meet specific standards on user safety and transparency.

The European Commission has indicated it is waiting for a risk assessment from Meta to ensure that the tool complies with DSA obligations. While initial elements may not be directly relevant to the DSA, the Commission will continue to monitor the deployment closely.

European Commission charges €58.2 million in fees for DSA enforcement

The European Commission has charged the largest online platforms in the EU a total of €58.2 million in supervisory fees to fund its enforcement of the Digital Services Act (DSA).

These fees, which apply to platforms with more than 45 million monthly active users in the EU, cover the Commission’s DSA enforcement costs, including administrative and staffing expenses.
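For context, the DSA (Article 43) caps each provider’s annual fee at 0.05% of its worldwide annual net income, with costs apportioned broadly in proportion to monthly active users. A simplified sketch with invented figures (the real delegated regulation’s methodology is more involved):

```python
# Illustrative sketch of DSA supervisory-fee apportionment (Art. 43 DSA):
# estimated enforcement costs are split among designated platforms in
# proportion to monthly active users (MAU), with each platform's share
# capped at 0.05% of its worldwide annual net income.
# All platform figures below are invented for illustration.

def apportion_fees(total_costs: float, platforms: dict) -> dict:
    total_users = sum(p["mau"] for p in platforms.values())
    fees = {}
    for name, p in platforms.items():
        pro_rata = total_costs * p["mau"] / total_users
        cap = 0.0005 * p["net_income"]   # the 0.05% income cap
        fees[name] = min(pro_rata, cap)
    return fees

fees = apportion_fees(
    58.2e6,  # total supervisory fees charged for the period
    {
        "PlatformA": {"mau": 300e6, "net_income": 20e9},
        "PlatformB": {"mau": 100e6, "net_income": 1e9},
    },
)
```

Note that in this simplified model, whenever a cap binds, the total collected falls short of the estimated costs, which is one way a deficit can arise.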

Meta, TikTok, and Google have filed five court cases challenging the fees, all of which are still pending.

The DSA, designed to increase platform accountability, became fully applicable in February 2024, and the Commission has designated 25 Very Large Online Platforms, including major players like Amazon and LinkedIn.

During the 2024 period, the Commission launched formal proceedings against several platforms and sent over 100 requests for information.

However, the fees did not fully cover the Commission’s expenses, leaving a deficit of €514,061. Investigations into platforms such as X are ongoing, with transparency a key concern.

Runway expands AI video capabilities with Gen-4

Runway has unveiled Gen-4, its most advanced AI-powered video generator yet, promising superior character consistency, realistic motion, and world understanding.

The model is now available to individual and enterprise users, allowing them to generate dynamic videos using visual references and text-based instructions.

Backed by investors such as Google and Nvidia, Runway faces fierce competition from OpenAI and Google in the AI video space. The company has differentiated itself by securing Hollywood partnerships and investing heavily in AI-generated filmmaking.

However, it remains tight-lipped about its training data, raising concerns over copyright issues.

Runway is currently facing a lawsuit from artists who accuse the company of training its models on copyrighted works without permission. The company claims fair use as a defence.

Meanwhile, it is reportedly seeking new funding at a $4 billion valuation, with hopes of reaching $300 million in annual revenue. As AI video tools advance, concerns grow over their impact on jobs in the entertainment industry, with thousands of positions at risk.

Apple expands AI features with new update

Apple Intelligence is expanding with new features, including Priority Notifications, which highlight time-sensitive alerts for users. This update is part of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4, rolling out globally.

The AI suite is now available in more languages and has launched in the EU for iPhone and iPad users.

Additional improvements include a new Sketch style in Image Playground and the ability to generate ‘memory movies’ on Mac using simple text descriptions. Vision Pro users in the US can now access Apple Intelligence features like Writing Tools and Genmoji.

Apple’s AI rollout has been gradual since its introduction at WWDC last year, with features arriving in stages.

The update also brings fresh emojis, child safety enhancements, and the debut of Apple News+ Food, further expanding Apple’s digital ecosystem.

OpenAI’s Ghibli-style tool raises privacy and data issues

OpenAI’s Ghibli-style AI image generator has taken social media by storm, with users eagerly transforming their photos into artwork reminiscent of Hayao Miyazaki’s signature style.

However, digital privacy activists are raising concerns that OpenAI might use this viral trend to collect thousands of personal images for AI training, potentially bypassing legal restrictions on web-scraped data.

Critics warn that users enjoying the feature could unknowingly be handing over fresh facial data, raising ethical questions about AI and data collection.

Beyond privacy concerns, the trend has also reignited debates about AI’s impact on creative industries. Miyazaki, known for his hand-drawn approach, has previously expressed scepticism about artificial intelligence in animation.

Additionally, under the GDPR, OpenAI must justify its data collection, typically under ‘legitimate interest’, but experts argue that users voluntarily uploading their images could give the company more freedom to use them without further legal justification.

OpenAI has yet to issue an official statement regarding data safety, but ChatGPT itself warns users against uploading personal photos to any AI tool unless they are certain about its privacy policies.

Cybersecurity experts advise people to think twice before sharing high-resolution images online, use passwords instead of facial recognition for device security, and limit app access to their cameras.

As AI-generated image trends continue to gain popularity, the debate over privacy and data ownership is unlikely to fade anytime soon.

Studio Ghibli AI trend overwhelms OpenAI

A wave of Studio Ghibli-style image generation has taken social media by storm, thanks to OpenAI’s new tool that lets users create art in the beloved animation style. The viral craze began in late March and quickly flooded platforms like TikTok and Instagram.

Initially amused, OpenAI CEO Sam Altman even joined in by updating his profile picture to a Ghibli-inspired version of himself. However, the trend’s popularity soon spiralled out of control, straining the company’s servers and pushing staff to their limits.

Altman has now urged users to ease off, describing the demand as ‘biblical’ and joking that his team needs sleep.

OpenAI plans to introduce temporary usage limits while it works to make the system more efficient. Fans, however, continue to flood Altman’s replies with memes and even more Ghibli art.

DeepSeek overtakes ChatGPT in new visits, report shows

Chinese AI startup DeepSeek has emerged as the world’s fastest-growing AI tool, surpassing ChatGPT in new monthly website visits. In February alone, it recorded over 524 million fresh visits, edging past ChatGPT’s 500 million, according to analytics platform aitools.xyz.

Though still third overall behind ChatGPT and Canva in total traffic, DeepSeek’s market share rose sharply to 6.58%, with 792.6 million visits and 136.5 million unique users. India played a significant role, ranking fourth in traffic contribution with over 43 million monthly visits.

The report shows DeepSeek now holds over 12% of the global chatbot market. With the AI industry seeing more than 12 billion visits and 3 billion unique users last month, the rapid rise of DeepSeek signals intensifying competition in the AI space.

Canada unveils self-assessment tool for privacy breaches

Privacy Commissioner of Canada Philippe Dufresne has introduced an online tool designed to help businesses and federal institutions assess the impact of privacy breaches.

The web-based self-assessment tool guides users through key questions to determine whether a breach poses a real risk of significant harm to individuals.

Organisations governed by the Personal Information Protection and Electronic Documents Act (PIPEDA) and federal institutions must report breaches that could cause harm, including financial loss, identity theft, or damage to reputation.

The tool assists users in evaluating data sensitivity and the likelihood of misuse, helping them determine if they must notify affected individuals and regulators.
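The statutory test under PIPEDA is whether a breach creates a ‘real risk of significant harm’, weighing the sensitivity of the data involved and the probability of misuse. A hypothetical sketch of that two-factor logic (the category names and rules below are illustrative, not the Commissioner’s actual tool):

```python
# Illustrative only: the data categories and decision rule below are
# hypothetical, not the logic of the OPC's actual self-assessment tool.
SENSITIVE_TYPES = {"health", "financial", "government_id", "biometric"}

def real_risk_of_significant_harm(data_types, likely_misuse):
    """A breach is treated as reportable when sensitive data is involved
    and misuse (e.g. after a cyberattack or scam) is reasonably likely."""
    involves_sensitive = bool(set(data_types) & SENSITIVE_TYPES)
    return involves_sensitive and likely_misuse

# A stolen, unencrypted set of health records would meet the test,
# triggering notification of both the regulator and affected individuals.
reportable = real_risk_of_significant_harm({"health", "email"},
                                           likely_misuse=True)
```

In practice the assessment weighs many more factors (encryption, how long the data was exposed, who accessed it), which is exactly what a guided questionnaire is for.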

Privacy breaches can result from cyberattacks, scams, or accidental data exposure, particularly involving sensitive health or financial information.

The Privacy Commissioner’s office aims to streamline risk assessments, ensuring compliance with federal privacy laws while improving data protection standards across Canada.
