Paris-based startup Neuralk-AI has raised $4 million to develop AI models tailored for structured data, such as databases and spreadsheets. Unlike traditional AI, which excels at unstructured content like images and text, Neuralk-AI’s approach aims to help businesses extract deeper insights from their existing data warehouses. Retailers, in particular, could benefit from its models, using AI to optimise inventory, detect fraud, and refine customer recommendations.
The company, co-founded by Alexandre Pasquiou, plans to launch its AI models as an API for data scientists in commerce-focused industries. By automating complex workflows and enhancing data analysis, Neuralk-AI hopes to offer a more efficient alternative to traditional machine learning tools. The startup is already collaborating with major French retailers such as E.Leclerc and Auchan to test its technology.
Backed by Fly Ventures, SteamAI, and industry leaders including Hugging Face’s Thomas Wolf, Neuralk-AI is working towards becoming the leading AI solution for structured data. The first version of its model is expected to launch in the coming months, with a full benchmark release planned for later this year.
Bengaluru-based startup Presentations.ai has raised $3 million in a seed round led by Accel to enhance its AI-powered platform for creating business presentations. The company, which launched in 2019, saw rapid growth after the emergence of ChatGPT, gaining over a million users within three months of its beta release. Now, with over 5 million users worldwide, it aims to become the go-to AI tool for generating high-quality presentation decks.
The Indian platform uses advanced language models to streamline the presentation-making process, offering features like automated slide design, brand-aligned templates, and real-time collaboration. It also integrates text-to-image AI models, allowing users to generate custom visuals effortlessly. With a freemium model introduced in 2024, the startup has attracted tens of thousands of paying users, further solidifying its market presence.
With backing from key investors, including entrepreneurs from Paytm, CRED, and Freshworks, Presentations.ai is now working on an AI-powered assistant that can generate slides within any application. The company is also expanding its enterprise sales team to target businesses looking for more efficient ways to create presentations.
Meta has introduced a new policy framework outlining when it may restrict the release of its AI systems due to security concerns. The Frontier AI Framework categorises AI models into ‘high-risk’ and ‘critical-risk’ groups, with the latter referring to those capable of aiding catastrophic cyber or biological attacks. If an AI system is classified as a critical risk, Meta will suspend its development until safety measures can be implemented.
The company’s evaluation process does not rely solely on empirical testing but also considers input from internal and external researchers. This approach reflects Meta’s belief that existing evaluation methods are not yet robust enough to provide definitive risk assessments. Despite its historically open approach to AI development, the company acknowledges that some models could pose unacceptable dangers if released.
By outlining this framework, Meta aims to demonstrate its commitment to responsible AI development while distinguishing its approach from other firms with fewer safeguards. The policy comes amid growing scrutiny of AI’s potential misuse, especially as open-source models gain wider adoption.
Taiwan has officially banned government agencies from using DeepSeek AI, citing security risks and concerns over potential data exposure to China. The move strengthens previous guidance, which only advised against its use.
Premier Cho Jung-tai announced the decision after a cabinet meeting, stressing the importance of safeguarding national information security. Officials raised fears of censorship built into DeepSeek and the risk of sensitive data being transferred to China.
The digital ministry had initially stated on Friday that government departments should avoid the AI service but did not explicitly prohibit it. The latest announcement formalises the ban, aligning with Taiwan’s broader approach to restricting Chinese technology.
Authorities in several other countries, including South Korea, France, Italy, and Ireland, have also scrutinised DeepSeek’s handling of personal data.
Australia has imposed sanctions on the extremist online network ‘Terrorgram’ in an effort to combat rising antisemitism and online radicalisation. Foreign Minister Penny Wong stated that engaging with the group would now be a criminal offence, helping to prevent young people from being drawn into far-right extremism. The move follows similar actions by Britain and the US.
Wong described ‘Terrorgram’ as a network that promotes white supremacy and racially motivated violence, making it the first entirely online entity to face Australian counterterrorism financing sanctions. Offenders could face up to 10 years in prison and substantial fines. Sanctions were also renewed against four other right-wing groups, including the Russian Imperial Movement and The Base.
The network primarily operates on the Telegram platform, which stated that it has long banned such content and removed related channels. The US designated ‘Terrorgram’ as a violent extremist group in January, while Britain criminalised affiliation with it in April.
Australia has seen a rise in antisemitic incidents, including attacks on synagogues and vehicles, since the Israel-Gaza conflict began in October 2023. Police recently arrested members of a neo-Nazi group in Adelaide and charged a man with displaying a Nazi symbol on Australia Day.
Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.
Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.
Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.
Norwegian-founded startup Tana has raised $25 million to fuel its AI-powered productivity platform, which has already drawn significant attention with a waitlist of over 160,000 users. The company’s software uses AI to streamline task management, automatically capturing, organising, and acting on information from meetings, notes, and conversations. With an approach reminiscent of object-oriented programming, its ‘Supertag’ feature transforms unstructured data into actionable insights.
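Tana has not published its internals, but the object-oriented analogy can be sketched in a few lines: applying a "supertag" to a free-text note is akin to instantiating a class, promoting the note into a typed object with structured fields. The names and schema below are invented for illustration, not Tana's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: a "supertag" behaves like a class, turning a
# plain note into a typed object whose fields can then be queried or acted on.
@dataclass
class Note:
    text: str
    tags: list = field(default_factory=list)
    fields: dict = field(default_factory=dict)

def apply_supertag(note: Note, tag: str, schema: list) -> Note:
    """Attach a supertag: the note gains the tag plus empty fields
    defined by the tag's schema, ready to be filled in."""
    note.tags.append(tag)
    for name in schema:
        note.fields.setdefault(name, None)
    return note

# A loose meeting note becomes a structured 'meeting' object.
note = apply_supertag(Note("Sync with design team"), "meeting",
                      ["date", "attendees", "action_items"])
note.fields["attendees"] = ["Ada", "Grace"]
```

The design choice the analogy captures is that structure is added after capture: the same note can later receive further tags (say, a hypothetical "project" tag) without rewriting it.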
Led by Tola Capital, the latest funding round brings Tana’s valuation to $100 million, with backing from investors such as Lightspeed Venture Partners and Northzone. Angel investors include notable tech figures like Google Maps co-founder Lars Rasmussen and Dropbox co-founder Arash Ferdowsi, highlighting the growing interest in AI-driven workplace tools. The startup, headquartered in Palo Alto with operations in Norway, is spearheaded by ex-Googlers Tarjei Vassbotn and Grim Iversen, the latter having worked on the now-defunct Google Wave.
Tana integrates with multiple workplace tools like Zoom and is designed to evolve as it processes more data, aiming to address long-standing challenges in productivity software. While currently best suited for tech-savvy professionals, the founders believe their AI knowledge graph will reshape how businesses handle information in the future. Investors are betting on Tana’s long-term vision, with some already using the platform to manage their own operations.
With Germany’s parliamentary elections just weeks away, lawmakers are warning that authoritarian states, including Russia, are intensifying disinformation efforts to destabilise the country. Authorities are particularly concerned about a Russian campaign, known as Doppelgänger, which has been active since 2022 and aims to undermine Western support for Ukraine. The campaign has been linked to fake social media accounts and misleading content in Germany, France, and the US.
CSU MP Thomas Erndl confirmed that Russia is attempting to influence European elections, including in Germany. He argued that disinformation campaigns are contributing to the rise of right-wing populist parties, such as the AfD, by sowing distrust in state institutions and painting foreigners and refugees as a problem. Erndl emphasised the need for improved defences, including modern technologies like AI to detect disinformation, and greater public awareness and education.
The German Foreign Ministry recently reported the identification of over 50,000 fake X accounts associated with the Doppelgänger campaign. These accounts mimic credible news outlets like Der Spiegel and Welt to spread fabricated articles, amplifying propaganda. Lawmakers stress the need for stronger cooperation within Europe and better tools for intelligence agencies to combat these threats, even suggesting that a shift in focus from privacy to security may be necessary to tackle the issue effectively.
Greens MP Konstantin von Notz highlighted the security risks posed by disinformation campaigns, warning that authoritarian regimes like Russia and China are targeting democratic societies, including Germany. He called for stricter regulation of online platforms, stronger counterintelligence efforts, and increased media literacy to bolster social resilience. As the election date approaches, lawmakers urge both government agencies and the public to remain vigilant against the growing threat of foreign interference.
WhatsApp has identified an advanced hacking campaign targeting nearly 90 users across more than two dozen countries. The attack, linked to Israeli spyware firm Paragon Solutions, exploited a zero-click vulnerability, meaning victims’ devices were compromised without their having to click a link or open a file. The messaging platform, owned by Meta, has since taken steps to block the hacking attempts and has issued a cease-and-desist letter to Paragon.

While WhatsApp has not disclosed the identities of those targeted, reports indicate that journalists and members of civil society were among the victims. The company has referred affected users to Citizen Lab, a Canadian watchdog that investigates digital security threats. Law enforcement agencies and industry partners have also been alerted, though specifics remain undisclosed.
Paragon, which was recently acquired by US investment firm AE Industrial Partners, has not commented on the allegations. The company presents itself as a responsible player in the spyware industry, claiming to sell its technology only to governments in stable democracies. However, critics argue that the continued spread of surveillance tools increases the risk of human rights abuses, with spyware repeatedly found on the devices of activists, journalists, and officials worldwide.
Cybersecurity experts warn that the growing use of commercial spyware poses an ongoing threat to digital privacy. Despite claims of ethical safeguards, the latest revelations suggest that even companies with supposedly responsible practices may be engaging in questionable surveillance activities.
A trial in Sutton is using AI sensors to monitor the well-being of vulnerable people in their homes. The system tracks movement, temperature, and appliance usage to identify patterns and detect unusual activity, such as a missed meal or a fall. The initiative aims to allow individuals to live independently for longer while providing reassurance to their loved ones.
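The article does not describe the system's algorithms, but the pattern-based approach it outlines — learn a resident's typical routine from sensor events, then flag deviations such as a missed meal — can be sketched as follows. The event names, the 80% threshold, and the hourly granularity are assumptions for illustration, not details of the Sutton system.

```python
from collections import Counter

# Illustrative only: events are (hour_of_day, name) pairs, e.g. (8, "kettle").
def typical_hours(history, event):
    """Hours of day in which `event` occurs on most (>=80%) of past days."""
    days = len(history)
    counts = Counter(h for day in history for (h, e) in day if e == event)
    return {h for h, c in counts.items() if c >= days * 0.8}

def missed_events(today, history, event):
    """Hours where `event` was expected from past routine but did not occur."""
    expected = typical_hours(history, event)
    seen = {h for (h, e) in today if e == event}
    return sorted(expected - seen)

# Ten days of routine: kettle at 8am, fridge at 1pm.
history = [[(8, "kettle"), (13, "fridge")] for _ in range(10)]
today = [(13, "fridge")]  # no kettle use this morning
print(missed_events(today, history, "kettle"))  # -> [8]
```

A real deployment would need to handle noisy sensors and gradual routine changes, but the core idea — compare today's events against a learned baseline and alert on the gap — is the same.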
Margaret Linehan, 86, who has dementia, is one of over 1,200 residents using the system. She described it as a valuable safety net, helping alert her family if something is amiss. Her daughter-in-law, Marianne, can check an app to monitor activity and receive alerts. On one occasion, when Margaret got up for a cup of tea in the middle of the night, the system notified her son, highlighting its ability to detect unexpected behaviour.
The AI-powered technology, which does not use cameras or microphones, has already detected over 1,800 falls in the past year, enabling rapid responses from care teams. Sutton Council is trialling the system as part of a wider government initiative exploring AI’s role in improving public services. Experts hope the technology will revolutionise social care by providing proactive support while ensuring people’s privacy and independence.