Paris-based startup Neuralk-AI has raised $4 million to develop AI models tailored for structured data, such as databases and spreadsheets. Unlike traditional AI, which excels at unstructured content like images and text, Neuralk-AI’s approach aims to help businesses extract deeper insights from their existing data warehouses. Retailers, in particular, could benefit from its models, using AI to optimise inventory, detect fraud, and refine customer recommendations.
The company, co-founded by Alexandre Pasquiou, plans to launch its AI models as an API for data scientists in commerce-focused industries. By automating complex workflows and enhancing data analysis, Neuralk-AI hopes to offer a more efficient alternative to traditional machine learning tools. The startup is already collaborating with major French retailers such as E.Leclerc and Auchan to test its technology.
Backed by Fly Ventures, SteamAI, and industry leaders including Hugging Face’s Thomas Wolf, Neuralk-AI is working towards becoming the leading AI solution for structured data. The first version of its model is expected to launch in the coming months, with a full benchmark release planned for later this year.
Meta has introduced a new policy framework outlining when it may restrict the release of its AI systems due to security concerns. The Frontier AI Framework categorises AI models into ‘high-risk’ and ‘critical-risk’ groups, with the latter referring to those capable of aiding catastrophic cyber or biological attacks. If an AI system is classified as a critical risk, Meta will suspend its development until safety measures can be implemented.
The company’s evaluation process does not rely solely on empirical testing but also considers input from internal and external researchers. This approach reflects Meta’s belief that existing evaluation methods are not yet robust enough to provide definitive risk assessments. Despite its historically open approach to AI development, the company acknowledges that some models could pose unacceptable dangers if released.
By outlining this framework, Meta aims to demonstrate its commitment to responsible AI development while distinguishing its approach from other firms with fewer safeguards. The policy comes amid growing scrutiny of AI’s potential misuse, especially as open-source models gain wider adoption.
OpenAI has announced a new partnership with Kakao to develop AI products for South Korea. This marks OpenAI’s second major alliance in Asia this week, following a similar deal with SoftBank for AI services in Japan. OpenAI CEO Sam Altman, who is on a tour of Asia, also met with leaders from Samsung Electronics, SoftBank, and Arm Holdings. The partnership with Kakao is seen as part of OpenAI’s broader strategy to expand its AI presence in the region, with a focus on messaging and AI applications.
Kakao, which operates South Korea’s dominant messaging app KakaoTalk, plans to integrate OpenAI’s technology into its services as part of its push to grow its AI capabilities. Although Kakao has lagged behind rival Naver in the AI race, the company is positioning AI as a key growth engine. Altman highlighted the importance of South Korea’s energy, semiconductor, and internet sectors in driving demand for AI products, noting that many local companies will play a role in OpenAI’s Stargate data centre project in the US.
In addition to his work with Kakao, Altman met with executives from SK Group and Samsung to discuss AI chips and potential cooperation. SK Hynix, a key player in the production of AI processors, has been in discussions with OpenAI regarding collaboration in the AI ecosystem. Altman also indicated that OpenAI is actively considering involvement in South Korea’s national AI computing centre project, which is expected to attract up to $1.4 billion in investment.
Following the announcement, Kakao’s stock fell by 2%, after a 9% surge the previous day.
Google has identified more than 57 cyber threat actors linked to China, Iran, North Korea, and Russia that are leveraging the company’s AI technology to enhance their cyber and information warfare efforts. According to a new report by Google’s Threat Intelligence Group (GTIG), the state-sponsored hacking groups, known as Advanced Persistent Threats (APTs), primarily use AI for tasks such as researching vulnerabilities, writing malicious code, and creating targeted phishing campaigns.
The company says Iranian APT actors, particularly APT42, were the most frequent users of its AI tool, Gemini, relying on it for reconnaissance on cybersecurity experts and organisations and for phishing operations.
Beyond APT groups, underground cybercriminal forums have begun advertising illicit AI models, such as WormGPT, WolfGPT, FraudGPT, and GhostGPT—AI systems designed to bypass ethical safeguards and facilitate phishing, fraud, and cyberattacks.
In the report, Google stated that the company has implemented countermeasures to prevent abuse of its AI systems and has called for stronger collaboration between government and private industry to bolster cybersecurity defences.
OpenAI has introduced a new AI tool called deep research, designed to conduct multi-step research on the internet for complex tasks. The tool is powered by an optimised version of the upcoming OpenAI o3 model, enabling it to browse and analyse online content, including text, images, and PDFs, to generate detailed reports.
The tool significantly cuts research time, with OpenAI stating that it completes in minutes tasks that would take a human several hours.
Despite its capabilities, the tool remains in its early stages and has limitations, such as difficulties in distinguishing credible sources from rumours and challenges in conveying uncertainty accurately.
The feature is already accessible via the web version of ChatGPT and will be extended to mobile and desktop applications later in February.
Deep research is the second AI agent introduced by OpenAI this year, following the January preview of Operator, which assists users with tasks like to-do lists and travel planning.
Taiwan has officially banned government agencies from using DeepSeek AI, citing security risks and concerns over potential data exposure to China. The move strengthens previous guidance, which only advised against its use.
Premier Cho Jung-tai announced the decision after a cabinet meeting, stressing the importance of safeguarding national information security. Officials raised fears over possible censorship on DeepSeek and the risk of sensitive data being transferred to China.
The digital ministry had initially stated on Friday that government departments should avoid the AI service but did not explicitly prohibit it. The latest announcement formalises the ban, aligning with Taiwan’s broader approach to restricting Chinese technology.
Authorities in several other countries, including South Korea, France, Italy, and Ireland, have also scrutinised DeepSeek’s handling of personal data.
Japan’s industry ministry acknowledges concerns that expanding data centres could drive up electricity consumption but finds it difficult to predict how demand may shift due to a single technology such as DeepSeek. The government’s latest draft energy plan, released in December, projects a 10-20% rise in electricity generation by 2040, citing increased AI-driven consumption.
DeepSeek, a Chinese AI startup, has raised questions about whether power demand will decline due to its potentially lower energy usage or increase as AI technology becomes more widespread and affordable. Analysts remain divided on the overall effect, highlighting the complexity of forecasting long-term energy trends.
Japan’s Ministry of Economy, Trade and Industry (METI) noted that AI-related energy demand depends on multiple factors, including improvements in performance, cost reductions, and energy-efficient innovations. The ministry emphasised that a single example cannot determine the future impact on electricity needs.
Economic growth and industrial competitiveness will rely on securing adequate decarbonised power sources to meet future demand. METI underscored the importance of balancing AI expansion with sustainable energy policies to maintain stability in Japan’s energy landscape.
Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.
Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.
Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.
A trial in Sutton is using AI sensors to monitor the well-being of vulnerable people in their homes. The system tracks movement, temperature, and appliance usage to identify patterns and detect unusual activity, such as a missed meal or a fall. The initiative aims to allow individuals to live independently for longer while providing reassurance to their loved ones.
Margaret Linehan, 86, who has dementia, is one of over 1,200 residents using the system. She described it as a valuable safety net, helping alert her family if something is amiss. Her daughter-in-law, Marianne, can check an app to monitor activity and receive alerts. On one occasion, when Margaret got up for a cup of tea in the middle of the night, the system notified her son, highlighting its ability to detect unexpected behaviour.
The AI-powered technology, which does not use cameras or microphones, has already detected over 1,800 falls in the past year, enabling rapid responses from care teams. Sutton Council is trialling the system as part of a wider government initiative exploring AI’s role in improving public services. Experts hope the technology will revolutionise social care by providing proactive support while ensuring people’s privacy and independence.
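The routine-learning approach described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not Sutton’s actual system: it learns a baseline of which hours of the day sensor events (such as kettle use) normally occur, then flags events at hours outside that routine, much as the trial flagged Margaret’s middle-of-the-night cup of tea.

```python
from collections import defaultdict

def build_profile(event_hours):
    """Count sensor events per hour-of-day to learn a routine baseline."""
    counts = defaultdict(int)
    for hour in event_hours:
        counts[hour] += 1
    return counts

def is_unusual(profile, hour, min_seen=1):
    """Flag an event as unusual if its hour was rarely active in the baseline."""
    return profile[hour] < min_seen

# Two weeks of history: the kettle is normally used around 7-8am
history = [7] * 10 + [8] * 4
profile = build_profile(history)

print(is_unusual(profile, 3))  # 3am kettle use: outside the routine (True)
print(is_unusual(profile, 7))  # 7am kettle use: routine (False)
```

A real deployment would combine many sensor streams and a richer statistical model, but the core idea is the same: deviations from a learned pattern, not raw sensor values, trigger the alert.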
X, Alphabet’s moonshot lab, has spun out a new startup, Heritable Agriculture, which applies AI to plant breeding. Using machine learning, the company analyses plant genomes to identify genetic combinations that enhance yields, reduce water consumption, and increase carbon storage in soil.
The startup was founded by Brad Zamft, a former Google X researcher with a background in physics and biotech. Under his leadership, Heritable has tested thousands of plants using AI-powered models, running experiments in controlled growth chambers and field sites across the United States. Unlike gene-editing firms, Heritable focuses on refining traditional breeding methods rather than modifying DNA directly.
The company has secured investment from FTW Ventures, Mythos Ventures, and Google itself, though financial details remain undisclosed. As it steps into independence, Heritable Agriculture aims to commercialise its AI-driven approach, potentially reshaping the future of sustainable farming.