OpenAI forms AI alliance with Kakao in South Korea

OpenAI has announced a new partnership with Kakao to develop AI products for South Korea. This marks OpenAI’s second major alliance in Asia this week, following a similar deal with SoftBank for AI services in Japan. OpenAI CEO Sam Altman, who is on a tour of Asia, also met with leaders from Samsung Electronics, SoftBank, and Arm Holdings. The partnership with Kakao is seen as part of OpenAI’s broader strategy to expand its AI presence in the region, with a focus on messaging and AI applications.

Kakao, which operates South Korea’s dominant messaging app KakaoTalk, plans to integrate OpenAI’s technology into its services as part of its push to grow its AI capabilities. Although Kakao has lagged behind rival Naver in the AI race, the company is positioning AI as a key growth engine. Altman highlighted the importance of South Korea’s energy, semiconductor, and internet sectors in driving demand for AI products, noting that many local companies will play a role in OpenAI’s Stargate data centre project in the US.

In addition to his work with Kakao, Altman met with executives from SK Group and Samsung to discuss AI chips and potential cooperation. SK Hynix, a key player in the production of AI processors, has been in discussions with OpenAI regarding collaboration in the AI ecosystem. Altman also indicated that OpenAI is actively considering involvement in South Korea’s national AI computing centre project, which is expected to attract up to $1.4 billion in investment.

Following the announcement, Kakao’s stock fell by 2%, after a 9% surge the previous day.

Google: Over 57 cyber threat actors using AI for hacking

Google has identified more than 57 cyber threat actors linked to China, Iran, North Korea, and Russia that are leveraging the company’s AI technology to enhance their cyber and information warfare efforts. According to a new report by Google’s Threat Intelligence Group (GTIG), the state-sponsored hacking groups, known as Advanced Persistent Threats (APTs), primarily use AI for tasks such as researching vulnerabilities, writing malicious code, and creating targeted phishing campaigns.

The company says that Iranian APT actors, particularly APT42, were identified as the most frequent users of Google’s AI tool, Gemini. They used it for reconnaissance on cybersecurity experts and organisations, as well as for phishing operations.

Beyond APT groups, underground cybercriminal forums have begun advertising illicit AI models, such as WormGPT, WolfGPT, FraudGPT, and GhostGPT—AI systems designed to bypass ethical safeguards and facilitate phishing, fraud, and cyberattacks.

In the report, Google stated that it has implemented countermeasures to prevent abuse of its AI systems and called for stronger collaboration between government and private industry to bolster cybersecurity defences.

New AI research tool launched by OpenAI

OpenAI has introduced a new AI tool called deep research, designed to conduct multi-step research on the internet for complex tasks. The tool is powered by an optimised version of the upcoming OpenAI o3 model, enabling it to browse and analyse online content, including text, images, and PDFs, to generate detailed reports.

Deep research significantly reduces the time required for this kind of work, with OpenAI stating that it completes in minutes tasks that would take a human several hours.

Despite its capabilities, the tool remains in its early stages and has limitations, such as difficulties in distinguishing credible sources from rumours and challenges in conveying uncertainty accurately.

The feature is already accessible via the web version of ChatGPT and will be extended to mobile and desktop applications later in February.

Deep research is the second AI agent introduced by OpenAI this year, following the January preview of Operator, which assists users with tasks like to-do lists and travel planning.

Authorities in Taiwan block DeepSeek AI over data and censorship risks

Taiwan has officially banned government agencies from using DeepSeek AI, citing security risks and concerns over potential data exposure to China. The move strengthens previous guidance, which only advised against its use.

Premier Cho Jung-tai announced the decision after a cabinet meeting, stressing the importance of safeguarding national information security. Officials raised fears over possible censorship by DeepSeek and the risk of sensitive data being transferred to China.

The digital ministry had initially stated on Friday that government departments should avoid the AI service but did not explicitly prohibit it. The latest announcement formalises the ban, aligning with Taiwan’s broader approach to restricting Chinese technology.

Authorities in several other countries, including South Korea, France, Italy, and Ireland, have also scrutinised DeepSeek’s handling of personal data.

DeepSeek’s impact on power demand remains uncertain in Japan

Japan’s industry ministry acknowledges concerns that expanding data centres could drive up electricity consumption, but says it is difficult to predict how a single technology such as DeepSeek might shift demand. The government’s latest draft energy plan, released in December, projects a 10-20% rise in electricity generation by 2040, citing increased AI-driven consumption.

DeepSeek, a Chinese AI startup, has raised questions about whether power demand will decline due to its potentially lower energy usage or increase as AI technology becomes more widespread and affordable. Analysts remain divided on the overall effect, highlighting the complexity of forecasting long-term energy trends.

Japan’s Ministry of Economy, Trade and Industry (METI) noted that AI-related energy demand depends on multiple factors, including improvements in performance, cost reductions, and energy-efficient innovations. The ministry emphasised that a single example cannot determine the future impact on electricity needs.

Economic growth and industrial competitiveness will rely on securing adequate decarbonised power sources to meet future demand. METI underscored the importance of balancing AI expansion with sustainable energy policies to maintain stability in Japan’s energy landscape.

UK course aims to equip young people with important AI skills

Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.

Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.

Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.

Smart sensors detect risks for people living alone

A trial in Sutton is using AI sensors to monitor the well-being of vulnerable people in their homes. The system tracks movement, temperature, and appliance usage to identify patterns and detect unusual activity, such as a missed meal or a fall. The initiative aims to allow individuals to live independently for longer while providing reassurance to their loved ones.
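
The article does not describe the underlying algorithm, but a common way to turn this kind of passive sensor data into alerts is to learn a per-hour baseline of normal activity and flag large deviations from it. The Python sketch below illustrates that idea only; the sensor counts, threshold, and function names are illustrative assumptions, not details of the Sutton system.

```python
# Minimal sketch of deviation-based monitoring on passive sensor data:
# build a per-hour baseline of normal activity from historical event counts,
# then flag hours whose observed count falls far outside that baseline.
from collections import defaultdict
from statistics import mean, pstdev

def build_hourly_baseline(history):
    """history: list of (hour_of_day, event_count) pairs from past weeks."""
    buckets = defaultdict(list)
    for hour, count in history:
        buckets[hour].append(count)
    return {
        hour: (mean(counts), pstdev(counts) or 1.0)  # fall back to 1.0 if std is zero
        for hour, counts in buckets.items()
    }

def unusual(hour, observed_count, baseline, z_threshold=3.0):
    """Flag an hour whose activity deviates strongly from the learned baseline."""
    avg, std = baseline.get(hour, (0.0, 1.0))
    return abs(observed_count - avg) / std > z_threshold

# Illustrative usage: kitchen motion events per hour over previous days,
# then a quiet lunchtime hour (possible missed meal) and unexpected overnight activity.
history = [(12, c) for c in (8, 9, 7, 10, 8, 9)] + [(3, c) for c in (0, 0, 1, 0, 0, 0)]
baseline = build_hourly_baseline(history)
print(unusual(12, 0, baseline))  # True -> no kitchen activity at lunchtime
print(unusual(3, 4, baseline))   # True -> unexpected activity in the night
```

In a real deployment the baseline would be learned per resident and per sensor, with alerts routed to carers or family rather than printed.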

Margaret Linehan, 86, who has dementia, is one of over 1,200 residents using the system. She described it as a valuable safety net, helping alert her family if something is amiss. Her daughter-in-law, Marianne, can check an app to monitor activity and receive alerts. On one occasion, when Margaret got up for a cup of tea in the middle of the night, the system notified her son, highlighting its ability to detect unexpected behaviour.

The AI-powered technology, which does not use cameras or microphones, has already detected over 1,800 falls in the past year, enabling rapid responses from care teams. Sutton Council is trialling the system as part of a wider government initiative exploring AI’s role in improving public services. Experts hope the technology will revolutionise social care by providing proactive support while ensuring people’s privacy and independence.

New Google X spinoff uses AI to boost crop yields

Google’s X has spun out a new startup, Heritable Agriculture, which applies AI to revolutionise plant breeding. Using machine learning, the company analyses plant genomes to identify combinations that enhance yields, reduce water consumption, and increase carbon storage in soil.
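
Heritable has not published its methods, but the general pattern behind ML-assisted breeding is to encode each plant’s genetic markers numerically, fit a model from genotype to a measured trait such as yield, and then score candidate genotypes before committing to field trials. The sketch below shows that pattern with simulated data and a simple ridge regression; every number and variable name is a made-up assumption, not a detail of Heritable’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: 200 plants, 50 genetic markers encoded as 0/1/2 allele counts,
# and a yield measurement driven by unknown per-marker effects plus noise.
n_plants, n_markers = 200, 50
genotypes = rng.integers(0, 3, size=(n_plants, n_markers)).astype(float)
true_effects = rng.normal(0.0, 1.0, n_markers)
yield_trait = genotypes @ true_effects + rng.normal(0.0, 1.0, n_plants)

# Ridge regression: w = (X^T X + lambda * I)^{-1} X^T y
lam = 1.0
X, y = genotypes, yield_trait
weights = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Score unseen candidate genotypes and shortlist the most promising for real trials.
candidates = rng.integers(0, 3, size=(10, n_markers)).astype(float)
predicted_yield = candidates @ weights
best = np.argsort(predicted_yield)[::-1][:3]
print("Top candidates by predicted yield:", best.tolist())
```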

The startup was founded by Brad Zamft, a former Google X researcher with a background in physics and biotech. Under his leadership, Heritable has tested thousands of plants using AI-powered models, running experiments in controlled growth chambers and field sites across the United States. Unlike gene-editing firms, Heritable focuses on refining traditional breeding methods rather than modifying DNA directly.

The company has secured investment from FTW Ventures, Mythos Ventures, and Google itself, though financial details remain undisclosed. Now operating independently, Heritable Agriculture aims to commercialise its AI-driven approach, potentially reshaping the future of sustainable farming.

DeepSeek AI gains popularity in China

Chinese internet users have been captivated by the DeepSeek AI app, which has gained immense popularity since its launch during the Lunar New Year holiday. Users have explored its predictive and analytical capabilities, with some posing questions on politics, economics, and even personal matters. For example, law professor Wang Jiangyu asked how China should respond to US President Donald Trump’s tariffs, receiving a comprehensive seven-point answer that included potential new tariffs on US industries and other strategic moves. The model’s detailed responses have impressed users, though it censors certain politically sensitive topics, such as questions about Xi Jinping or the Tiananmen Square protests.

DeepSeek’s low-cost yet powerful AI has made waves in the tech sector, surpassing ChatGPT in downloads on the Apple App Store. The Hangzhou-based startup has become a source of national pride, with users sharing personal experiences, such as using the app to predict their fortunes or interpret dreams. This surge in popularity has drawn attention to the company’s rapid growth, and its founder, Liang Wenfeng, has emerged as a pop culture figure.

Despite its success, DeepSeek’s claim that it trained its latest AI model for less than $6 million in computing power has drawn scepticism from some experts. Nevertheless, the platform’s effectiveness has prompted comparisons to the billions invested by US tech giants in AI development. The app’s rapid rise has also led to investigations by authorities in a number of countries, including Japan, South Korea, and several European nations, over concerns about its handling of personal data.

Europe eyes DeepSeek as a game changer in AI

DeepSeek, a Chinese AI company, is shaking up the AI landscape by offering technology at a significantly lower cost than US competitors such as OpenAI. Hemanth Mandapati, CEO of German startup Novo AI, recently switched to DeepSeek’s chatbot services, noting that the transition was quick and easy, and the cost savings were substantial. Mandapati reported that DeepSeek’s pricing was around a fifth of what he had been paying, with no noticeable difference in user performance. Analysts estimate that DeepSeek’s prices are 20 to 40 times cheaper than OpenAI’s, making it an attractive option for many startups.

The rise of DeepSeek is seen as a potential game-changer, particularly in Europe, where tech startups have struggled to compete with their US counterparts due to limited funding. Some believe DeepSeek’s low-cost offerings could democratise AI and help level the playing field with major tech companies. However, concerns about DeepSeek’s data practices, particularly the potential copying of OpenAI’s data and its censorship of politically sensitive content, have raised regulatory questions across Europe.

Despite scepticism around the actual cost of DeepSeek’s training and data usage, the company has garnered significant attention, especially after its model topped the productivity app rankings on the Apple App Store. Industry leaders argue that this shift in pricing could spark a broader movement in AI, with smaller companies gaining more access to advanced technologies without needing large budgets. This could foster innovation across the sector, although major corporations remain cautious due to security and integration concerns.

As the cost of AI continues to fall, competition is intensifying. For example, Microsoft recently made OpenAI’s reasoning model available for free to users of its Copilot platform. While price is becoming a dominant factor in AI adoption, industry experts suggest that trust and security certifications will still play a critical role for larger businesses when choosing their AI partners.