AI technology set to reshape farming and rural life in South Korea

South Korea has launched a national agenda to expand AI across agriculture, aiming to boost productivity and improve living standards in rural communities. Officials from the Ministry of Agriculture, Food and Rural Affairs and the Ministry of Science and ICT presented the strategy as part of a wider digital transformation effort.

Plans include expanding smart farm models that reduce labour-intensive tasks and allow more farmers to benefit from automated technologies. Shared machinery centres and autonomous farming tools such as drones will be developed with support from the Rural Development Administration.

Authorities also intend to apply AI to agricultural distribution through smart logistics facilities that manage receiving, sorting and shipping processes. Around 300 smart Agricultural Products Processing Centres are expected to operate nationwide by 2030.

Livestock grading systems using AI will be introduced to improve accuracy and consumer trust across pork and beef processing facilities. Officials aim to raise the share of AI-graded meat from 19.4 percent in 2025 to 70 percent by 2030.

Beyond production, the programme seeks to expand ‘smart rural communities’ offering AI-based services such as transport, daily living support and farming assistance. Policymakers believe that a stronger digital infrastructure will help rural regions respond to climate pressures and an ageing population.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing slight signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.
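One common verification layer of the kind described above is a time-based one-time password (TOTP, as standardised in RFC 6238). The sketch below, using only the Python standard library, illustrates how such codes are generated and checked; it is a minimal educational example, not a production implementation.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password (illustrative sketch)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of elapsed time steps since the epoch.
    counter = int((now if now is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: take 4 bytes at an offset derived
    # from the last nibble of the digest, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret_b32, submitted, window=1, interval=30):
    """Accept codes from adjacent time steps to tolerate small clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, interval, now=now + step * interval), submitted)
        for step in range(-window, window + 1)
    )
```

Checking a submitted code against a shared secret in this way means an attacker who has merely spoofed a voice or video call still cannot complete a sensitive request without the second factor.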

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
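The underlying idea of such provenance systems, signed metadata binding a content hash to an edit history, can be sketched as follows. This is an illustration only: the field names are invented, and an HMAC stands in for C2PA's actual certificate-based signatures, but the verification logic (check the signature, then check the hash against the media bytes) is the same in spirit.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for the sketch; real provenance systems
# use public-key certificates so anyone can verify without a shared secret.
SIGNING_KEY = b"demo-key-not-for-production"


def make_manifest(media, history):
    """Attach a signed provenance record to a piece of media content."""
    claim = {"sha256": hashlib.sha256(media).hexdigest(), "history": history}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}


def verify_manifest(media, manifest):
    """Return True only if the signature is valid AND the media is unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(manifest["signature"], expected)
    hash_ok = manifest["claim"]["sha256"] == hashlib.sha256(media).hexdigest()
    return signature_ok and hash_ok
```

Any alteration to the media bytes after signing changes the hash, so verification fails even though the manifest itself is untouched, which is what lets organisations detect tampering rather than trying to judge authenticity by eye.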

Recent experiments highlight how revealing such testing can be. In February, cybersecurity company Reality Defender conducted an exercise with NATO, introducing deepfake media into a simulated military scenario. The findings showed that even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tesla moves to enter the British household electricity market

Tesla has applied for a licence to supply electricity directly to households and businesses across Great Britain.

The application was submitted to the national energy regulator Ofgem, which oversees energy suppliers in England, Scotland and Wales.

Approval would enable the company to enter the retail electricity market as early as next year. The service is expected to operate under the brand ‘Tesla Electric’, extending the company’s strategy of combining electric vehicles, battery storage and energy supply into a single ecosystem.

Tesla’s UK energy subsidiary, Tesla Energy Ventures, filed the application through its Manchester-based operation. Regulatory review may take several months, as Ofgem typically requires up to nine months to evaluate electricity supplier licences.

A future electricity offer could primarily target households that already use Tesla technologies, including home batteries and electric vehicle charging systems.

The company sells Powerwall storage batteries in the UK, which allow homeowners to store electricity generated by solar panels or purchased during off-peak hours.

Such systems also allow surplus energy stored in batteries to be sold back to the grid.
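As a rough, purely illustrative calculation of why such load-shifting is attractive (all tariff figures below are assumptions for the sketch, not Tesla's or any UK supplier's actual rates):

```python
# Hypothetical figures: battery sized like a typical home storage unit,
# with assumed peak/off-peak import prices and round-trip losses.
battery_kwh = 13.5          # usable storage capacity, kWh
peak_rate = 0.28            # assumed daytime import price, GBP/kWh
offpeak_rate = 0.09         # assumed overnight import price, GBP/kWh
round_trip_efficiency = 0.9 # fraction of stored energy recovered

# Charge overnight at the cheap rate, discharge during the day in place
# of buying at the expensive rate.
cost_to_charge = battery_kwh * offpeak_rate
value_discharged = battery_kwh * round_trip_efficiency * peak_rate
daily_saving = value_discharged - cost_to_charge

print(f"Estimated saving: GBP {daily_saving:.2f} per full daily cycle")
```

Under these assumed prices a full daily cycle shifts roughly two pounds of value, which is the economic logic behind pairing home batteries with a retail electricity tariff.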

Similar services are already available in the US, where Tesla launched a residential electricity supply programme in Texas in 2022.

The expansion into the energy supply market comes amid pressure on Tesla’s automotive business in Europe. Sales of Tesla vehicles in the UK declined significantly during 2025, reducing the company’s share of the national car market.

Diversifying into energy services could therefore represent a broader strategic shift for the company led by Elon Musk. Integrating electricity supply with electric vehicles and home energy systems could allow Tesla to build a more comprehensive energy platform for consumers.

If approved, the initiative would position Tesla as both a technology manufacturer and a direct energy supplier in the British market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU approves signature of global AI framework

The European Parliament has approved the Council of Europe Framework Convention on Artificial Intelligence, the first international legally binding treaty on AI governance.

With 455 votes in favour, 101 against, and 74 abstentions, Parliament endorsed the EU's signature of the convention, embedding existing EU AI legislation in a global framework. The move reinforces the safe and rights-respecting deployment of AI across the EU and worldwide.

The convention sets standards for transparency, documentation, risk management, and oversight, applying to both public authorities and private actors acting on their behalf.

It establishes a global baseline for AI governance while allowing the EU to maintain higher protections under the AI Act, GDPR, and other EU legislation covering product safety, liability, and non-discrimination.

The EU co-rapporteurs highlighted that the agreement demonstrates the EU’s commitment to human-centric AI. By prioritising democracy, accountability, and fundamental rights, the framework aims to ensure AI strengthens open societies while supporting stable economic growth.

Negotiations on the convention began in 2022 with participation from EU member states, international partners, civil society, academia, and industry. Current signatories include the EU, the UK, Ukraine, Canada, Israel, and the United States, with the convention open to additional global partners.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DIGITALEUROPE urges changes to EU AI Act rules for industry

European industry representatives are urging policymakers to reconsider parts of the EU AI Act, arguing that the current framework could impose significant compliance costs on companies developing AI tools for industrial and medical technologies.

According to Cecilia Bonefeld-Dahl, director-general of DIGITALEUROPE, manufacturers of high-tech machines, medical devices, and radio equipment are already subject to strict product safety regulations. Adding AI-specific requirements could create unnecessary administrative burdens for companies that are already heavily regulated. She argues that policymakers should aim for balanced AI regulation that encourages innovation while maintaining safety standards.

Industry groups warn that classifying certain AI systems as high-risk under Annex I of the AI Act could be particularly costly for smaller firms. DIGITALEUROPE estimates that a company with around 50 employees developing an AI-based product could incur initial compliance costs of €320,000 to €600,000, followed by annual expenses of up to €150,000. According to the organisation, such costs could reduce profits significantly and discourage smaller companies from pursuing AI innovation.

Manufacturing and medical technology sectors across Europe employ millions of workers and increasingly rely on AI to improve product performance and safety. Industry representatives argue that many applications, such as AI systems used to enhance industrial equipment safety or improve medical devices, already operate under established regulatory frameworks. These existing frameworks could be adapted rather than introducing additional layers of regulation.

The broader regulatory landscape is also contributing to concerns among technology companies. Over the past six years, the EU has introduced nearly 40 new technology-related regulations, some of which overlap or impose similar compliance requirements. DIGITALEUROPE estimates that compliance with the AI Act could cost companies approximately €3.3 billion annually, while cybersecurity and data-sharing regulations add further financial obligations.

Industry leaders warn that rising compliance costs could affect investment in AI development across Europe. Current estimates suggest that the EU accounts for about 7.5% of global AI investment, significantly behind the United States and China.

DIGITALEUROPE has called on the EU institutions to consider postponing parts of the AI Act’s implementation timeline to allow further discussion on how high-risk AI systems should be defined. Supporters of this approach argue that additional consultation could help ensure the regulatory framework protects consumers while also enabling European companies to compete globally in the rapidly evolving AI sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers move forward on AI Act changes

Members of the European Parliament have reached a preliminary political agreement on amendments to the EU Artificial Intelligence Act. The compromise will be reviewed by parliamentary committees before a scheduled vote in Brussels.

Lawmakers agreed to extend compliance deadlines for some high-risk AI systems. The changes aim to give companies and regulators more time to prepare technical standards and enforcement frameworks.

The proposed amendments also include a ban on AI systems that create non-consensual explicit deepfakes. EU officials say the measure aims to strengthen consumer protection and improve online safety for children.

Industry groups have raised concerns about compliance burdens linked to the revised rules, and negotiations continue as the legislation moves toward committee approval.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Civil society urges stronger EU digital fairness rules

More than 200 civil society organisations have urged the European Commission to deliver strong consumer protections through the upcoming Digital Fairness Act. Advocacy groups in the EU say the proposal must address risks created by modern online platforms.

Campaigners argue that many existing EU consumer laws were designed decades ago and no longer reflect the realities of the digital market. The coalition warned policymakers in the EU not to treat regulatory simplification as a path toward deregulation.

Advocates are pushing for binding rules targeting deceptive design practices and addictive digital features. Survey responses across the EU show broad public support for stronger protections against dark patterns and unfair personalisation.

The European Commission is expected to present the Digital Fairness Act later this year. Officials in the EU are also considering expanding enforcement powers to strengthen consumer safeguards online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram faces global outages as Russia slows service

Users of the messaging app Telegram have experienced outages in multiple regions over the past 24 hours, with the largest volume of complaints coming from Russia. Reports from the US, UK, Germany, the Netherlands, and Norway suggest the issues could be global.

Difficulties primarily affected the mobile app, with users reporting login issues, messaging delays, and limited access to features. In Russia, the outages resulted from traffic slowdowns imposed by the communications regulator Roskomnadzor, with similar restrictions affecting WhatsApp.

Telegram’s founder, Pavel Durov, has criticised the Russian government’s actions, arguing that authorities aim to push citizens towards a state-controlled alternative, the ‘Max’ messenger.

Although Telegram has overtaken WhatsApp in Russia with over 95 million active users, Max has now surpassed 100 million users, reflecting the Kremlin's growing influence over digital communications.

Russian authorities have stated that Telegram must comply with local laws, moderate content, and consider data localisation to avoid further restrictions. Durov has reaffirmed the platform’s commitment to protecting user privacy and upholding freedom of speech.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New venture aims to build AI that understands the real world

AI pioneer Yann LeCun has secured more than $1 billion in funding for a new startup that aims to rethink how AI systems learn about the world.

The venture, called Advanced Machine Intelligence (AMI), will focus on developing AI that learns from real-world signals, such as camera and sensor data, rather than relying primarily on text. According to the French company, such systems could make better decisions by understanding how events unfold in the physical world.

AMI plans to build what researchers call ‘world models’, AI systems designed to predict the consequences of actions before they happen. Developers believe that grounding AI in real-world data could make the technology more reliable and easier to control, especially in critical safety applications.

Operations will span several global research hubs, including Paris, New York City, Montreal and Singapore. The company has already begun assembling its leadership team, appointing entrepreneur Alex LeBrun as chief executive and AI researcher Saining Xie as chief science officer.

The project quickly drew support online. French President Emmanuel Macron welcomed the launch, saying it represented a new chapter in AI and highlighting the role of researchers and innovators in shaping the technology's future.

LeCun is widely regarded as one of the key figures behind modern AI. In 2018, he shared the prestigious Turing Award with fellow researchers Geoffrey Hinton and Yoshua Bengio for their contributions to deep learning.

Research at AMI will focus on building AI systems that can reason, plan actions and maintain long-term memory. Possible applications range from robotics and industrial automation to healthcare and wearable technologies, areas where dependable AI could have a major impact.

LeCun and his team argue that genuine intelligence cannot emerge from language alone. Understanding the world, they say, requires machines that learn directly from it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!