South Korea unveils five-year AI blueprint for ‘super-innovation economy’

South Korea’s new administration has unveiled a five-year economic plan to build what it calls a ‘super-innovation economy’ by integrating AI across all sectors of society.

The strategy, led by President Lee Jae-myung, commits 100 trillion won (approximately US$71.5 billion) to position the country among the world’s top three AI powerhouses. Private firms will drive development, with government support for nationwide adoption.

Plans include a sovereign Korean-language AI model, humanoid robots for logistics and industry, and commercialising autonomous vehicles by 2027. Unmanned ships are targeted for deployment by 2030, alongside widespread use of drones in firefighting and aviation.

AI will also be introduced into drug approvals, smart factories, welfare services, and tax administration, with AI-based tax consultations expected by 2026. Education initiatives and a national AI training data cluster will nurture talent and accelerate innovation.

Five domestic firms, including Naver Cloud, SK Telecom, and LG AI Research, will receive state support to build homegrown AI foundation models. Industry reports currently rank South Korea between sixth and tenth in global AI competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Students seek emotional support from AI chatbots

College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.

Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.

Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.

Rethinking ‘soft skills’ as core drivers of transformation

Communication, empathy, and judgment were dismissed for years as ‘soft skills’, sidelined while technical expertise dominated training and promotion. A new perspective argues that these human competencies are fundamental to resilience and transformation.

Researchers and practitioners emphasise that AI can expedite decision-making but cannot replace human judgment, trust, or narrative. Failures in leadership often stem from a lack of human capacity rather than technical gaps.

Redefining skills like decision-making, adaptability, and emotional intelligence as measurable behaviours helps organisations train and evaluate leaders effectively. Embedding these human disciplines ensures transformation holds under pressure and uncertainty.

Careers and cultures are strengthened when leaders are assessed on their ability to build trust, resolve conflicts, and influence through storytelling. Without investing in the human core alongside technical skills, strategies collapse and talent disengages.

Google prepares Duolingo rival using Translate

Google Translate may soon evolve into a full-featured language learning tool, introducing AI-powered lessons rivalling apps like Duolingo.

A hidden feature called Practice was recently uncovered in the latest release of the Translate app. It enables users to take part in interactive learning scenarios.

Early tests allow learners to choose languages such as Spanish and French, then engage with situational exercises from beginner to advanced levels.

The tool personalises lessons using AI, adapting difficulty and content based on a user’s goals, such as preparing for specific trips.

Users can track progress, receive daily practice reminders, and customise prompts for listening and speaking drills through a dedicated settings panel.

The feature resembles gamified learning apps and may join Google’s premium AI offerings, though pricing and launch plans remain unconfirmed.

Study finds chain-of-thought reasoning in LLMs is a brittle mirage

A new study from Arizona State University researchers suggests that chain-of-thought reasoning in large language models (LLMs) is closer to pattern matching than to genuine logical inference. The findings challenge assumptions about human-like intelligence in these systems.

The researchers used a data distribution lens to examine where chain-of-thought fails, testing models on new tasks, different reasoning lengths, and altered prompt formats. Across all cases, performance degraded sharply outside familiar training structures.

Their framework, DataAlchemy, showed that models replicate training patterns rather than reason abstractly. Failures could be patched quickly through fine-tuning on small new datasets, which further reinforced the pattern-matching interpretation.

The paper warns developers against relying on chain-of-thought reasoning for high-stakes domains, emphasising the risks of fluent but flawed rationale. It urges practitioners to implement rigorous out-of-distribution testing and treat fine-tuning as a limited patch.

The researchers argue that applications can remain effective for enterprise use by systematically mapping a model’s boundaries and aligning them with predictable tasks. Targeted fine-tuning then becomes a tool for precision rather than broad generalisation.

UK colleges hit by phishing incident

Weymouth and Kingston Maurward College in Dorset is investigating a recent phishing attack that compromised several email accounts. The breach occurred on Friday, 15 August, during the summer holidays.

Spam emails were sent from affected accounts, though the college confirmed that personal data exposure was minimal.

The compromised accounts may have contained contact information for anyone who previously communicated with the college. Early detection allowed the college to lock down affected accounts promptly, limiting the impact.

A full investigation is ongoing, with additional security measures now in place to prevent similar incidents. The matter has been reported to the Information Commissioner’s Office (ICO).

Phishing attacks involve criminals impersonating trusted entities to trick individuals into revealing sensitive information such as passwords or personal data. The college reassured students, staff, and partners that swift action and robust systems limited the disruption.

The colleges, which merged just over a year ago, recently received a ‘Good’ rating across all areas in an Ofsted inspection, reflecting the strong governance and oversight they have shown amid the cybersecurity incident.

Study finds AI-generated responses flooding research platforms

Online questionnaires are being increasingly swamped by AI-generated responses, raising concerns that a vital data source for researchers is becoming polluted. Platforms like Prolific, which pay participants to answer questions, are widely used in behavioural studies.

Researchers at the Max Planck Institute noticed suspicious patterns in their work and began investigating. They found that nearly half of the respondents copied and pasted answers, strongly suggesting that many were outsourcing tasks to AI chatbots.

Analysis showed clear giveaways, including overly verbose and distinctly non-human language. The researchers concluded that a substantial proportion of behavioural studies may already be compromised by chatbot-generated content.

In follow-up tests, they set traps to detect AI use, including invisible text instructions and restrictions on copy-paste. The measures caught a further share of participants, highlighting the scale of the challenge facing online research platforms.
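The invisible-text trap works roughly like a honeypot: an instruction hidden from human eyes (for example, white text on a white background) is only likely to be obeyed by a chatbot fed the copy-pasted page text. A minimal sketch of the idea, using a made-up canary word and helper function that are not from the study itself:

```python
# Hypothetical illustration of an "invisible text" trap for detecting
# AI-generated survey responses. The canary instruction is rendered
# invisibly in the survey page, so a human respondent never sees it,
# while a chatbot given the raw page text is likely to comply with it.

CANARY_WORD = "zephyr"  # assumed canary; the study's actual wording is not public

def flags_ai_use(response: str, canary: str = CANARY_WORD) -> bool:
    """Flag a response that echoes the hidden canary word."""
    return canary.lower() in response.lower()
```

A response containing the canary strongly suggests the participant pasted the page into a chatbot rather than answering directly, which is why such traps catch AI use that verbose-language heuristics alone might miss.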

Experts say the responsibility lies with both researchers and platforms. Stronger verification methods and tighter controls are needed for online behavioural research to remain credible.

Pakistan launches national AI innovation competition

Pakistan’s Ministry of Planning, Development, and Special Initiatives has launched a national innovation competition to drive the development of AI solutions in priority sectors. The initiative aims to attract top talent to develop impactful health, education, agriculture, industry, and governance projects.

Minister Ahsan Iqbal said AI is no longer a distant prospect but a present reality that is already transforming economies. He described the competition as a milestone in Pakistan’s digital history and urged the nation to embrace AI’s global momentum.

Iqbal stressed that algorithms now shape decisions more than traditional markets, warning that technological dependence must be avoided. Pakistan, he argued, must actively participate in the AI revolution or risk being left behind by more advanced economies.

He highlighted AI’s potential to predict crop diseases, aid doctors in diagnosis, and deliver quality education to every child nationwide. He said Pakistan will not be a bystander but an emerging leader in shaping the digital future.

The government has begun integrating AI into curricula and expanding capacity-building initiatives. Officials expect the competition to unlock new opportunities for innovation, empowering youth and driving sustainable development across the country.

WhatsApp trials AI-powered Writing Help for personalised messaging

WhatsApp is testing a new AI feature for iOS users that provides real-time writing assistance.

Known as ‘Writing Help’, the tool suggests alternative phrasings, adjusts tone, and enhances clarity, with all processing handled on-device to safeguard privacy.

The feature allows users to select professional, friendly, or concise tones before the AI generates suitable rewordings while keeping the original meaning. According to reports, the tool is available only to a small group of beta testers through TestFlight, with no confirmed release date.

WhatsApp says it uses Meta’s Private Processing technology to ensure sensitive data never leaves the device, mirroring privacy-first approaches like Apple’s Writing Tools.

Industry watchers suggest the new tool could give WhatsApp an edge over rivals such as Telegram and Signal, which have not yet introduced generative AI writing aids.

Analysts also see potential for integration with other Meta platforms, although challenges remain in ensuring accurate, unbiased results across different languages.

If successful, Writing Help could streamline business communication by improving grammar, structure, and tone. While some users have praised its seamless integration, others warn that heavy reliance on AI could undermine authenticity in digital conversations.

NSPRA warns AI must complement, not replace, human voices in education

A new report from the National School Public Relations Association (NSPRA) and ThoughtExchange highlights the growing role of AI in K-12 communications, offering detailed guidance for ethical integration and effective school engagement.

Drawing on insights from 200 professionals across 37 states, the study reveals how AI tools boost efficiency while underscoring the need for stronger policies, transparency, and ongoing training.

Barbara M Hunter, APR, NSPRA executive director, explained that AI can enhance communication work but will never replace strategy, human judgement, relationships, and authentic school voices.

Key findings show that 91 percent of respondents already use AI, yet most districts still lack clear policies or disclosure practices for employee use.

The report recommends strengthening AI education, accelerating policy development, expanding the scope to cover staff, and building proactive strategies supported by human oversight and trust.
