Malaysia tackles online scams with AI and new cyber guidelines

Cybercrime involving financial scams continues to rise in Malaysia, with 35,368 cases reported in 2024, a 2.53 per cent increase from the previous year, resulting in losses of RM1.58 billion.

The situation remains severe in 2025, with over 12,000 online scam cases recorded in the first quarter alone, involving fake e-commerce offers, bogus loans, and non-existent investment platforms. Losses during this period reached RM573.7 million.

Instead of waiting for the situation to worsen, the Digital Ministry is rolling out proactive safeguards. These include new AI-related guidelines under development by the Department of Personal Data Protection, scheduled for release by March 2026.

The documents will cover data protection impact assessments, automated decision-making, and privacy-by-design principles.

The ministry has also introduced an official framework for responsible AI use in the public sector, called GPAISA, to ensure ethical compliance and support across government agencies.

Additionally, training initiatives such as AI Untuk Rakyat and MD Workforce aim to equip civil servants and enforcement teams with skills to handle AI and cyber threats.

In partnership with CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, the ministry is also creating an AI-powered application to verify digital images and videos.

Instead of relying solely on manual analysis, the tool will help investigators detect online fraud, identity forgery, and synthetic media more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US urges Asia-Pacific to embrace open AI innovation over strict regulation

A senior White House official has urged Asia-Pacific economies to support an AI future built on US technology, warning against adopting Europe’s heavily regulated model. The official, Michael Kratsios, made the remarks during the APEC Digital and AI Ministerial Meeting in Incheon.

Kratsios said countries now choose between embracing American-led innovation or falling behind under regulatory burdens. He framed the US approach as one driven by freedom and open-source innovation rather than centralised control.

The US is offering South Korea partnerships that respect data concerns while enabling shared progress. Kratsios noted that open-weight models could soon shape industry standards worldwide.

He met South Korea’s science minister in bilateral talks to discuss AI cooperation. The US reaffirmed its commitment to supporting nations in building trustworthy AI systems based on mutual economic benefit.


Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

One infamous case saw an AI-powered chatbot suggest drug use to a fictional recovering addict, which experts called a warning of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.


AI news summaries threaten the future of journalism

Generative AI tools like ChatGPT are significantly disrupting traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search results, users click suggested links half as often as in traditional search formats.

Matt Karolian of Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away.’

While some publishers, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, those numbers pale in comparison with other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so that AI tools can find and cite it more readily. Some have blocked crawlers to prevent data harvesting, while others have reopened access to retain visibility.
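Crawler blocking of the kind described above is typically implemented in a site’s robots.txt file. As a minimal sketch, the snippet below uses Python’s standard `urllib.robotparser` to check a policy that disallows two documented AI crawler user agents, GPTBot (OpenAI) and CCBot (Common Crawl), while leaving ordinary bots unaffected; the domain and paths are placeholders.

```python
from urllib import robotparser

# A publisher-side robots.txt policy: block known AI crawlers,
# allow everything else. GPTBot and CCBot are documented crawler
# user agents; example.com is a placeholder domain.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The AI crawlers are refused; a regular search bot is not.
print(parser.can_fetch("GPTBot", "https://example.com/news/story"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/news/story"))  # True
```

In practice a site serves this file at `/robots.txt` rather than parsing it locally; the parser here simply demonstrates how the rules resolve for each declared user agent.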

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.


Apple develops smart search engine to rival ChatGPT

Apple is developing its own AI-powered answer engine to rival ChatGPT, marking a strategic turn in the company’s AI approach. The move comes as Apple aims to close the gap with competitors in the fast-moving AI race.

A newly formed internal team, Answers, Knowledge and Information, is working on a tool to browse the web and deliver direct responses to users.

Led by former Siri head Robby Walker, the project is expected to expand across key Apple services, including Siri, Safari and Spotlight.

Job postings suggest Apple is recruiting talent with search engine and algorithm expertise. CEO Tim Cook has signalled Apple’s willingness to acquire companies that could speed up its AI progress.


Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring websites’ explicit instructions not to scrape their content.

According to the internet infrastructure company, Perplexity allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.
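The alleged trick works because robots.txt is purely advisory: rules are matched against whatever user-agent string a client chooses to send. A minimal illustration with Python’s standard `urllib.robotparser` (the bot name, policy, and URLs here are hypothetical, not Cloudflare’s actual findings):

```python
from urllib import robotparser

# Hypothetical policy: the site bans a declared AI bot but allows browsers.
rules = robotparser.RobotFileParser()
rules.parse("""\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines())

honest_ua = "ExampleAIBot"
# A user-agent string formatted like Chrome on macOS.
spoofed_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Chrome/124.0"

# The declared bot is blocked; the same client presenting a spoofed
# user agent falls under the permissive wildcard rule instead.
print(rules.can_fetch(honest_ua, "https://example.com/page"))   # False
print(rules.can_fetch(spoofed_ua, "https://example.com/page"))  # True
```

Nothing in the protocol verifies the claim, which is why Cloudflare says it had to rely on network analysis and machine learning, rather than robots.txt itself, to attribute the traffic.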

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.


OpenAI to improve its ability to detect mental or emotional distress

It has been reported that people turn to ChatGPT as their ‘therapist’ when seeking emotional support during a mental health crisis. While this may seem like an easy outlet, reports have shown that ChatGPT’s responses can amplify users’ delusions rather than help them find coping mechanisms. As a result, OpenAI stated that it plans to improve the chatbot’s ability to detect mental distress in the new GPT-5 AI model, which is expected to launch later this week.

OpenAI admits that GPT-4 sometimes failed to recognise signs of delusion or emotional dependency, especially in vulnerable users. To encourage healthier use of ChatGPT, which now serves nearly 700 million weekly users, OpenAI is introducing break reminders during long sessions, prompting users to pause or continue chatting.

Additionally, it plans to refine how and when ChatGPT displays break reminders, following a trend seen on platforms like YouTube and TikTok.


AI adoption soothes stress even as job fears rise among employees

A recent Fortune survey indicates that 61 percent of white-collar professionals expect AI to make their roles, or even their entire teams, obsolete within 3–5 years, yet most continue to rely on AI tools daily without visible concern.

Seventy percent of respondents credit AI with boosting their creativity and productivity, and 40 percent say it has eased stress and improved work-life balance. Despite these benefits, many admit to ‘feigning’ AI use in workplace settings, often driven by peer pressure or a lack of formal training.

Executive commentary underscores the tension: senior business leaders, including Ford CEO Jim Farley and Anthropic CEO Dario Amodei, predict rapid AI-driven disruption of white-collar roles. Some executives forecast that up to 50 percent of certain job categories could be eliminated, though others argue AI may open new opportunities.

Academic studies suggest a more nuanced impact: AI is reshaping role definitions by automating routine tasks while increasing demand for complementary skills, such as ethics, teamwork, and digital fluency. Wage benefits are growing in jobs that effectively blend AI with human oversight.


AI’s transformation of work habits, mindset and lifestyle

At Mindvalley’s AI Summit, former Google Chief Decision Scientist Cassie Kozyrkov described AI as not a substitute for human thought but a magnifier of what the human mind can produce. Rather than replacing us, AI lets us offload mundane tasks and focus on deeper cognitive and creative work.

Work structures are being transformed, not just in factories but behind computer screens. AI now handles administrative ‘work about work’ such as multitasking, scheduling and research summarisation, lowering friction in knowledge work and enabling people to supervise agents rather than execute tasks manually.

Personal life is being reshaped, too. AI tools for finance or health, such as budgeting apps or personalised diagnostics, move decisions into data-augmented systems with faster insight and fewer human biases.

Meanwhile, creativity is co-authored via AI-generated design, music or writing, requiring humans to filter, refine and ideate beyond the algorithm.

Recognising cognitive change, AI thought leaders envision a new era where ‘blended work’ prevails: humans manage AI agents, call the shots, and wield ethical oversight, while the AI executes pipelines of repetitive or semi-intelligent tasks.

Scholars warn that this model demands new fairness, transparency, and collaboration skills.


Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises access to information in the UK by filtering what users see according to algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may offer convenience, it lacks accountability. Regulated journalism operates under legal frameworks, whereas AI faces no such scrutiny even when errors have real consequences.
