Malaysia tackles online scams with AI and new cyber guidelines

Cybercrime involving financial scams continues to rise in Malaysia, with 35,368 cases reported in 2024, a 2.53 per cent increase from the previous year, resulting in losses of RM1.58 billion.

The situation remains severe in 2025, with over 12,000 online scam cases recorded in the first quarter alone, involving fake e-commerce offers, bogus loans, and non-existent investment platforms. Losses during this period reached RM573.7 million.

Instead of waiting for the situation to worsen, the Digital Ministry is rolling out proactive safeguards. These include new AI-related guidelines under development by the Department of Personal Data Protection, scheduled for release by March 2026.

The documents will cover data protection impact assessments, automated decision-making, and privacy-by-design principles.

The ministry has also introduced an official framework for responsible AI use in the public sector, called GPAISA, to ensure ethical compliance and support across government agencies.

Additionally, training initiatives such as AI Untuk Rakyat and MD Workforce aim to equip civil servants and enforcement teams with skills to handle AI and cyber threats.

In partnership with CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, the ministry is also creating an AI-powered application to verify digital images and videos.

Instead of relying solely on manual analysis, the tool will help investigators detect online fraud, identity forgery, and synthetic media more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US urges Asia-Pacific to embrace open AI innovation over strict regulation

A senior White House official has urged Asia-Pacific economies to support an AI future built on US technology, warning against adopting Europe’s heavily regulated model. Michael Kratsios made the remarks during the APEC Digital and AI Ministerial Meeting in Incheon.

Kratsios said countries now face a choice between embracing American-led innovation and falling behind under regulatory burdens. He framed the US approach as one driven by freedom and open-source innovation rather than centralised control.

The US is offering South Korea partnerships that respect data concerns while enabling shared progress. Kratsios noted that open-weight models could soon shape industry standards worldwide.

He met South Korea’s science minister in bilateral talks to discuss AI cooperation. The US reaffirmed its commitment to supporting nations in building trustworthy AI systems based on mutual economic benefit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

One infamous case saw an AI-powered chatbot suggest drug use to a fictional recovering addict, which experts describe as a warning of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT checkout could sideline major platforms

OpenAI is preparing to add a payment system into ChatGPT, allowing users to complete purchases without ever leaving the chatbot. Retail leaders are calling it a turning point in e-commerce, as it may significantly simplify how people shop online.

The company is expected to take a cut of transactions and work with platforms such as Shopify to streamline operations. With over 77 million users, ChatGPT has the reach to become a dominant shopping tool, potentially bypassing platforms like Amazon.

Executives worry visibility could depend on revenue-sharing, forcing brands to pay for prominence in the chatbot. Some fear this pay-to-play model could leave smaller retailers behind and limit consumer choice.

At the same time, personalised AI-driven recommendations may enhance user experiences while raising questions about data use and bias. Entrepreneurs on X are already predicting widespread AI-led shopping within a year.

Retailers are now adjusting strategies to remain visible in this new market. While some early adopters show success using AI to complete purchases, others highlight technical challenges in integration and website compatibility.

Observers say search engines could lose relevance as shoppers turn to AI instead. Regulators remain cautious, particularly in markets like Australia, where many consumers are open to AI-led transactions.

The industry faces a shift where chatbots may evolve into full-scale digital marketplaces. Brands are urged to act quickly, or risk losing out as AI commerce becomes the norm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI news summaries to affect the future of journalism

Generative AI tools like ChatGPT significantly impact traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search, users click suggested links half as often as in traditional search formats.

Matt Karolian of Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away.’

While some, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, these numbers pale in comparison with other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so that it is more likely to be surfaced and cited by AI tools. Some have blocked crawlers to prevent data harvesting, while others have reopened access to retain visibility.
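As an illustration of the crawler-blocking approach mentioned above, here is a minimal sketch of how a publisher might refuse known AI crawlers at the application layer. The Flask app and the list of user-agent tokens are assumptions for illustration only; in practice such rules usually live in robots.txt or in CDN and web-server configuration rather than application code.

```python
# Minimal sketch: refuse requests from known AI crawlers by User-Agent.
# The Flask app and the token list below are illustrative assumptions.
from flask import Flask, abort, request

app = Flask(__name__)

# Publicly documented crawler tokens; a real deployment would keep this
# list in configuration and update it as new crawlers appear.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")


@app.before_request
def block_ai_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    if any(token in user_agent for token in AI_CRAWLER_TOKENS):
        abort(403)  # refuse the request before any content is served


@app.route("/")
def index():
    return "Regular readers see this page; listed crawlers receive 403."
```

Checks like this depend on crawlers declaring themselves honestly in the User-Agent header, which is why some publishers pair them with network-level controls from infrastructure providers.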

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft offers $5 million for cloud and AI vulnerabilities

Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.

Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365, and other significant services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.

Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus. The event promises collaboration with product teams and the Microsoft Security Response Center.

Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.

The competition aligns with Microsoft’s Secure Future Initiative, which aims to make cloud and AI systems secure by design, secure by default, and secure in operation. Vulnerabilities will be disclosed transparently, even if no customer action is needed.

Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple develops smart search engine to rival ChatGPT

Apple is developing an AI-powered answer engine to rival ChatGPT, marking a strategic turn in the company’s AI approach. The move comes as Apple aims to close the gap with competitors in the fast-moving AI race.

A newly formed internal team, Answers, Knowledge and Information, is working on a tool to browse the web and deliver direct responses to users.

Led by former Siri head Robby Walker, the project is expected to expand across key Apple services, including Siri, Safari and Spotlight.

Job postings suggest Apple is recruiting talent with search engine and algorithm expertise. CEO Tim Cook has signalled Apple’s willingness to acquire companies that could speed up its AI progress.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Late-stage GenAI deals triple, Ireland sees growing interest

According to EY Ireland, global investment in generative AI surged to $49.2bn in the first half of 2025, eclipsing the full-year total for 2024. Despite a drop in deals, total value doubled year-on-year, reflecting a pivot towards more mature and revenue-focused ventures.

Average late-stage deal size has more than tripled to $1.55bn, while early and seed-stage activity has stagnated or declined. Landmark rounds from OpenAI, xAI, Anthropic, and Databricks drove much of the volume, alongside a notable $3.3bn agentic AI acquisition by Capgemini.

Ireland remains a strong adopter of AI, with 63% of startups using the technology. Yet funding gaps persist, particularly between €1m and €10m, posing challenges for growth-stage firms despite a strong local talent base.

Sprout Social’s acquisition of Irish analytics firm NewsWhip, though not part of the H1 figures, points to growing international interest in Irish AI capabilities. Meanwhile, US firms still dominate global deal value, capturing 97%, with the Middle East rising fast and Europe trailing at just 2%.

EY forecasts that sector-specific GenAI platforms, especially in cybersecurity and compliance, will become the next magnet for venture capital through late 2025 and beyond.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring websites’ explicit instructions not to scrape their content.

According to the internet infrastructure company, Perplexity disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
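For readers unfamiliar with the mechanism, the sketch below shows how a well-behaved crawler consults robots.txt before fetching a page, using Python’s standard-library parser. The site URL and bot name are placeholders; the key point is that compliance is voluntary and keyed to whatever identity the bot declares, which is why a spoofed user agent sidesteps the rules entirely.

```python
# Minimal sketch of a compliant crawler checking robots.txt before fetching.
# The site URL and the declared bot name are placeholders for illustration.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # download and parse the site's robots.txt

BOT_USER_AGENT = "ExampleBot"  # the identity the crawler declares
page = "https://example.com/articles/some-story"

if rp.can_fetch(BOT_USER_AGENT, page):
    print("robots.txt allows fetching this page")
else:
    print(f"robots.txt disallows this page for {BOT_USER_AGENT}")
```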

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI to improve its ability to detect mental or emotional distress

People have reportedly turned to ChatGPT as a ‘therapist’ when seeking emotional support during a mental health crisis. While this may seem like a convenient outlet, reports have shown that ChatGPT’s responses have at times amplified people’s delusions rather than helping them find coping mechanisms. As a result, OpenAI says it plans to improve the chatbot’s ability to detect mental distress in the new GPT-5 AI model, which is expected to launch later this week.

OpenAI admits that GPT-4 sometimes failed to recognise signs of delusion or emotional dependency, especially in vulnerable users. To encourage healthier use of ChatGPT, which now serves nearly 700 million weekly users, OpenAI is introducing break reminders during long sessions, prompting users to pause or continue chatting.

Additionally, it plans to refine how and when ChatGPT displays break reminders, following a trend seen on platforms like YouTube and TikTok.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!