Malaysia expands AI learning across universities with Google tools

AI tools from Google are now available across all public universities in Malaysia after the nationwide deployment of Gemini for Education.

The initiative integrates AI capabilities into university systems, providing digital research and learning support to nearly 600,000 students and 75,000 faculty members.

The rollout is coordinated with the Ministry of Higher Education Malaysia as part of the country’s broader strategy to become an AI-driven economy by 2030. Universities already using Google Workspace for Education can now access advanced tools, including NotebookLM and the reasoning model Gemini 3.1 Pro, which are designed to support research, writing and personalised learning.

Several universities are already experimenting with AI-assisted teaching. At Universiti Malaysia Perlis, lecturers have created customised AI assistants to guide students through specialised engineering courses.

Meanwhile, researchers and students at Universiti Putra Malaysia are using AI tools to improve literature reviews and academic research workflows.

Other institutions are focusing on digital literacy and AI skills.

At Universiti Malaysia Sarawak, hundreds of lecturers and students are receiving AI certifications, while training programmes are expanding across campuses.

Officials believe the combination of AI tools, training and research support will strengthen Malaysia’s education system and prepare graduates for an increasingly AI-driven economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Blockchain and AI security central to US cyber framework

The US National Cyber Strategy emphasises support for emerging technologies, including blockchain, cryptocurrencies, AI, and post-quantum cryptography. The strategy highlights the importance of securing digital infrastructure while advancing technological leadership.

The strategy rests on six pillars, including modernising federal networks, protecting critical infrastructure, and advancing secure technology. Specific sections reference cryptocurrencies and blockchain, noting the need to safeguard digital systems from design to deployment.

Financial systems, data centres, and telecommunications networks are identified as key components of the broader cybersecurity framework. The strategy also stresses collaboration with private-sector technology companies and research institutions to foster innovation and strengthen protections.

AI plays a central role, with measures to secure AI data centres and deploy AI-driven tools for network defence. The plan avoids direct crypto rules but signals greater integration of blockchain and cryptography into national digital infrastructure.

Samsung’s AI smart glasses are coming to take on Meta Ray-Ban

Samsung has confirmed key details about its upcoming AI smart glasses, including a camera positioned at ‘eye level’ and smartphone connectivity, ahead of a planned 2026 launch.

The device is being developed in partnership with Qualcomm and Google, building on the same ecosystem that produced the Galaxy XR headset, and will be powered by Google’s Gemini AI.

Samsung executive Jay Kim indicated that the glasses will be able to understand ‘where you’re looking at’, allowing the AI to analyse objects or scenes in the user’s field of view and provide contextual information in real time.

Processing is expected to take place on a connected smartphone rather than within the glasses themselves, and Samsung has not confirmed whether a built-in display will be included, suggesting multiple versions may be in development.

The announcement puts Samsung on a direct collision course with Meta, whose Ray-Ban Meta Gen 2 glasses are already on the market, offering 3K video recording and up to eight hours of battery life. Meta has also launched the Oakley Meta HSTN glasses, aimed at sports and outdoor users.

AI biotech firm pushes limits of human lifespan

Longevity research is gaining momentum as AI transforms the way scientists search for new medicines. Insilico Medicine, founded by Alex Zhavoronkov in 2014, combines machine learning and automation to study ageing and accelerate drug discovery.

Company research focuses on identifying biological targets linked to ageing and developing molecules to treat related diseases. Several experimental treatments have already received Investigational New Drug clearance, allowing them to move towards human clinical trials.

Insilico also became the first AI-driven biotech company to list on the Hong Kong Stock Exchange, raising HK$2.28 billion in its public offering. Zhavoronkov said careful financial planning was essential because enthusiasm around AI could still form a market bubble.

Expansion plans now include deeper partnerships across China and the Middle East. A new collaboration in the UAE aims to build regional AI drug discovery programmes and diversify economies beyond oil.

Beyond medicines, Zhavoronkov envisions integrated biotech ecosystems where living spaces, healthcare and research operate together. Such hubs would allow scientists and citizens to contribute health data that helps develop future treatments.

Network slicing unlocks powerful opportunities for Africa’s 5G future

Accelerating the deployment of standalone 5G networks is the most critical step for enabling network slicing in Africa. Standalone 5G uses cloud-native cores that allow operators to create and manage virtual network slices with guaranteed performance. Many African networks still rely on non-standalone architecture, which limits full slicing capabilities.

Releasing and harmonising mid-band spectrum is another key policy priority. Spectrum in the 3.5 GHz band is particularly important for delivering high throughput and low latency. Without timely spectrum allocation, operators may struggle to support advanced industrial and enterprise applications.

Clear enterprise service frameworks are also essential. Industries such as mining, logistics, and energy require reliable connectivity with strict service-level agreements. Regulators and operators must define transparent pricing models and performance guarantees to support enterprise adoption.

Investment in automation and technical skills will also play a central role. Network slicing relies on AI-driven orchestration, cloud infrastructure, and cybersecurity capabilities. Strengthening technical expertise will help operators manage complex network environments.

Once these policy foundations are in place, network slicing can unlock new business models for telecom providers. Operators can offer slice-as-a-service, allowing enterprises to subscribe to dedicated network segments tailored to specific operational needs.

African telecom companies are already exploring these opportunities. Operators such as MTN, Vodacom, Safaricom, and Telkom are developing enterprise connectivity solutions for sectors including mining, manufacturing, logistics, and energy.

Private 5G deployments in mining operations illustrate the potential value of these services. Dedicated networks support automation, real-time monitoring, and remote equipment management. These projects often involve multi-year contracts worth several million dollars.

Network slicing also enables telecom providers to move beyond traditional consumer data services. Instead of charging primarily for data volume, operators can generate revenue from long-term enterprise connectivity and managed digital services.

As 5G infrastructure expands across the continent, network slicing is expected to play an increasing role in enterprise connectivity. By aligning network performance with industry needs, it could become a key driver of digital transformation in Africa.

AI security risks grow as companies integrate AI into daily workflows

AI is rapidly transforming workplaces as companies automate tasks and boost productivity. From writing code to analysing documents, AI tools help employees work faster, but also introduce new AI security and compliance risks.

One of the main concerns is the handling of sensitive information. Employees may upload confidential documents, proprietary code, or customer data into AI chatbots without realising the consequences. Doing so could violate privacy regulations such as the EU’s GDPR or breach internal non-disclosure agreements, making AI security an important priority for organisations.

Another challenge is the reliability of AI-generated content. While large language models can produce convincing responses, they sometimes generate false information, a phenomenon known as hallucination. High-profile cases have already shown professionals submitting work with fabricated references generated by AI. Such incidents highlight the need for rigorous oversight of AI outputs.

Cybersecurity risks are also growing. AI systems rely on complex infrastructure that can become targets for attackers through techniques such as prompt injection, which tricks the model into producing unintended responses, or data poisoning, which involves injecting malicious data into training sets to alter behaviour or outputs. Addressing these threats requires stronger AI security practices and careful monitoring.

When adopting AI, organisations must develop clear policies, strengthen cybersecurity measures, and maintain human oversight. Taking those steps is essential to ensuring that the technology is used safely and responsibly.

Online scams rise as Parkin urges Dubai residents to stay vigilant

Dubai’s parking provider, Parkin, has warned residents to stay alert as online scams targeting digital service users continue to rise, urging people to take immediate steps to protect their digital identities.

In an advisory, the company stressed that official entities will never ask users to log in or disclose sensitive information through unsolicited messages, emails, or phone calls. The warning comes amid growing concerns about phishing attempts and other online scams targeting users of digital platforms.

Parkin said residents should exercise caution if they receive unexpected requests for personal details, passwords, or verification codes. Users are strongly advised not to respond to suspicious links, attachments, or messages from unknown sources, which are commonly used in online scams.

The operator also urged the public to verify the authenticity of communications before taking any action. Residents who are unsure about the legitimacy of a message should check official websites or contact customer service channels directly. The advice applies to messages claiming to come from Parkin or other service providers.

Authorities and service providers across the UAE have repeatedly warned that cybercriminals often impersonate trusted organisations in online scams designed to steal sensitive information. Such attacks can lead to identity theft, financial losses, or unauthorised access to personal accounts.

Parkin encouraged residents who receive suspicious communications to report them through official channels so that appropriate action can be taken. The company added that staying vigilant and safeguarding personal data remain essential to preventing online scams.

AI tools linked to rise in abuse disclosures

Support organisations in the UK report that some abuse survivors are turning to AI tools such as ChatGPT before contacting helplines. Charities say individuals increasingly use AI to explore their experiences and seek guidance before approaching professional support services.

The National Association of People Abused in Childhood said callers in the UK have recently reported being referred to its helpline after conversations with ChatGPT. Staff say AI is being used as an informal step in processing trauma.

Law enforcement and support groups have also recorded a rise in disclosures involving ritualistic sexual abuse, although authorities say only 14 criminal cases since 1982 have formally recognised such practices.

Police and support organisations are responding by improving training and launching specialist working groups. Officials aim to strengthen the identification and investigation of complex cases of abuse.

Codex Security expands OpenAI’s push into cybersecurity tools

OpenAI has launched Codex Security, an AI-powered application security agent that detects hard-to-find software vulnerabilities and proposes fixes through advanced reasoning. Drawing on detailed context about a system’s architecture, the tool identifies security risks that conventional automation often misses.

The system uses advanced models to analyse repositories, construct project-specific threat models, and prioritise vulnerabilities based on their potential real-world impact. By combining automated validation with system-level context, Codex Security aims to reduce the number of false positives that security teams must review while highlighting high-confidence findings.

Initially developed under the name Aardvark, the tool has been tested in private deployments over the past year. During early use, OpenAI said it uncovered several critical vulnerabilities, including a cross-tenant authentication flaw and a server-side request forgery issue, allowing internal teams to quickly patch affected systems.

The company says improvements during the beta phase significantly reduced noise in vulnerability reports. In some repositories, unnecessary alerts fell by 84 percent, while over-reported severity dropped by more than 90 percent, and false positives declined by more than half.

Codex Security is now rolling out in research preview for ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI also plans to expand access to open-source maintainers through a dedicated programme that offers security scanning and support to help identify and remediate vulnerabilities across widely used projects.

Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. Large language models (LLMs) could match anonymised accounts to real identities by analysing users’ posts across platforms.

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media, such as hospital or statistical records, may be exposed to unintended analysis.

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.

The study underscores growing concerns about AI surveillance. While the technology cannot guarantee complete de-anonymisation, its rapidly advancing capabilities demand stronger safeguards to protect privacy online.
