Malaysia has intensified its push to build an AI-ready workforce, with Huawei pledging to train 30,000 local professionals under a new initiative. The plan aligns with Malaysia’s National Cloud Computing Policy, balancing sovereignty and digital economy competitiveness.
Digital Minister Gobind Singh Deo stressed that AI adoption must benefit all Malaysians, highlighting applications from small business platforms to AI-assisted diagnostics in remote clinics. He urged collaboration across industries to ensure inclusivity as the country pursues its digital future.
Huawei’s recent Gartner recognition for container management underscores its cloud-native strength. Its Pangu models and container products are expected to support Malaysia’s AI goals in manufacturing, healthcare, transport, and other ASEAN industries.
The programme will target students, officials, industry leaders, and associations while supporting 200 local AI partners. Huawei’s network of availability zones in ASEAN provides low-latency infrastructure, with AI-native innovations designed to accelerate training, inference, and industrial upgrades.
The government of Malaysia views AI as crucial to achieving its 2030 goals, which aim to balance infrastructure, security, and governance. With Huawei’s backing and a new policy framework, the country seeks to establish itself as a regional hub for AI expertise.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Enterprise employees are increasingly building their own AI tools, sparking a surge in shadow AI that raises security concerns.
Netskope reports a 50% rise in generative AI platform use, with over half of current adoption estimated to be unsanctioned by IT.
Platforms like Azure OpenAI, Amazon Bedrock, and Vertex AI lead this trend, allowing users to connect enterprise data to custom AI agents.
The growth of shadow AI has prompted calls for better oversight, real-time user training, and updated data loss prevention strategies.
On-premises deployment is also increasing, with 34% of firms using local LLM interfaces such as Ollama and LM Studio. Security risks grow as AI agents retrieve data via API calls made outside the browser, particularly to OpenAI and Anthropic endpoints.
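Security teams can begin surfacing this kind of non-browser AI traffic from existing proxy or egress logs. The sketch below is a minimal, hypothetical illustration (the log record format and the endpoint list are assumptions, not Netskope's tooling or any vendor's schema): it flags requests to well-known AI API hosts whose user agent does not look like a browser.

```python
# Minimal sketch: flag non-browser calls to known AI API endpoints
# from simplified proxy log records. The record format and the host
# list below are illustrative assumptions, not a real vendor schema.

AI_API_HOSTS = {"api.openai.com", "api.anthropic.com"}
BROWSER_MARKERS = ("Mozilla/", "Chrome/", "Safari/")

def flag_shadow_ai(records):
    """Return records that hit AI API hosts without a browser user agent."""
    flagged = []
    for rec in records:
        if rec["host"] in AI_API_HOSTS and not rec["user_agent"].startswith(BROWSER_MARKERS):
            flagged.append(rec)
    return flagged

logs = [
    {"host": "api.openai.com", "user_agent": "python-requests/2.31.0", "user": "alice"},
    {"host": "api.openai.com", "user_agent": "Mozilla/5.0 (Windows NT 10.0)", "user": "bob"},
    {"host": "api.anthropic.com", "user_agent": "curl/8.4.0", "user": "carol"},
    {"host": "example.com", "user_agent": "curl/8.4.0", "user": "dave"},
]

for hit in flag_shadow_ai(logs):
    print(hit["user"], hit["host"])
```

In practice this would feed a data loss prevention pipeline rather than a print loop, but the core signal is the same: an AI endpoint plus a programmatic client points to an agent or script, not a sanctioned browser session.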
Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.
Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.
The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.
Its design includes quantisation-aware training so it can run in low-precision formats such as INT4, reducing memory use and improving speed on mobile processors without requiring extensive computational power.
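As an illustration of what INT4 storage involves (a simplified sketch, not Google's actual training pipeline), the snippet below quantises a small weight vector to 4-bit integers with a shared per-tensor scale and measures the round-trip error; quantisation-aware training exposes the model to exactly this kind of rounding during training so accuracy survives it at inference time.

```python
# Simplified per-tensor INT4 round-trip. This sketches only the storage
# format: 4-bit signed codes in [-8, 7] plus one float scale. Real
# quantisation-aware training simulates this rounding during training.

def quantize_int4(weights):
    """Map floats to integer codes in [-8, 7] with a shared scale."""
    scale = max(abs(w) for w in weights) / 7.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from codes and scale."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 0.7, -0.07]
codes, scale = quantize_int4(weights)
recovered = dequantize(codes, scale)

max_err = max(abs(a - b) for a, b in zip(weights, recovered))
print(codes)
print(round(max_err, 4))
```

Each weight now occupies 4 bits instead of 32, roughly an 8x memory saving per tensor, at the cost of the small rounding error measured above.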
Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.
Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.
Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.
Google has announced a $9 billion investment in Oklahoma over the next two years to expand cloud and AI infrastructure.
The funds will support a new data centre campus in Stillwater and an expansion of the existing facility in Pryor, forming part of a broader $1 billion commitment to American education and competitiveness.
The announcement was made alongside Governor Kevin Stitt, Alphabet and Google executives, and community leaders.
Alongside the infrastructure projects, Google is funding education and workforce initiatives with the University of Oklahoma and Oklahoma State University through the Google AI for Education Accelerator.
Students will gain no-cost access to Career Certificates and AI training courses, helping them build critical AI and job-ready skills beyond the standard curriculum.
Additional funding will support ALLIANCE’s electrical training to expand Oklahoma’s electrical workforce by 135%, creating the talent needed to power AI-driven energy infrastructure.
Google described the investment as part of an ‘extraordinary time for American innovation’ and a step towards maintaining US leadership in AI.
The move also addresses national security concerns, ensuring the country has the infrastructure and expertise to compete with domestic rivals like OpenAI and Anthropic, as well as international competitors such as China’s DeepSeek.
GitHub CEO Thomas Dohmke has announced his decision to step down later in the year to pursue new entrepreneurial ventures.
Instead of appointing a new CEO, Microsoft will integrate GitHub more closely into its CoreAI division. Since Microsoft acquired GitHub in 2018, the platform has operated largely independently, but under this change its leadership will report directly to several Microsoft executives.
Under Dohmke’s leadership since 2021, GitHub’s user base more than doubled to over 150 million developers, supporting over one billion repositories and forks.
The platform has become essential to Microsoft’s AI and developer strategy, especially with growing competition from Google, Replit, and others in the AI coding market.
GitHub recently launched advanced AI tools like Copilot, which suggest code and automate programming tasks, helping developers work more efficiently.
Microsoft’s investment in AI is shaping the future of coding, with GitHub playing a central role by providing direct access to developers worldwide.
Dohmke will remain with Microsoft until the end of the year to assist with the transition, emphasising GitHub’s importance to Microsoft’s broader ambitions in AI and cloud computing.
Security researcher Dirk-jan Mollema demonstrated methods for bypassing authentication in hybrid Active Directory (AD) and Entra ID environments at the Black Hat conference in Las Vegas. The techniques could let attackers impersonate any synced hybrid user, including privileged accounts, without triggering alerts.
Mollema showed how a low-privilege cloud account can be converted into a hybrid user, granting administrative rights. He also demonstrated ways to modify internal API policies, bypass enforcement controls, and impersonate Exchange mailboxes to access emails, documents, and attachments.
Microsoft has addressed some issues by hardening global administrator security and removing specific API permissions from synchronised accounts. However, a complete fix is expected only in October 2025, when hybrid Exchange and Entra ID services will be separated.
Until then, Microsoft recommends auditing synchronisation servers, using hardware key storage, monitoring unusual API calls, enabling hybrid application splitting, rotating SSO keys, and limiting user permissions.
Experts say hybrid environments remain vulnerable if the weakest link is exploited, making proactive monitoring and least-privilege policies critical to defending against these threats.
South Korean chipmaker SK Hynix forecasts that the market for high-bandwidth memory (HBM) chips, vital for AI, will expand by 30% annually until 2030. Demand growth is driven by cloud giants like Amazon, Microsoft, and Google, boosting AI investments and memory needs.
HBM chips are specialised dynamic RAM designed for ultra-fast data processing with low energy use. SK Hynix’s head of HBM business planning, Choi Joon-yong, highlighted the strong link between AI infrastructure growth and HBM chip purchases.
Customised HBM products, tailored to specific AI models and workloads, are expected to form a multibillion-dollar market by 2030.
The upcoming HBM4 generation introduces client-specific ‘base die’ layers, allowing performance to be fine-tuned to match exact customer requirements. Such customisation builds strong supplier-client ties, benefiting SK Hynix and strengthening partnerships with key customers like Nvidia.
SK Hynix remains confident despite short-term price pressures from a potential oversupply of HBM3E chips. The company believes the launch of HBM4 and rising demand for tailored solutions will sustain growth.
Geopolitical factors, such as proposed US tariffs on foreign chip imports, have had a limited impact on SK Hynix, given its significant US manufacturing investments.
A report by Australia’s eSafety Commissioner showed that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. In particular, it found that Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Both companies also failed to disclose the number of trust and safety staff they employ, highlighting ongoing transparency and accountability issues in protecting children online.
In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.
These notices legally require recipients to comply within a set timeframe. Under the order, each company was required to report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated content.
Although similar notices were issued in 2022 and 2023, the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner Julie Inman Grant.
Apple did not use hash-matching tools to detect known CSEA images on iCloud (which was opt-in and end-to-end encrypted), nor known CSEA videos on iCloud or iCloud email. On iMessage and FaceTime (both end-to-end encrypted), Apple relied solely on Communication Safety, its intervention for flagging images or videos likely to contain nudity, as a means of ‘detecting’ CSEA.
Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor did it detect known CSEA videos on Google Chat, Google Messages, or Gmail.
Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook.
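For readers unfamiliar with the technique at issue, hash-matching works by comparing a digest of uploaded content against a database of digests of known prohibited material. The sketch below is a generic illustration using benign placeholder bytes; production systems typically rely on perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas the cryptographic hash here only demonstrates the lookup step.

```python
# Generic sketch of hash-matching: a file's digest is checked against
# a set of digests of known material. Benign placeholder bytes stand in
# for the database; real deployments use perceptual hashes that tolerate
# re-encoding, not exact cryptographic matches.

import hashlib

# Hypothetical "known content" database built from sample bytes.
known_samples = [b"sample-known-content-1", b"sample-known-content-2"]
known_hashes = {hashlib.sha256(s).hexdigest() for s in known_samples}

def is_known(content: bytes) -> bool:
    """True if the content's SHA-256 digest appears in the known set."""
    return hashlib.sha256(content).hexdigest() in known_hashes

print(is_known(b"sample-known-content-1"))  # exact bytes: match
print(is_known(b"unseen-content"))          # absent from the database
```

Because only digests are compared, the matching service never needs the original material, which is why regulators treat the technique as deployable even at scale.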
Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.
Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365, and other significant services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.
Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus. The event promises collaboration with product teams and the Microsoft Security Response Centre.
Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.
The competition aligns with Microsoft’s Secure Future Initiative, which aims to strengthen cloud and AI security by default, design, and operation. Vulnerabilities will be disclosed transparently, even if no customer action is needed.
Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.