GenAI app usage up 50% as firms struggle with oversight

Enterprise employees are increasingly building their own AI tools, sparking a surge in shadow AI that raises security concerns.

Netskope reports a 50% rise in generative AI platform use, with over half of current adoption estimated to be unsanctioned by IT.

Platforms like Azure OpenAI, Amazon Bedrock, and Vertex AI lead this trend, allowing users to connect enterprise data to custom AI agents.

The growth of shadow AI has prompted calls for better oversight, real-time user training, and updated data loss prevention strategies.

On-premises deployment is also increasing, with 34% of firms using local LLM interfaces like Ollama and LM Studio. Security risks grow as AI agents retrieve data through direct API calls made outside the browser, particularly to OpenAI and Anthropic endpoints.
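
The ‘beyond browsers’ point is easy to miss: agent traffic is plain HTTP from scripts and background processes, which browser-focused DLP controls never inspect. The sketch below is a minimal Python illustration, assuming the requests library and placeholder model names, of how the same script can query a local Ollama server and a hosted OpenAI endpoint directly.

```python
import os
import requests

# Direct API call to a local Ollama server (default port 11434).
# Traffic like this never passes through a browser, so web-proxy
# DLP controls do not see it.
def ask_ollama(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Direct call to a hosted endpoint (OpenAI's chat completions API).
def ask_openai(prompt: str, model: str = "gpt-4o-mini") -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # An unsanctioned script could just as easily send internal data here.
    print(ask_ollama("Summarise this quarter's sales figures: ..."))
```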

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches small AI model for mobiles and IoT

Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.

Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.

The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.

Its design includes quantisation-aware training, allowing it to run in low-precision formats such as INT4, which reduces memory use and improves speed on mobile processors without requiring extensive computational power.
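
For intuition on how INT4 saves memory: each weight is stored as a 4-bit integer in the range −8 to 7 plus a shared scale factor, rather than a 32-bit float. The Python sketch below shows plain post-training symmetric quantisation with NumPy; quantisation-aware training goes further by simulating this rounding during training so the model learns to tolerate it, but the storage arithmetic is the same.

```python
import numpy as np

def quantise_int4(w: np.ndarray):
    """Symmetric INT4 quantisation: map floats to integers in [-8, 7]."""
    scale = np.abs(w).max() / 7.0          # one shared scale per tensor
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantise_int4(w)
w_hat = dequantise(q, scale)

# Memory drops ~8x versus float32 (4 bits vs 32 bits per weight),
# at the cost of a small reconstruction error:
print("max error:", np.abs(w - w_hat).max())
```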

Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.

Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.

Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google backs workforce and AI education in Oklahoma with a $9 billion investment

Google has announced a $9 billion investment in Oklahoma over the next two years to expand cloud and AI infrastructure.

The funds will support a new data centre campus in Stillwater and an expansion of the existing facility in Pryor, while the accompanying education programmes form part of Google’s broader $1 billion commitment to American education and competitiveness.

The announcement was made alongside Governor Kevin Stitt, Alphabet and Google executives, and community leaders.

Alongside the infrastructure projects, Google is funding education and workforce initiatives with the University of Oklahoma and Oklahoma State University through the Google AI for Education Accelerator.

Students will gain no-cost access to Career Certificates and AI training courses, helping them acquire critical AI and job-ready skills beyond the standard curriculum.

Additional funding will support ALLIANCE’s electrical training to expand Oklahoma’s electrical workforce by 135%, creating the talent needed to power AI-driven energy infrastructure.

Google described the investment as part of an ‘extraordinary time for American innovation’ and a step towards maintaining US leadership in AI.

The move also addresses national security concerns, ensuring the country has the infrastructure and expertise to keep pace with domestic rivals like OpenAI and Anthropic, as well as international competitors such as China’s DeepSeek.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GitHub CEO to leave as Microsoft integrates platform into CoreAI amid AI coding race

GitHub CEO Thomas Dohmke has announced his decision to step down later in the year to pursue new entrepreneurial ventures.

Instead of appointing a new CEO, Microsoft will integrate GitHub more closely into its CoreAI division. Since Microsoft acquired GitHub in 2018, the platform has operated largely independently, but under the new arrangement its leadership will report directly to Microsoft executives.

Under Dohmke’s leadership since 2021, GitHub’s user base more than doubled to over 150 million developers, supporting over one billion repositories and forks.

The platform has become essential to Microsoft’s AI and developer strategy, especially with growing competition from Google, Replit, and others in the AI coding market.

GitHub recently launched advanced AI tools like Copilot, which suggest code and automate programming tasks, helping developers work more efficiently.

Microsoft’s investment in AI is shaping the future of coding, with GitHub playing a central role by providing direct access to developers worldwide.

Dohmke will remain with Microsoft until the end of the year to assist with the transition, a move that underscores GitHub’s importance to Microsoft’s broader ambitions in AI and cloud computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Black Hat demo reveals risks in hybrid Microsoft environments

Security researcher Dirk-jan Mollema demonstrated methods for bypassing authentication in hybrid Active Directory (AD) and Entra ID environments at the Black Hat conference in Las Vegas. The techniques could let attackers impersonate any synced hybrid user, including privileged accounts, without triggering alerts.

Mollema showed how a low-privilege cloud account can be converted into a hybrid user, granting administrative rights. He also showed how attackers could modify internal API policies, bypass enforcement controls, and impersonate Exchange mailboxes to access emails, documents, and attachments.

Microsoft has addressed some issues by hardening global administrator security and removing specific API permissions from synchronised accounts. However, a complete fix is expected only in October 2025, when hybrid Exchange and Entra ID services will be separated.

Until then, Microsoft recommends auditing synchronisation servers, using hardware key storage, monitoring unusual API calls, enabling hybrid application splitting, rotating SSO keys, and limiting user permissions.
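
To make the ‘monitoring unusual API calls’ recommendation concrete, here is a deliberately simplified Python sketch of a volume-based baseline check. The record format, endpoint names, and thresholds are all assumptions for illustration, not a Microsoft API or product feature.

```python
from collections import Counter

# Hypothetical audit records: (account, endpoint) pairs exported from
# whatever logging pipeline the organisation already runs.
baseline = Counter()   # long-run call counts per (account, endpoint)
recent = Counter()     # call counts in the window under review

def flag_anomalies(recent, baseline, window_ratio=0.1, factor=5.0):
    """Flag pairs whose recent volume exceeds `factor` times the scaled
    baseline. `window_ratio` is the window's share of the baseline
    period; both thresholds are illustrative, not recommended values."""
    alerts = []
    for key, count in recent.items():
        expected = baseline.get(key, 0) * window_ratio
        if count > max(expected * factor, 10):  # floor suppresses noise
            alerts.append((key, count, expected))
    return alerts

# Example: a synced service account suddenly hammering a
# provisioning endpoint far above its historical rate.
baseline[("svc-sync", "/provisioning")] = 100
recent[("svc-sync", "/provisioning")] = 400
print(flag_anomalies(recent, baseline))
```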

Experts say hybrid environments remain vulnerable if the weakest link is exploited, making proactive monitoring and least-privilege policies critical to defending against these threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

SK Hynix forecasts strong AI memory chip growth

South Korean chipmaker SK Hynix forecasts that the market for high-bandwidth memory (HBM) chips, vital for AI, will expand by 30% annually until 2030. Demand is driven by cloud giants like Amazon, Microsoft, and Google, whose growing AI investments are boosting memory needs.

HBM chips are specialised dynamic RAM designed for ultra-fast data processing with low energy use. SK Hynix’s head of HBM business planning, Choi Joon-yong, highlighted the strong link between AI infrastructure growth and HBM chip purchases.
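
The bandwidth advantage comes from an extremely wide interface rather than a fast clock alone, and the arithmetic is easy to check. The Python sketch below uses illustrative figures roughly in line with published HBM3E-class specifications, not numbers confirmed by SK Hynix.

```python
# Per-stack HBM bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8
bus_width_bits = 1024       # HBM's signature ultra-wide interface
pin_rate_gbps = 9.6         # roughly HBM3E-class per-pin data rate

bandwidth_gbps = bus_width_bits * pin_rate_gbps / 8
print(f"~{bandwidth_gbps:.0f} GB/s per stack")              # ~1229 GB/s

# An accelerator fitted with 8 such stacks would see on the order of:
print(f"~{8 * bandwidth_gbps / 1000:.1f} TB/s aggregate")   # ~9.8 TB/s
```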

Customised HBM products, tailored to specific AI models and workloads, are expected to form a multibillion-dollar market by 2030.

The upcoming HBM4 generation introduces client-specific ‘base die’ layers, allowing performance to be fine-tuned to match exact customer requirements. Such customisation builds strong supplier-client ties, benefiting SK Hynix and strengthening partnerships with key customers like Nvidia.

SK Hynix remains confident despite short-term price pressures from a potential oversupply of HBM3E chips. The company believes the launch of HBM4 and rising demand for tailored solutions will sustain growth.

Thanks to its significant US manufacturing investments, SK Hynix has felt limited impact from geopolitical factors such as proposed US tariffs on foreign chip imports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants under fire in Australia for failing online child protection standards

A report from Australia’s eSafety Commissioner found that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. Notably, Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Both companies also failed to disclose the number of trust and safety staff they employ, highlighting ongoing transparency and accountability issues in protecting children online.

In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.

These notices legally require recipients to comply within a set timeframe. Under the order, each company was required to report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated content.

Although similar notices were issued in 2022 and 2023, the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner, Julie Inman Grant.

Key findings from the eSafety Commissioner are listed below (a brief sketch of how hash-matching works follows the list):

  • Apple did not use hash-matching tools to detect known child sexual exploitation and abuse (CSEA) images on iCloud (which was opt-in, end-to-end encrypted) and did not use hash-matching tools to detect known CSEA videos on iCloud or iCloud email. For iMessage and FaceTime (which were end-to-end encrypted), Apple only used Communication Safety, Apple’s safety intervention to identify images or videos that likely contain nudity, as a means of ‘detecting’ CSEA.
  • Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
  • Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor did it detect known CSEA videos on Google Chat, Google Messages, or Gmail.
  • Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook.
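
The findings above turn repeatedly on hash-matching: comparing a compact fingerprint of an uploaded file against a database of fingerprints of known abuse material. The minimal Python sketch below uses a plain SHA-256 digest and made-up database entries for clarity; production systems rely on perceptual hashes (for example PhotoDNA) that also survive resizing and re-encoding, which a cryptographic hash does not.

```python
import hashlib

# Fingerprints of known material, supplied by clearing houses.
# The entries here are placeholders, not real digests.
known_hashes = {
    "9f2b0c...": "known-item-001",
}

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> str | None:
    """Return the match ID if the file's digest is in the database."""
    return known_hashes.get(sha256_hex(data))

# Any byte-identical re-upload of a known file is caught instantly;
# perceptual hashing extends this to visually similar copies.
print(check_upload(b"example upload bytes"))  # None: not a known file
```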

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft offers $5 million for cloud and AI vulnerabilities

Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.

Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365, and other significant services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.

Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus. The event promises collaboration with product teams and the Microsoft Security Response Center (MSRC).

Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.

The competition aligns with Microsoft’s Secure Future Initiative, which aims to make cloud and AI systems secure by design, secure by default, and secure in operation. Vulnerabilities will be disclosed transparently, even if no customer action is needed.

Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon reports $18.2B profit boost as AI strategy takes off

Amazon has reported a 35% increase in quarterly profit, driven by rapid growth in its AI-powered services and cloud computing arm, Amazon Web Services (AWS).

The tech and e-commerce giant posted net income of $18.2 billion for Q2 2025, up from $13.5 billion a year earlier, while net sales rose 13% to $167.7 billion and exceeded analyst expectations.

CEO Andy Jassy attributed the strong performance to the company’s growing reliance on AI. ‘Our conviction that AI will change every customer experience is starting to play out,’ Jassy said, referencing Amazon’s AI-powered Alexa+ upgrades and new generative AI shopping tools.

AWS remained the company’s growth engine, with revenue climbing 17.5% to $30.9 billion and operating profit rising to $10.2 billion. The surge reflects the increasing demand for cloud infrastructure to support AI deployment across industries.

Despite the solid earnings, Amazon’s share price dipped more than 3% in after-hours trading. Analysts pointed to concerns over the company’s heavy capital spending, particularly its aggressive $100 billion AI investment strategy.

Free cash flow over the past year fell to $18.2 billion, down from $53 billion a year earlier. In Q2 alone, Amazon spent $32.2 billion on infrastructure, nearly double the previous year’s figure, much of it aimed at expanding its data centre and logistics capabilities to support AI workloads.

For the current quarter, Amazon projected revenue of $174.0 billion to $179.5 billion and operating income between $15.5 billion and $20.5 billion, slightly below investor hopes but still reflecting double-digit year-on-year growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
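
Google has not published the model itself, so the following Python sketch is purely hypothetical: the behavioural features, training labels, and choice of logistic regression are all assumptions made to illustrate how signals like those described could feed a binary under-18 classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioural features per account:
# [share of school-topic queries, avg session hour, short-video ratio]
X = np.array([
    [0.8, 16.0, 0.9],   # accounts labelled under 18
    [0.7, 15.5, 0.8],
    [0.1, 21.0, 0.3],   # accounts labelled adult
    [0.2, 22.5, 0.2],
])
y = np.array([1, 1, 0, 0])  # 1 = under 18

model = LogisticRegression().fit(X, y)

# A new signed-in user's signals produce a probability; accounts above
# a threshold would receive the protective defaults described above.
p_minor = model.predict_proba([[0.75, 16.5, 0.85]])[0, 1]
print(f"estimated probability under 18: {p_minor:.2f}")
if p_minor > 0.5:
    print("apply minor protections (ads, recommendations, well-being)")
```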

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and age-appropriate content curation.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!