AI cloaking helps hackers dodge browser defences

Cybercriminals increasingly use AI-powered cloaking tools to bypass browser security systems and trick users into visiting scam websites.

These tools conceal malicious content from automated scanners and show it only to human visitors, making phishing attacks and malware delivery harder to detect.

Platforms such as Hoax Tech and JS Click Cloaker are being used to filter web traffic and serve fake pages to victims while hiding them from security systems.

The AI behind these services analyses a visitor’s browser, location, and behaviour before deciding which version of a site to display.

Known as white page and black page cloaking, the technique shows harmless content to detection tools and harmful pages to real users. As a result, fraudulent sites stay online longer, boosting both the reach and the lifespan of cyberattacks.
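
To make the mechanics concrete, here is a heavily simplified sketch of the decision logic such services are reported to apply. The signal names and thresholds are invented for illustration; commercial cloakers combine far richer fingerprinting with trained models.

```python
# Simplified illustration of white/black page cloaking logic.
# Signal names and thresholds are hypothetical; commercial services
# combine far richer fingerprinting with machine learning models.

SCANNER_UA_HINTS = ("headless", "bot", "crawler", "preview", "python-requests")

def looks_automated(request: dict) -> bool:
    """Crude heuristic: flag likely scanners rather than human visitors."""
    ua = request.get("user_agent", "").lower()
    if any(hint in ua for hint in SCANNER_UA_HINTS):
        return True
    # Data-centre IP ranges and zero mouse movement are common tells.
    if request.get("ip_is_datacenter") or request.get("mouse_events", 0) == 0:
        return True
    return False

def select_page(request: dict) -> str:
    # Scanners get the harmless "white page"; humans get the "black page".
    return "white_page.html" if looks_automated(request) else "black_page.html"

# Example: a security crawler is served the benign page.
print(select_page({"user_agent": "SafetyCrawler/1.0 (bot)", "mouse_events": 0}))
```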

Experts warn that cloaking is no longer a fringe method but a core part of cybercrime, now available as a commercial service. As these tactics grow more sophisticated, the pressure increases on browser developers to improve detection and protect users more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
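
Google has not published the model itself, but the description amounts to a supervised classifier over behavioural features. A toy sketch of that kind of pipeline, with entirely hypothetical features, synthetic data, and an arbitrary model choice, might look like this:

```python
# Hypothetical sketch of behavioural age estimation, loosely following
# the article's description (search and viewing signals -> under-18 flag).
# Feature names, data, and model choice are illustrative, not Google's.
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

rng = np.random.default_rng(0)
# Toy features: [share of gaming queries, avg session hour, short-video ratio]
X = rng.random((500, 3))
y = (X[:, 0] + X[:, 2] > 1.1).astype(int)  # synthetic "likely under 18" label

model = GradientBoostingClassifier().fit(X, y)
prob_minor = model.predict_proba([[0.8, 0.6, 0.7]])[0, 1]
if prob_minor > 0.5:
    print("Flag account: apply minor protections, offer age verification")
```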

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft’s Cloud and AI strategy lifts revenue beyond expectations

Microsoft has reported better-than-expected results for the fourth quarter of its 2025 fiscal year, attributing much of its success to the continued expansion of its cloud services and the integration of AI.

‘Cloud and AI are the driving force of business transformation across every industry and sector,’ said Satya Nadella, Microsoft’s chairman and chief executive, in a statement on Wednesday.

For the first time, Nadella disclosed annual revenue figures for Microsoft Azure, the company’s cloud computing platform. Azure generated more than $75 billion in the fiscal year ended 30 June, representing a 34 percent increase compared to the previous year.

Nadella noted that this growth was ‘driven by growth across all workloads’, including those powered by AI. On average, Azure contributed approximately $19 billion in revenue per quarter.

While this trails Amazon Web Services (AWS), which posted net sales of $29 billion in the first quarter alone, Azure remains a strong second in the cloud market. Google Cloud, by comparison, has an annual run rate of $50 billion, according to parent company Alphabet’s Q2 2025 earnings report.

‘We continue to lead the AI infrastructure wave and took share each quarter this year,’ Nadella told investors during the company’s earnings call.

However, he did not provide specific figures showing how AI factored into the results, a point of interest for financial analysts given Microsoft’s projected $80 billion in capital expenditures this fiscal year to support AI-related data centre expansion.

During the call, Bernstein Research senior analyst Mark Moerdler asked how businesses might ultimately monetise AI as a software service.

Nadella responded with a broad comparison to the cloud business, suggesting the two were now deeply connected. It was left to CFO Amy Hood to offer a more structured explanation.

‘There’s a per-user logic,’ Hood explained. ‘There are tiers of per-user. Sometimes those tiers relate to consumption. Sometimes there are pure consumption models. I think you’ll continue to see a blending of these, especially as the AI model capability grows.’

In essence, Microsoft intends to monetise AI in a manner similar to its traditional software offerings—charging either per user, by usage tier, or based on consumption.
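
Hood’s answer maps onto three familiar billing formulas. A rough sketch with invented prices (these are not Microsoft’s rates) shows how the per-user, tiered, and pure consumption models differ:

```python
# Illustrative billing maths for the three models Hood described.
# All prices are invented for the example; they are not Microsoft's rates.

def per_user(users: int, price: float = 30.0) -> float:
    return users * price  # flat monthly fee per seat

def tiered(users: int) -> float:
    # Tiers that bundle rising consumption allowances into the seat price.
    rate = 20.0 if users <= 100 else 16.0 if users <= 1000 else 12.0
    return users * rate

def consumption(tokens_millions: float, price_per_million: float = 2.0) -> float:
    return tokens_millions * price_per_million  # pay only for what is used

print(per_user(250), tiered(250), consumption(1500))
```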

With AI now embedded across Microsoft’s portfolio of products and services, the company appears positioned to attribute a growing share of its revenue to AI-powered innovation.

The numbers suggest there is plenty of revenue to go around. Microsoft posted $76.4 billion in revenue for the quarter, up 18 percent compared to the same period last year.

Operating income stood at $34.3 billion (up 23 percent), with net income reaching $27.2 billion (up 24 percent). Earnings per share climbed 24 percent to $3.65.

For the full fiscal year, Microsoft reported $281.7 billion in revenue—an increase of 15 percent. Operating income rose to $128.5 billion (up 17 percent), while net income hit $101.8 billion (up 16 percent). Annual earnings per share reached $13.64, also up by 16 percent.

Azure forms part of Microsoft’s Intelligent Cloud division, which generated $29.9 billion in quarterly revenue, a 26 percent year-on-year increase.

The Productivity and Business Processes group, which includes Microsoft 365, LinkedIn, and Dynamics, earned $33.1 billion, up 16 percent. Meanwhile, the More Personal Computing segment, covering Windows, Xbox, and advertising, grew nine percent to $13.5 billion.

Despite some concerns among analysts regarding Microsoft’s significant capital spending and the ambiguous short-term returns on AI investments, investor confidence remains strong.

Microsoft’s share price jumped roughly eight percent after the earnings announcement, pushing its market capitalisation above $4 trillion in after-hours trading. It became only the second company, after Nvidia, to cross that symbolic threshold.

Market observers noted that while questions remain over the precise monetisation of AI, Microsoft’s aggressive positioning in cloud infrastructure and AI services has clearly resonated with shareholders.

With AI now woven into the company’s strategic fabric, Microsoft appears determined to maintain its lead in the next phase of enterprise computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan university launches smart farming lab

A new AI-powered agriculture lab has opened at the National Pingtung University of Science and Technology in southern Taiwan. The facility features cutting-edge sensors and automation systems designed to boost smart farming capabilities.

Funded by a donation from Taiwan Hipoint, the lab enables real-time monitoring of crop conditions and automated adjustments to growing environments. The AI system analyses sensor and image data to optimise greenhouse conditions and detect early signs of pests or diseases.
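
The article does not describe the lab’s software stack, but the loop it outlines (read sensors, compare against target ranges, adjust the environment, flag anomalies) is simple to sketch. The sensor names and thresholds below are invented for illustration:

```python
# Hypothetical control loop for an AI-assisted greenhouse, sketching the
# monitor-and-adjust cycle described in the article. Sensor names,
# thresholds, and actions are invented for illustration.

TARGETS = {"temp_c": (22.0, 28.0), "humidity_pct": (60.0, 80.0)}

def read_sensors() -> dict:
    return {"temp_c": 29.5, "humidity_pct": 55.0}  # stand-in for real drivers

def adjust(reading: dict) -> list[str]:
    actions = []
    for key, (low, high) in TARGETS.items():
        value = reading[key]
        if value > high:
            actions.append(f"lower {key}: {value} above {high}")
        elif value < low:
            actions.append(f"raise {key}: {value} below {low}")
    return actions

for action in adjust(read_sensors()):
    print(action)
```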

Specialised chambers inside the lab simulate various environmental conditions, helping researchers identify ideal settings for plant growth. University staff say the technology is expected to play a crucial role in making agriculture more precise and resource-efficient.

The university also hosted a hands-on greenhouse training camp and showcased its innovations at a major food expo. Located near key research centres, the university aims to become Taiwan’s leading hub for agricultural technology and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, instead of relying on those legacy defences, firms are switching to cloud-first strategies that target vulnerabilities at endpoints and prevent misconfigurations.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI won’t replace coaches, but it will replace coaching without outcomes

Many coaches believe AI could never replace the human touch. They pride themselves on emotional intelligence — their empathy, intuition, and ability to read between the lines. They consider these traits irreplaceable. But that belief could be costing them their business.

The reason AI poses a real threat to coaching isn’t because machines are becoming more human. It’s because they’re becoming more effective. And clients aren’t hiring coaches for human connection — they’re hiring them for outcomes.

People seek coaches to overcome challenges, make decisions, or experience a transformation. They want results — and they want them as quickly and painlessly as possible. If AI can deliver those results faster and more conveniently, many clients will choose it without hesitation.

So what should coaches do? They shouldn’t ignore AI, fear it, or dismiss it as a passing fad. Instead, they should learn how to integrate it. Live, one-to-one sessions still matter. They provide the deepest insights and most lasting impact. But coaching must now extend beyond the session.

Coaching must be supported by systems that make success inevitable — and AI is the key to building those systems. Here lies a fundamental disconnect: coaches often believe their value lies in personal connections.

Clients, on the other hand, value results. The gap is where AI is stepping in — and where forward-thinking coaches are stepping up. Currently, most coaches are trapped in a model that trades time for money. More sessions, they assume, equals more transformation.

However, this model doesn’t scale. Many are burning out trying to serve everyone personally. Meanwhile, the most strategic among them are turning their coaching into scalable assets: digital products, automated workflows, and AI-trained tools that do their job around the clock.

They’re not being replaced by AI. They’re being amplified by it. These coaches are packaging their methods into online courses that clients can revisit between sessions. They’re building tools that track client progress automatically, offering midnight reassurance when doubts creep in.

They’re even training AI on their own frameworks, allowing clients to access support informed by the coach’s actual thinking — not generic chatbot responses. This business model isn’t science fiction. It’s already happening.

AI can be trained on your transcripts, methodologies, and session notes. It can conduct initial assessments and reinforce your teachings between meetings. Your clients receive consistent, on-demand support — and you free up time for the deep, human work only you can do.
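
In practice, ‘training AI on your materials’ usually means retrieval: index the transcripts and notes, find the passages most relevant to a client’s question, and feed them to a language model as context. A minimal sketch of the retrieval step, with an invented corpus and query:

```python
# Minimal retrieval sketch: surface a coach's own material for a client
# question. A production system would pass the retrieved passages to a
# language model; this stops at retrieval. Corpus and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "When doubt creeps in, revisit the wins log before the next decision.",
    "Month-three stall: renegotiate scope before motivation dips.",
    "Initial assessment: clarify the outcome, the deadline, the obstacle.",
]
vectorizer = TfidfVectorizer().fit(notes)
doc_vectors = vectorizer.transform(notes)

query = "I'm doubting myself at midnight, what would my coach say?"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
print(notes[scores.argmax()])  # best-matching passage from the coach's notes
```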

Coaches who embrace this now will dominate their niches tomorrow. Even the content generated from coaching sessions is underutilised. Every call contains valuable insights — breakthroughs, reframes, moments of clarity.

The insights shouldn’t stay confined to just one client. Strip away personal details, extract the universal truths, and turn those insights into content that attracts your next ideal client. AI can also help you uncover patterns across your coaching history.

Feed your notes into analysis tools, and you might find that 80% of your executive clients hit the same obstacle in month three. Or that a particular intervention consistently delivers rapid breakthroughs.
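
Spotting that kind of pattern is a simple aggregation once session notes are structured. A toy example with invented records:

```python
# Toy pattern analysis over structured session notes (records invented).
from collections import Counter

sessions = [
    {"client": "A", "month": 3, "obstacle": "delegation"},
    {"client": "B", "month": 3, "obstacle": "delegation"},
    {"client": "C", "month": 3, "obstacle": "imposter syndrome"},
    {"client": "A", "month": 5, "obstacle": "scaling"},
]
month_three = [s["obstacle"] for s in sessions if s["month"] == 3]
obstacle, count = Counter(month_three).most_common(1)[0]
print(f"{count}/{len(month_three)} month-three sessions hit: {obstacle}")
```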

The insights help you refine your practice and anticipate challenges before they arise — making your coaching more effective and less reactive. Then there’s the admin. Scheduling, invoicing, progress tracking — all of it can be automated.

Tools like Zapier or Make can automate such repetitive tasks, giving you back hours each week. That’s time better spent on transformation, not operations. Your clients don’t want tradition. They want transformation.

The coaches who succeed in this new era will be those who understand that human insight and AI systems are not in competition. They’re complementary. Choose one area where AI could support your work — a progress tracker, a digital guide, or a content workflow. Start there.

The future of coaching doesn’t belong to the ones who resist AI. It belongs to those who combine wisdom with scalability. Your enhanced coaching model is waiting to be built — and your future clients are waiting to experience it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alignment Project to tackle safety risks of advanced AI systems

The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative aimed at ensuring future AI systems behave in ways aligned with human values and interests.

Called the Alignment Project, the initiative brings together global collaborators including the Canadian AI Safety Institute, Schmidt Sciences, Amazon Web Services (AWS), Anthropic, Halcyon Futures, the Safe AI Fund, UK Research and Innovation, and the Advanced Research and Invention Agency (ARIA).

DSIT confirmed that the project will invest £15 million into AI alignment research – a field concerned with developing systems that remain responsive to human oversight and follow intended goals as they become more advanced.

Officials said this reflects growing concerns that today’s control methods may fall short when applied to the next generation of AI systems, which are expected to be significantly more powerful and autonomous.

The Alignment Project will provide funding through three streams, each tailored to support different aspects of the research landscape. Grants of up to £1 million will be made available for researchers across a range of disciplines, from computer science to cognitive psychology.

A second stream will provide access to cloud computing resources from AWS and Anthropic, enabling large-scale technical experiments in AI alignment and safety.

The third stream focuses on accelerating commercial solutions through venture capital investment, supporting start-ups that aim to build practical tools for keeping AI behaviour aligned with human values.

An expert advisory board will guide the distribution of funds and ensure that investments are strategically focused. DSIT also invited further collaboration, encouraging governments, philanthropists, and industry players to contribute additional research grants, computing power, or funding for promising start-ups.

Science, Innovation and Technology Secretary Peter Kyle said it was vital that alignment research keeps pace with the rapid development of advanced systems.

‘Advanced AI systems are already exceeding human performance in some areas, so it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,’ Kyle said.

‘AI alignment is all geared towards making systems behave as we want them to, so they are always acting in our best interests.’

The announcement follows recent warnings from scientists and policy leaders about the risks posed by misaligned AI systems. Experts argue that without proper safeguards, powerful AI could behave unpredictably or act in ways beyond human control.

Geoffrey Irving, chief scientist at the AI Safety Institute, welcomed the UK’s initiative and highlighted the need for urgent progress.

‘AI alignment is one of the most urgent and under-resourced challenges of our time. Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development,’ he said.

‘Misaligned, highly capable systems could act in ways beyond our ability to control, with profound global implications.’

He praised the Alignment Project for its focus on international coordination and cross-sector involvement, which he said were essential for meaningful progress.

‘The Alignment Project tackles this head-on by bringing together governments, industry, philanthropists, VC, and researchers to close the critical gaps in alignment research,’ Irving added.

‘International coordination isn’t just valuable – it’s necessary. By providing funding, computing resources, and interdisciplinary collaboration to bring more ideas to bear on the problem, we hope to increase the chance that transformative AI systems serve humanity reliably, safely, and in ways we can trust.’

The project positions the UK as a key player in global efforts to ensure that AI systems remain accountable, transparent, and aligned with human intent as their capabilities expand.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan invests NT$50 million to train AI-ready professionals

Taiwan’s Ministry of Economic Affairs has announced the launch of the first phase of its 2025 AI talent training programme, set to begin in August.

The initiative aims to develop 152 skilled professionals capable of supporting businesses in adopting AI technologies across a wide range of sectors.

Chiu Chiu-hui, Director-General of the Industrial Development Administration, said the programme has attracted over 60 domestic and international companies that will contribute instructors and offer internship placements.

Notable participating firms include Microsoft Taiwan, ASE Group, and Acer. Students will be selected from leading universities, such as National Taipei University, National Taipei University of Technology, National Formosa University, and National Cheng Kung University.

Structured as a one-year curriculum, the training is divided into three four-month phases. The initial stage will focus on theoretical foundations and current industry trends.

This stage will be followed by four months of practical application and, finally, four months of on-site corporate internships. Graduates must commit to working for one of the participating companies for at least two years after completing the programme.

Participants will receive financial support throughout their training. A monthly stipend of NT$20,000 (approximately US$673) will be provided during the academic and practical stages, increasing to NT$30,000 during the internship period.

The government has earmarked NT$50 million for the first phase of the programme, and additional co-investment from private companies is being actively encouraged.
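
As a back-of-envelope check (our arithmetic, not the ministry’s), stipends alone for 152 trainees over the year come to roughly NT$42.6 million, consuming most of that envelope if they are funded from it:

```python
# Back-of-envelope stipend cost, assuming (hypothetically) that all 152
# trainees draw the full stipend from the NT$50 million envelope.
trainees = 152
stipend = 8 * 20_000 + 4 * 30_000  # 8 months academic/practical + 4 months internship
total = trainees * stipend
print(f"NT${total:,} of the NT$50,000,000 budget")  # NT$42,560,000
```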

According to Chiu, some Taiwanese firms are struggling to find qualified talent to support their AI ambitions. In response, the ministry trained approximately 70,000 AI professionals last year and has set a lower target of over 50,000 for 2025.

However, the long-term vision remains ambitious — to develop a total of 200,000 AI specialists within the next four years.

Registration for the second phase of the initiative is now open and will close in September. Training will expand to include universities and research institutions across Taiwan, with the next round of classes scheduled to start in October.

Industry leaders have praised the initiative as a timely response to the rapidly evolving technological landscape.

Lee Shu-hsia, Vice President of Human Resources at ASE Group, noted that AI is no longer confined to manufacturing but is increasingly being integrated into various functions such as procurement, human resources, and management.

Such cross-departmental adoption is creating demand for AI-literate professionals who can bridge technical knowledge and operational needs.

Danny Chen, General Manager of Microsoft Taiwan’s public business group, added that the digital transformation underway in many companies has led to a significant increase in demand for AI-related talent.

Chen expressed optimism that the training programme will help companies not only recruit but also retain skilled personnel. The Ministry of Economic Affairs expects participation to grow in the coming years and plans to expand both the scope and scale of the training.

In addition to co-investment, the ministry is exploring partnerships with international institutions to further enhance the programme’s global relevance and ensure alignment with emerging industry standards.

While the government’s long-term goal is to future-proof Taiwan’s workforce, the immediate focus is on plugging the talent gap that threatens to slow industrial innovation.

By linking academic institutions with real-world corporate challenges, the programme aims to produce graduates who are not only technically proficient but also industry-ready from day one.

Observers say the initiative represents a proactive strategy in preparing Taiwan’s economy for the next wave of AI-driven transformation. With AI applications becoming increasingly prevalent in sectors ranging from logistics to administration, building a robust talent pipeline is now viewed as a national priority.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scientists use quantum AI to solve chip design challenge

Scientists in Australia have used quantum machine learning to model semiconductor properties more accurately, potentially transforming how microchips are designed and manufactured.

The hybrid technique combines AI with quantum computing to solve a long-standing challenge in chip production: predicting electrical resistance where metal meets semiconductor.

The Australian researchers developed a new algorithm, the Quantum Kernel-Aligned Regressor (QKAR), which uses quantum methods to detect complex patterns in small, noisy datasets, a common issue in semiconductor research.
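
The report gives no implementation details for QKAR, but kernel regression itself is easy to sketch. Below, a classical radial-basis kernel stands in for the quantum kernel; in the quantum version, each kernel entry would be estimated from the overlap of quantum feature states measured on hardware. The data is synthetic, and this is not the researchers’ code:

```python
# Kernel ridge regression with a precomputed kernel, as a classical
# stand-in for a quantum kernel method like the QKAR described here.
# In the quantum variant, K[i, j] would be estimated from the overlap of
# quantum feature states. Data is synthetic; this is not the paper's code.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X = rng.random((40, 5))  # small, noisy dataset (toy process parameters)
y = X @ rng.random(5) + 0.05 * rng.standard_normal(40)  # toy contact resistance

def kernel(A, B, gamma=1.0):
    # RBF kernel; a quantum kernel would replace this function.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

model = KernelRidge(alpha=1e-3, kernel="precomputed").fit(kernel(X, X), y)
pred = model.predict(kernel(X[:3], X))  # rows: test points, cols: training points
print(pred)
```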

By improving how engineers predict Ohmic contact resistance, the approach could lead to faster, more energy-efficient chips. It is also built for real-world compatibility, meaning it can run on existing quantum machines as the hardware matures.

The findings highlight the growing role of quantum AI in hardware design and suggest the method could be adopted in commercial chip production in the near future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!