AI agent platform NemoClaw targets enterprise tools in Nvidia strategy

Nvidia is reportedly preparing to launch an open-source platform for AI agents, according to sources familiar with the company’s plans.

The platform, internally known as NemoClaw, is being pitched to enterprise software companies and would enable businesses to deploy AI agents to perform tasks for employees. Companies will be able to access the platform regardless of whether their products run on Nvidia hardware.

The initiative comes ahead of Nvidia’s annual developer conference in San Jose next week. The company has reportedly approached firms including Salesforce, Cisco, Google, Adobe, and CrowdStrike to explore potential partnerships for the platform. However, it remains unclear whether any formal agreements have been reached.

Sources say the open-source platform could offer early access to partners in exchange for contributions to the project, while also including built-in security and privacy tools designed for enterprise environments. Interest in the project reflects enthusiasm around open-source AI ‘claws’: agents that run locally and handle multi-step tasks autonomously, with less human oversight.

However, the use of autonomous agents in corporate environments remains controversial. Some technology companies have reportedly restricted their use on work devices due to concerns about unpredictability and security risks.

For Nvidia, NemoClaw may also represent a broader effort to expand its influence beyond hardware. By supporting open-source AI agents, the company could strengthen its position in enterprise AI infrastructure, even as several major AI developers are building their own chips.

Nvidia has not publicly commented on the reported plans. Representatives from several companies linked to the project also declined to comment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven education push reshapes Chungnam National University strategy

Chungnam National University aims to become a leading centre for AI-driven education in Korea as AI reshapes how universities teach, learn, and manage operations. University President Kim Jeong-kyoum said higher education institutions must rethink how they approach AI and prepare for the profound changes AI-driven education is expected to bring across society.

‘AI will undoubtedly bring significant changes across industries and in our daily lives,’ Kim told The Korea Times in a recent interview. ‘Universities need to approach this shift with an open mindset and be ready to accept it. I want Chungnam National University to become a university that uses AI better than anyone else.’

While acknowledging that the phrase ‘AI-leading university’ is increasingly common, Kim said the university’s real priority is integrating AI into teaching practically. The institution is considering incorporating AI-related elements into more than 30 percent of its curriculum to ensure students gain hands-on experience with the technology and support the expansion of AI-driven education across disciplines.

‘We want to teach students how to use AI effectively in practice,’ he said. ‘Professors need to use and understand AI themselves to teach it properly, and students also need systematic training on how to use these tools well.’

Beyond the classroom, the university also plans to introduce AI into administrative systems to improve campus operations. ‘Administration is often the hardest part of a university to change,’ Kim said. ‘That’s why we believe introducing AI into administrative systems first could be particularly meaningful.’

The university is also expanding research through its Glocal Lab project, which aims to strengthen Chungnam National University’s role in AI-driven pharmaceutical and biotechnology research. The initiative is expected to more directly connect academic research with industry and support the development of specialised talent, strengthening the university’s broader ambitions in AI-driven education and innovation.

Kim said, ‘Until now, there have been clear limits to translating the university’s strong basic research into applications in local industries. We expect the Glocal Lab project to help bridge that gap by connecting academic research more directly with the industrial field.’

The project will integrate AI, mathematical sciences, and pharmaceutical and biotechnology research into a unified R&D platform. ‘Ultimately, the Glocal Lab project will help the university grow into a global R&D hub,’ Kim said. ‘By creating high-quality jobs locally, it can also help curb the outflow of talented young people to the Seoul metropolitan area and foster a virtuous cycle of regional settlement and innovation.’

The university is also enhancing internationalisation efforts, aiming to increase the share of international students to 10 percent while expanding global partnerships and strengthening its global profile in AI-driven education. ‘Universities should take the lead in presenting new models in a global society,’ Kim said. ‘By doing so, these ideas can spread beyond campus and ultimately influence local industries and businesses.’

AI deepfake detection expands on YouTube for politicians and journalists

YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.

The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and to request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.

Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.

According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.

‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’

Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.

YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.

‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.

Gigabyte pushes accessible AI computing strategy at Mobile World Congress

Taiwanese computer manufacturer Gigabyte is expanding its AI strategy, focusing on making AI computing more widely accessible. Speaking at the Mobile World Congress in Barcelona, Gigabyte outlined its vision of ‘democratising AI’ by delivering infrastructure that ranges from data centre systems to tools that allow individuals to build and run AI models at home.

‘We believe that AI will be good for everyone when it’s more accessible to more people,’ said Jack Chou, brand marketing specialist at Gigabyte Technology.

Founded in 1986, the company initially built its reputation as one of the world’s leading motherboard manufacturers. It has since expanded into full-stack AI infrastructure, telecom networking systems, and specialised AI supercomputers.

According to Chou, the company’s strategy reflects a shift from traditional consumer computing toward broader empowerment through AI. ‘In the past, we provided computing solutions for end users that might be used more for entertainment and gaming, but now we believe we’re empowering more people with AI computing,’ he said.

Gigabyte is also exploring physical AI systems, including robots for tasks such as automated assembly line monitoring and quality control in manufacturing environments. These systems rely on AI models trained in data centres and deployed through embedded industrial computing platforms that allow machines to interact with real-world environments.

As demand for AI infrastructure grows, Gigabyte is prioritising sustainability by investing in energy-saving cooling technologies such as direct liquid and immersion cooling for its data centres.

Sustainable AI discussed by UNESCO and Saudi leaders under Vision 2030

Leaders from government, academia, and industry gathered to emphasise that sustainable AI must shape efficient, inclusive, and environmentally responsible systems. The discussion focused on embedding sustainability, ethics, and human-centred principles throughout the AI lifecycle by adopting a sustainable-by-design approach.

The workshop built on Saudi Arabia’s expanding role in AI and digital transformation through the Saudi Data & AI Authority (SDAIA) and the National Strategy for Data and AI (NSDAI). These efforts are supported by significant investments in cloud infrastructure and data centres under the Kingdom’s Vision 2030 programme. Participants highlighted that sustainable AI must become a core principle in the development of emerging digital infrastructure and AI-powered services.

Abdulrahman Habib, Director of the International Centre for Artificial Intelligence Research and Ethics (ICAIRE), highlighted Saudi Arabia’s growing leadership in AI ethics and governance. With national AI Ethics Principles and a maturing regulatory landscape, the Kingdom is positioning itself as a global contributor to responsible AI dialogue, translating principles into operational governance systems rather than just policy statements.

Leona Verdadero of UNESCO highlighted two core concepts: Greening with AI, which uses AI to accelerate sustainability, and Greening of AI, which ensures systems are energy-efficient, ethical, and human-centred. She stressed that effective AI governance requires collaboration and industry leadership at every stage of development.

Per Ola Kristensson from the University of Cambridge urged action beyond rhetoric, stressing that true AI sustainability means developing technology to augment, not replace, human potential. Industry presentations reinforced that sustainable AI drives real-world progress: RECYCLEE optimises resource recovery, Remedium reduces environmental impacts in healthcare and infrastructure, and IDOM strengthens sustainability reporting through AI-enhanced design.

UNESCO supports Saudi Arabia’s drive for inclusive, ethical, and sustainable AI ecosystems, framing sustainable AI as critical in the global transition to green digital transformation.

Faisal Al Azib, Executive Director of the UN Global Compact Network Saudi Arabia, stated: ‘As the Kingdom advances its digital transformation under Vision 2030, we have a responsibility to ensure that innovation advances hand in hand with sustainability and human dignity.’

Al Azib concluded: ‘Sustainable AI is central to building resilient, future-ready businesses. Through partnerships with UNESCO and our local ecosystem, we aim to equip companies with the governance tools to embed responsible, energy-efficient, and human-centred AI into their core strategies.’

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Google adds option to disable AI search in Google Photos

Users of Google Photos will now have greater control over how they search their images, after Google introduced a visible toggle that restores the traditional search experience.

The update follows complaints about the AI-powered Ask Photos feature.

Ask Photos was designed to allow users to search for images using natural language queries rather than simple keywords. The tool aimed to make photo searches more flexible, enabling complex queries such as descriptions of people, events or locations captured in images.

However, some users reported that the AI system produced slower results and occasionally failed to locate images that the classic search had previously found more reliably.

Although an option to turn off the AI feature already existed, it was hidden within settings and often overlooked.

The new update introduces a visible switch directly on the search interface. Users can now easily alternate between the AI-powered search and the traditional search system depending on their preferences.

Google said improvements have also been made to the quality of common searches following user feedback. The company emphasised that search remains one of the most frequently used functions within Google Photos and that ongoing updates will continue to refine the experience.

Japan expands strategic investment in AI, quantum computing, and drones

Japan has identified dozens of advanced technologies as priority investment targets as part of an economic strategy led by Sanae Takaichi.

The plan aims to channel public and private capital into industries expected to drive long-term economic growth.

Government officials selected 61 technologies and products for support across 17 strategic sectors. The list includes emerging fields such as AI, quantum computing, regenerative medicine and marine drones.

Many of these technologies are still in early development, but are considered important for economic security and global competitiveness.

The strategy forms a central pillar of Takaichi’s broader economic agenda to strengthen Japan’s industrial base and encourage investment in high-growth sectors. Authorities plan to release spending estimates and implementation timelines by summer as part of a detailed investment roadmap.

Japan has also set ambitious market goals in several sectors. Officials aim to secure more than 30% of the global AI robotics market by 2040 while increasing annual sales of domestically produced semiconductors to ¥40 trillion.

Several Japanese technology companies could benefit from the policy direction. Firms such as Fanuc, Yaskawa Electric and Mitsubishi Electric are integrating AI into industrial robots, while Sony Group produces sensors used in robotic systems.

Chipmakers, including Rohm, Kioxia and Renesas Electronics, may also benefit from increased investment in semiconductor manufacturing and related supply chains.

Despite strong investor interest, analysts note uncertainty about how the programme will be financed, particularly as Japan faces rising spending pressures from social security, defence and public debt.

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.

Digital sovereignty in Asia moves beyond US versus non-US cloud debate

AI, cloud computing, and cross-border data flows have made questions about control and jurisdiction increasingly important for governments and businesses. In Asia, the debate around digital sovereignty often focuses on ‘US versus non-US cloud’ providers or data localisation.

Such simplifications miss the practical challenges organisations face when choosing hosting locations or training AI models while navigating diverse regulatory regimes.

At the same time, Asia’s digital economy is building its own regulatory foundations. In Vietnam and Indonesia, new rules such as Vietnam’s Decree 53 and Indonesia’s data protection framework show how governments are shaping data governance while still relying on global cloud and AI platforms. Most organisations across the region continue to operate using a mix of local, regional, and international providers.

Organisations must address key questions about data jurisdiction and workload mobility when risks change. They must also control who can access sensitive systems during incidents. Digital sovereignty is clearer when seen through three pillars: data sovereignty, technical sovereignty, and operational sovereignty.

Data sovereignty is about jurisdiction, not just data storage. As AI regulation expands, businesses need to know which authorities can access their data and how it may be used. Technical sovereignty is the ability to move or redesign systems as regulations or geopolitics shift. Multi-cloud and hybrid strategies help organisations remain adaptable.

Operational sovereignty focuses on governance and control. It addresses who can access systems, from where, and under what safeguards, thus linking sovereignty directly to cybersecurity and incident response.

For Asia-Pacific organisations, digital sovereignty should not be a simple procurement checklist. Instead, it should guide cloud and AI strategies from the start, ensuring legal clarity, technical flexibility, and operational trust as the digital landscape evolves.
