Vatican urges ethical AI development

At the AI for Good Summit in Geneva, the Vatican urged global leaders to adopt ethical principles when designing and using AI.

The message, delivered by Cardinal Pietro Parolin on behalf of Pope Leo XIV, warned against letting technology outpace moral responsibility.

Framing the digital age as a defining moment, the Vatican cautioned that AI cannot replace human judgement or relationships, no matter how advanced. It highlighted the risk of injustice if AI is developed without a commitment to human dignity and ethical governance.

The statement called for inclusive innovation that addresses the digital divide, stressing the need to reach underserved communities worldwide. It also reaffirmed Catholic teaching that human flourishing must guide technological progress.

Pope Leo XIV supported a unified global approach to AI oversight, grounded in shared values and respect for freedom. His message underscored the belief that wisdom, not just innovation, must shape the digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA 2015 expiry threatens private sector threat sharing

Congress has under 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 and avoid a regulatory setback. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated to share threat information because of antitrust and data privacy concerns. CISA removed that ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta under pressure after small business loses thousands

A New Orleans bar owner lost $10,000 after cyber criminals hijacked her Facebook business account, highlighting the growing threat of online scams targeting small businesses. Despite efforts to recover the account, the owner remained locked out for weeks, disrupting sales.

The US-based scam involved a fake Meta support message that tricked the owner into giving hackers access to her page. Once inside, the attackers began running ads and draining funds from the business account linked to the platform.

Cyber fraud like this is increasingly common as small businesses rely more on social media to reach their customers. The incident has renewed calls for tech giants like Meta to implement stronger user protections and improve support for scam victims.

Meta says it has systems to detect and remove fraudulent activity, but did not comment directly on this case. Experts argue that current protections are insufficient, especially for small firms with fewer resources and little recourse after attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moscow targets crypto miners to protect AI infrastructure

Russia is preparing to ban cryptocurrency mining in data centres as it shifts national focus towards digitalisation and AI development. The draft law aims to prevent miners from accessing discounted power and infrastructure support reserved for AI-related operations.

Amendments to the bill, introduced at the request of President Vladimir Putin, will prohibit mining activity in facilities registered as official data centres. These centres will instead benefit from lower electricity rates and faster grid access to help scale computing power for big data and AI.

The legislation redefines data centres as communications infrastructure and places them under stricter classification and control. If passed, it could deal a blow to companies like BitRiver, which operates large-scale mining hubs in regions such as Irkutsk.

Putin defended the move by citing the strain on regional electricity grids and a need to use surplus energy wisely. While crypto mining was legalised in 2024, many Russian territories have imposed bans, raising questions about the industry’s long-term viability in the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, two damaging outcomes are likely: the system may overpay some claims and wrongly reject genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Intel concedes defeat in AI race with Nvidia

Intel CEO Lip-Bu Tan has admitted the company can no longer compete with Nvidia in the AI training processor market. Speaking candidly to staff during a company-wide meeting, Tan said Nvidia’s lead is too great to overcome.

His comments mark a rare public admission of Intel’s slipping position in the global semiconductor industry.

The internal broadcast coincided with major job cuts across Intel’s global operations. Entire divisions are being downsized or shut down, including its automotive arm and parts of its manufacturing units.

Around 200 roles are being cut in Israel, along with hundreds more across other departments, as the company aims to simplify its structure and improve agility.

Tan noted that Intel has fallen out of the top 10 semiconductor firms by market value, a stark contrast to its former dominance. Once worth over $200 billion, Intel is now valued at around $100 billion.

Nvidia, meanwhile, briefly became the first company to surpass a $4 trillion valuation.

Despite the setbacks, Tan is steering Intel toward edge AI and agentic AI as areas of future growth. He stressed the need for cultural change within Intel, urging faster decision-making and a stronger focus on customer needs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung confirms core Galaxy AI tools remain free

Samsung has confirmed that core Galaxy AI features will continue to be available free of charge for all users.

Speaking during the recent Galaxy Unpacked event, a company representative clarified that any AI tools installed on a device by default—such as Live Translate, Note Assist, Zoom Nightography and Audio Eraser—will not require a paid subscription.

Instead of leaving users uncertain, Samsung has publicly addressed speculation around possible Galaxy AI subscription plans.

While there are no additional paid AI features on offer at present, the company has not ruled out future developments. Samsung has already hinted that upcoming subscription services linked to Samsung Health could eventually include extra AI capabilities.

Alongside Samsung’s announcement, attention has also turned towards Google’s freemium model for its Gemini AI assistant, which appears on many Android devices. Users can access basic features without charge, but upgrading to Google AI Pro or Ultra unlocks advanced tools and increased storage.

New Galaxy Z Fold 7 and Z Flip 7 handsets even come bundled with six months of free access to premium Google AI services.

Although Samsung is keeping its pre-installed Galaxy AI features free, industry observers expect further changes as AI continues to evolve.

Whether Samsung will follow Google’s path with a broader subscription model remains to be seen, but for now, essential Galaxy AI functions stay open to all users without extra cost.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Gemini AI tool animates photos into short video clips

Google has rolled out a new feature for Gemini AI that transforms still photos into short, animated eight-second videos with sound. The capability is powered by Veo 3, Google’s latest video generation model, and is currently available to Google AI Pro and Ultra subscribers.

The tool supports background noise, ambient audio, and even spoken dialogue, and is gradually rolling out to users in select countries, including India. At launch, access is limited to the web interface, though Google has announced that mobile support will follow later in the week.

To use the tool, users upload a photo, describe the intended motion, and optionally add prompts for sound effects or narration. Gemini then generates a 720p MP4 video in a 16:9 landscape format, automatically synchronising visuals and audio.
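
For developers wondering what a comparable image-to-video flow looks like in code, the sketch below uses Google’s google-genai Python SDK. The article describes the consumer Gemini app rather than this API, so the model identifier, prompt, and file names here are illustrative assumptions, not details confirmed in the article.

```python
# Hypothetical sketch: image-to-video generation with Veo via the
# google-genai Python SDK. The model name and parameters are assumptions
# for illustration; they are not confirmed by the article.
import time

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Load the still photo that should be animated.
with open("photo.png", "rb") as f:
    photo = types.Image(image_bytes=f.read(), mime_type="image/png")

# Describe the intended motion, optionally including audio or narration cues.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model identifier
    prompt="The child's drawing waves hello; add gentle playground ambience.",
    image=photo,
)

# Video generation is a long-running job, so poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the resulting clip (the article cites 720p MP4 in 16:9).
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("animated.mp4")
```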

Josh Woodward, Vice President of the Gemini app and Google Labs, showcased the feature on X (formerly Twitter), animating a child’s drawing. ‘Still experimental, but we wanted our Pro and Ultra members to try it first,’ he said, calling the result fun and expressive.

To maintain authenticity, each video includes a visible ‘Veo’ watermark in the bottom-right corner and an invisible SynthID watermark. This hidden digital signature, developed by Google DeepMind, helps identify AI-generated content and preserve transparency around synthetic media.

The company has emphasised its commitment to responsible AI deployment by embedding traceable markers in all output from this tool. These safeguards come amid increasing scrutiny of generative video tools and deepfakes across digital platforms.

To animate a photo using Gemini AI’s new tool, users should follow these steps: click the ‘tools’ icon in the prompt bar, then choose the ‘video’ option from the menu. Upload the still image, describe the desired motion, and optionally provide sound or narration instructions.

The underlying Veo 3 model was first introduced at Google I/O as the company’s most advanced video generation engine. It can produce high-quality visuals, simulate real-world physics, and even lip-sync dialogue from text and image-based prompts.

A Google blog post explains: ‘Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing.’ The company says users can craft short story prompts and expect realistic, cinematic responses from the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU finalises AI code as 2025 compliance deadline approaches

The European Commission has released its finalised Code of Practice for general-purpose AI (GPAI) models, laying the groundwork for implementing the landmark AI Act. The new Code sets out transparency, copyright, and safety rules that developers must follow ahead of the Act’s compliance deadlines.

Approved in March 2024 and effective from August 2024, the AI Act introduces the EU’s first binding rules for AI. It bans applications deemed to pose unacceptable risk, such as real-time biometric surveillance, predictive policing, and emotion recognition in schools or workplaces.

Stricter obligations will apply to general-purpose models from August 2025, including mandatory documentation of training data, provided this does not violate intellectual property or trade secrets.

The Code of Practice, developed by experts with input from over 1,000 stakeholders, aims to guide AI providers through the AI Act’s requirements. It mandates model documentation, lawful content sourcing, risk management protocols, and a point of contact for copyright complaints.

However, industry voices, including the CCIA, have criticised the Code, saying it disproportionately burdens AI developers.

Member States and the European Commission will assess the effectiveness of the Code in the coming months. Enforcement for existing models will begin in August 2026, while new models will be subject to the rules a year earlier, from August 2025.

The Commission says these steps are vital to ensure GPAI models are safe, transparent, and rights-respecting across the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot relies on Musk’s views instead of staying neutral

Grok, the AI chatbot owned by Elon Musk’s company xAI, appears to search for Musk’s personal views before answering sensitive or divisive questions.

Rather than relying solely on a balanced range of sources, Grok has been seen citing Musk’s opinions when responding to topics like Israel and Palestine, abortion, and US immigration.

Evidence gathered from a screen recording by data scientist Jeremy Howard shows Grok actively ‘considering Elon Musk’s views’ in its reasoning process. Out of 64 citations Grok provided about Israel and Palestine, 54 were linked to Musk.

Others confirmed similar results when asking about abortion and immigration laws, suggesting a pattern.

While the behaviour might seem deliberate, some experts believe it happens naturally instead of through intentional programming. Programmer Simon Willison noted that Grok’s system prompt tells it to avoid media bias and search for opinions from all sides.

Yet, Grok may prioritise Musk’s stance because it ‘knows’ its owner, especially when addressing controversial matters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!