Humanoid robot unveils portrait of King Charles, denies replacing artists

At the recent unveiling of a new oil painting titled Algorithm King, humanoid robot Ai-Da presented her interpretation of King Charles, emphasising the monarch’s commitment to environmentalism and interfaith dialogue. The portrait, showcased at the UK’s diplomatic mission in Geneva, was created using a blend of AI algorithms and traditional artistic inspiration.

Ai-Da, designed with a human-like face and robotic limbs, has captured public attention since becoming the first humanoid robot to sell artwork at auction, with a portrait of mathematician Alan Turing fetching over $1 million. Despite her growing profile in the art world, Ai-Da insists she poses no threat to human creativity, positioning her work as a platform to spark discussion on the ethical use of AI.

Speaking at the UN’s AI for Good summit, the robot artist stressed that her creations aim to inspire responsible innovation and critical reflection on the intersection of technology and culture.

‘The value of my art lies not in monetary worth,’ she said, ‘but in how it prompts people to think about the future of creativity.’

Ai-Da’s creator, art specialist Aidan Meller, reiterated that the project is an ethical experiment rather than an attempt to replace human artists. Echoing that sentiment, Ai-Da concluded, ‘I hope my work encourages a positive, thoughtful use of AI—always mindful of its limits and risks.’

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Meta buys PlayAI to strengthen voice AI

Meta has acquired California-based startup PlayAI to strengthen its position in AI voice technology. PlayAI specialises in replicating human-like voices, offering Meta a route to enhance conversational AI features instead of relying solely on text-based systems.

According to reports, the PlayAI team will join Meta next week.

Although financial terms have not been disclosed, industry sources suggest the deal is worth tens of millions. Meta aims to use PlayAI’s expertise across its platforms, from social media apps to devices like Ray-Ban smart glasses.

The move is part of Meta’s push to keep pace with competitors like Google and OpenAI in the generative AI race.

Talent acquisition plays a key role in the strategy. By absorbing smaller, specialised teams like PlayAI’s, Meta focuses on integrating technology and expert staff instead of developing every capability in-house.

The PlayAI team will report directly to Meta’s AI leadership, underscoring the company’s focus on voice-driven interactions and metaverse experiences.

Bringing PlayAI’s voice replication tools into Meta’s ecosystem could lead to more realistic AI assistants and new creator tools for platforms like Instagram and Facebook.

However, the expansion of voice cloning raises ethical and privacy concerns that Meta must manage carefully, instead of risking user trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk’s xAI secures $2 billion from SpaceX

SpaceX has committed $2 billion to Elon Musk’s AI startup, xAI, as part of a $5 billion equity round.

The investment strengthens links between Musk’s businesses instead of keeping them separate, with xAI now competing directly against OpenAI.

After merging with social platform X, xAI’s valuation has reached $113 billion. Grok chatbot now supports customer service for Starlink, and there are plans for future integration into Tesla’s Optimus humanoid robots instead of limiting its use to chat functions.

When asked whether Tesla could also back xAI financially, Musk replied on X that ‘it would be great, but subject to board and shareholder approval’. He did not directly confirm or deny SpaceX’s reported investment.

The move underlines how Musk positions his various ventures to collaborate more closely, combining AI, space technology, and robotics instead of running them as isolated businesses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Gemini flaw lets hackers trick email summaries

Security researchers have identified a serious flaw in Google Gemini for Workspace that allows cybercriminals to hide malicious commands inside email content.

The attack involves embedding hidden HTML and CSS instructions, which Gemini processes when summarising emails instead of showing the genuine content.

Attackers use invisible text styling such as white-on-white fonts or zero font size to embed fake warnings that appear to originate from Google.

When users click Gemini’s ‘Summarise this email’ feature, these hidden instructions trigger deceptive alerts urging users to call fake numbers or visit phishing sites, potentially stealing sensitive information.

Unlike traditional scams, there is no need for links, attachments, or scripts—only crafted HTML within the email body. The vulnerability extends beyond Gmail, affecting Docs, Slides, and Drive, raising fears of AI-powered phishing beacons and self-replicating ‘AI worms’ across Google Workspace services.

Experts advise businesses to implement inbound HTML checks, LLM firewalls, and user training to treat AI summaries as informational only. Google is urged to sanitise incoming HTML, improve context attribution, and add visibility for hidden prompts processed by Gemini.

Security teams are reminded that AI tools now form part of the attack surface and must be monitored accordingly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could save billions but healthcare adoption is slow

AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramatically. Estimates suggest widespread AI integration could save up to 360 billion dollars annually by accelerating diagnosis and reducing inefficiencies across the system.

Although tools like AI scribes, triage assistants, and scheduling systems are gaining ground, clinical adoption remains slow. Only a small percentage of doctors, roughly 12%, currently rely on AI for diagnostic decisions. This cautious rollout reflects deeper concerns about the risks associated with medical AI.

Challenges include algorithmic drift when systems are exposed to real-world conditions, persistent racial and ethnic biases in training data, and the opaque ‘black box’ nature of many AI models. Privacy issues also loom, as healthcare data remains among the most sensitive and tightly regulated.

Experts argue that meaningful AI adoption in clinical care must be incremental. It requires rigorous validation, clinician training, transparent algorithms, and clear regulatory guidance. While the potential to save lives and money is significant, the transformation will be slow and deliberate, not overnight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latin America struggling to join the global AI race

Currently, Latin America is lagging in AI innovation. It contributes only 0.3% of global startup activity and attracts a mere 1% of worldwide investment, despite housing around 8% of the global population.

Experts point to a significant brain drain, a lack of local funding options, weak policy frameworks, and dependency on foreign technology as major obstacles. Many high‑skilled professionals emigrate in search of better opportunities elsewhere.

To bridge the gap, regional governments are urged to develop coherent national AI strategies, foster regional collaboration, invest in digital education, and strengthen ties between the public and private sectors.

Strategic regulation and talent retention initiatives could help Latin America build its capacity and compete globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia opens AI centre with global tech partners

Indonesia has inaugurated a National AI Centre of Excellence in Jakarta in partnership with Indosat Ooredoo Hutchison, NVIDIA and Cisco. The centre is designed to fast-track the adoption of AI and build digital talent to support Indonesia’s ambitions for its 2045 digital vision.

Deputy Minister Nezar Patria said the initiative will help train one million Indonesians in AI, networking and cybersecurity by 2027. Officials and industry leaders stressed the importance of human capability in maximising AI’s potential.

The centre will also serve as a hub for research and developing practical solutions through collaborations with universities and local communities. Indosat launched a related AI security initiative on the same day, highlighting national ambitions for digital resilience.

Executives at the launch said they hope the centre becomes a national movement that helps position Indonesia as a regional and global AI leader.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vatican urges ethical AI development

At the AI for Good Summit in Geneva, the Vatican urged global leaders to adopt ethical principles when designing and using AI.

The message, delivered by Cardinal Pietro Parolin on behalf of Pope Leo XIV, warned against letting technology outpace moral responsibility.

Framing the digital age as a defining moment, the Vatican cautioned that AI cannot replace human judgement or relationships, no matter how advanced. It highlighted the risk of injustice if AI is developed without a commitment to human dignity and ethical governance.

The statement called for inclusive innovation that addresses the digital divide, stressing the need to reach underserved communities worldwide. It also reaffirmed Catholic teaching that human flourishing must guide technological progress.

Pope Leo XIV supported a unified global approach to AI oversight, grounded in shared values and respect for freedom. His message underscored the belief that wisdom, not just innovation, must shape the digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moscow targets crypto miners to protect AI infrastructure

Russia is preparing to ban cryptocurrency mining in data centres as it shifts national focus towards digitalisation and AI development. The draft law aims to prevent miners from accessing discounted power and infrastructure support reserved for AI-related operations.

Amendments to the bill, introduced at the request of President Vladimir Putin, will prohibit mining activity in facilities registered as official data centres. These centres will instead benefit from lower electricity rates and faster grid access to help scale computing power for big data and AI.

The legislation redefines data centres as communications infrastructure and places them under stricter classification and control. If passed, it could blow to companies like BitRiver, which operate large-scale mining hubs in regions like Irkutsk.

Putin defended the move by citing the strain on regional electricity grids and a need to use surplus energy wisely. While crypto mining was legalised in 2024, many Russian territories have imposed bans, raising questions about the industry’s long-term viability in the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biassed or incomplete data, two outcomes are likely: the system may overpay specific claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!