Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools spanning language, biology and robotics to accelerate innovation and expand access to cutting-edge research.

The new model families (Nemotron, Cosmos, Isaac GR00T and Clara) are designed to empower developers to build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.

Most Greeks have never used AI at work

A new Focus Bari survey shows that AI is still unfamiliar territory for most Greeks.

Although more than eight in ten have heard of AI, 68 percent say they have never used it professionally. The study highlights that Greece is integrating AI into the workplace more slowly than many other countries.

The survey covered 21 nations and found that 83 percent of Greeks know about AI, compared with 17 percent who do not. Only 35 percent feel well-informed, while about one in three admits to knowing little about the technology.

Similar trends appear worldwide, with Switzerland, Mexico, and Romania leading in AI awareness, while countries like Nigeria, Japan, and Australia show limited familiarity.

Globally, almost half of respondents use AI in their everyday lives, yet only one in three applies it in their work. In Greece, that gap remains wide, suggesting that AI is still seen as a distant concept rather than a professional tool.

Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system can offer tailored suggestions and automate parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues integrating generative AI across its ecosystem to simplify production and empower creators at every stage of their workflow.

Elon Musk launches AI-powered Grokipedia to rival Wikipedia

Elon Musk has launched Grokipedia, an AI-driven online encyclopedia developed by his company xAI. The platform, described as an alternative to Wikipedia, debuted on Monday with over 885,000 articles written and verified by AI.

Musk claimed the early version already surpasses Wikipedia in quality and transparency, promising significant improvements with the release of version 1.0.

Unlike Wikipedia’s crowdsourced model, Grokipedia does not allow users to edit content directly. Instead, users can request modifications through xAI’s chatbot Grok, which decides whether to implement changes and explains its reasoning.

Musk said the project’s guiding principle is ‘the truth, the whole truth, and nothing but the truth,’ acknowledging the platform’s imperfections while pledging continuous refinement.

However, Grokipedia’s launch has raised questions about originality. Several entries contain disclaimers crediting Wikipedia under a Creative Commons licence, with some articles appearing nearly identical.

Musk confirmed awareness of the issue and stated that improvements are expected before the end of the year. The Wikimedia Foundation, which operates Wikipedia, responded calmly, noting that human-created knowledge remains at the heart of its mission.

A generative AI model helps athletes avoid injuries and recover faster

Researchers at the University of California, San Diego, have developed a generative AI model designed to prevent sports injuries and assist rehabilitation.

The system, named BIGE (Biomechanics-informed GenAI for Exercise Science), integrates data on human motion with biomechanical constraints such as muscle force limits to create realistic training guidance.

BIGE can generate video demonstrations of optimal movements that athletes can imitate to enhance performance or avoid injury. It can also produce adaptive motions suited for athletes recovering from injuries, offering a personalised approach to rehabilitation.

The model merges generative AI with biomechanically accurate modelling, overcoming the limitations of previous systems, which produced anatomically unrealistic results or required heavy computational resources.

To train BIGE, researchers used motion-capture data of athletes performing squats, converting them into 3D skeletal models with precise force calculations. The project’s next phase will expand to other types of movements and individualised training models.

Beyond sports, researchers suggest the tool could predict fall risks among the elderly. Professor Andrew McCulloch described the technology as ‘the future of exercise science’, while co-author Professor Rose Yu said its methods could be widely applied across healthcare and fitness.

Church of Greece launches AI tool LOGOS for believers

LOGOS, a digital tool developed by the Metropolis of Nea Ionia, Filadelfia, Iraklio and Halkidona alongside the University of the Aegean, has marked the Church of Greece’s entry into the age of AI.

The tool gathers information on questions of Christian faith and provides clear, practical answers, complementing rather than replacing human guidance.

Metropolitan Gabriel, who initiated the project, emphasised that LOGOS does not substitute priests but acts as a guide, bringing believers closer to the Church. He said the Church must engage the digital world, insisting that technology should serve humanity instead of the other way around.

The tool also supports younger users, allowing them to safely access accurate information about Orthodox teachings and counter misleading or harmful content found online. While it cannot hear confessions, it offers prayers and guidance to prepare believers spiritually.

The Church views LOGOS as part of a broader strategy to embrace digital tools responsibly, ensuring that faith remains accessible and meaningful in the modern technological landscape.

At UMN, AI meets ethics, history, and craft

AI is remaking daily life, but it can’t define what makes us human. The liberal arts help us probe ethics, meaning, and power as algorithms scale. At the University of Minnesota Twin Cities, that lens anchors curiosity with responsibility.

In the College of Liberal Arts, scholars are treating AI as both a tool and a textbook. They test its limits, trace its histories, and surface trade-offs around bias, authorship, and agency. Students learn to question design choices rather than just consume outputs.

Linguist Amanda Dalola, who directs the Language Center, experiments with AI as a language partner and reflective coach. Her aim isn’t replacement but augmentation: faster feedback, broader practice, richer cultural context. The point is discernment: knowing when to use AI and when to refuse it.

Statistician Galin Jones underscores the scaffolding beneath the hype. ‘You cannot do AI without statistics,’ he tells students, so the School of Statistics emphasises inference, uncertainty, and validation. Graduates leave fluent in models and in the limits of what models claim.

Composer Frederick Kennedy’s opera I am Alan Turing turns theory into performance. By staging Turing’s questions about machine thought and human identity, the work fuses history, sound design, and code. Across philosophy, music, and more, CLA frames AI as a human story first.

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

OpenAI outlines Japan’s AI Blueprint for inclusive economic growth

A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.

Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.

AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories reduce inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.

OpenAI suggests that Japan’s focus on ethical and human-centred innovation could make it a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.