Fake Gemini AI chatbot used in Google Coin crypto investment scam

Fraudsters are using a fake AI chatbot posing as Google’s Gemini to promote a bogus ‘Google Coin’ cryptocurrency presale. The automated assistant delivers convincing investment projections and directs victims to send irreversible crypto payments.

The scam site copies Google branding and claims the token will surge in value after launch, despite Google having no cryptocurrency project. Visitors are shown fabricated presale stages, countdowns and token sales figures to create urgency.

When questioned about regulatory or company details, the chatbot avoids providing verifiable information and instead repeats scripted claims about security and transparency. Tougher queries are redirected to a supposed ‘manager’, suggesting human operators step in to close larger payments.

Researchers warn that AI tools are making crypto scams more scalable and more challenging to detect. Consumers are urged to verify claims on official websites and to avoid sending digital assets in exchange for promised returns.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rwanda and Anthropic sign AI partnership

Anthropic and the Government of Rwanda have signed a three-year Memorandum of Understanding to expand AI deployment across health, education and public sector services in Rwanda. The agreement marks Anthropic’s first multi-sector government partnership in Africa.

In Rwanda’s health system, Anthropic will support national priorities, including efforts to eliminate cervical cancer and reduce malaria and maternal mortality. Rwanda’s Ministry of Health will work with Anthropic to integrate AI tools aligned with national objectives.

Public sector developer teams in Rwanda will gain access to Claude and Claude Code, alongside training, API credits and technical support. The partnership also formalises an education programme launched in 2025 that provided 2,000 Claude Pro licences to educators in Rwanda.

Officials in Rwanda have said the collaboration focuses on capacity development, responsible deployment and local autonomy. Anthropic stated that investment in skills and infrastructure in Rwanda aims to enable safe and independent use of AI by teachers, health workers and public servants.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI revives historic images in New Brighton with remarkable community engagement

Generative AI is increasingly being used to reinterpret cultural heritage and re-engage communities with their local history. In New Brighton, a creative initiative has digitally restored, colourised, and reanimated archival photographs dating from the Victorian era to the late twentieth century.

The project demonstrates how AI can transform static historical images into moving sequences, making the past more accessible to digital audiences. By combining archival research with creative experimentation, the initiative bridges heritage and contemporary technology.

Public response was immediate and substantial. Within hours of publication, the videos generated tens of thousands of views, hundreds of shares, and extensive social media commentary, reflecting strong community interest.

Beyond numerical engagement, the project prompted residents and former visitors to share personal memories of the pier, fairground, cinemas, and promenade. Organisers described the depth of emotional response as evidence that local identity and civic pride remain deeply rooted.

The initiative forms part of a broader creative revival in New Brighton. Upcoming public art projects, including a large-scale mural celebrating community volunteers, aim to build on this momentum and connect heritage with future regeneration efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia co-founder highlights essential human oversight in the AI era

Human-curated knowledge remains central in the AI era, according to Wikipedia co-founder Jimmy Wales. Speaking at the AI Impact Summit 2026, he stressed that editorial judgement, reliable sourcing, and community debate are essential to maintaining trust. AI tools may assist contributors, but oversight and accountability must remain human-led.

Wikipedia has become part of the digital infrastructure underpinning AI systems. Large language models are extensively trained on its openly licensed content, increasing the platform’s responsibility to safeguard accuracy. Wales emphasised that while AI is now embedded in global information systems, it still depends on human-verified knowledge foundations.

Concerns about reliability and misinformation featured prominently in the discussion. AI systems can fabricate convincing but inaccurate details, highlighting the continued importance of journalism and source verification. Wikipedia’s model, requiring citations and scrutinising source credibility, positions it as a safeguard against rapidly generated false content.

The conversation also addressed bias and language diversity. AI models trained predominantly on English-language data risk marginalising other linguistic communities. Wikipedia’s co-founder pointed to the importance of multilingual knowledge ecosystems and inclusive data practices to ensure global representation in both AI development and online information governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The reality behind AI hype

As governments and tech leaders gather at global forums such as the AI Impact Summit in New Delhi, one assumption dominates the discussion: the more computing power poured into AI, the better it will become. In his blog post ‘“The elephant in the AI room”: Does more computing power really bring more useful AI?’, Jovan Kurbalija questions whether that belief is as solid as it seems.

For years, the AI race has been driven by the idea that ever-larger models and vast GPU farms are the key to progress. That logic has justified enormous energy consumption and multi-billion-dollar investments in data centres. But Kurbalija argues that bigger is not always better, especially when everyday tasks often require far less computational firepower than frontier models provide.

He points out that most people rely on a limited vocabulary and a small set of reasoning tools in their daily work. Smaller, specialised AI systems can already draft emails, summarise meetings, or classify documents effectively. The push for trillion-parameter models, he suggests, may reflect ambition more than necessity.

There are also technical limits to consider. Adding more computing power can lead to diminishing returns, and some prominent researchers doubt that simply scaling up large language models will lead to human-level intelligence. More hardware, Kurbalija notes, does not automatically solve deeper conceptual challenges in AI design.

The economic picture is equally complex. Training cutting-edge proprietary models can cost hundreds of millions of dollars, while newer open-source systems have been developed at a fraction of that price. If cheaper models can deliver similar performance, questions arise about the sustainability of current spending and whether investors are backing efficiency or hype.

Beyond cost and performance lies a broader ethical issue. Even if massive computing power could eventually produce superintelligent systems, the key question is whether society truly needs them. Kurbalija warns that technological possibilities should not be confused with social desirability, and that innovation without a clear purpose can create new risks.

Rather than escalating an arms race for ever-larger models, the blog calls for a shift toward needs-driven design. Right-sized tools, viable business models, and ethical clarity about AI’s role in society may prove more valuable than raw computing muscle.

In challenging the prevailing narrative, Kurbalija urges policymakers and industry leaders to rethink whether the future of AI depends on scale alone or on smarter priorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Government ramps up online safety for children in the UK

The UK government has announced new measures to protect children online, giving parents clearer guidance and support. Prime Minister Keir Starmer said no platform would get a free pass, with illegal AI chatbot content targeted immediately.

New powers, to be introduced through upcoming legislation, will allow swift action following a consultation on children’s digital well-being.

Proposed measures include enforcing social media age limits, restricting harmful features like infinite scrolling, and strengthening safeguards against sharing non-consensual intimate images.

Ministers are already consulting parents, children, and civil society groups. The Department for Science, Innovation and Technology has launched ‘You Won’t Know until You Ask’, a campaign offering advice on safety settings, talking to children, and handling harmful content.

Charities such as the NSPCC and the Molly Rose Foundation welcomed the announcement, stressing the need for swift action on age limits, addictive design, and AI content regulation. Children’s feedback will help shape the new rules, which aim to make the UK a global leader in online safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI features disabled on MEP tablets amid European Parliament security concerns

The European Parliament has disabled AI features on the tablets it provides to lawmakers, citing cybersecurity and data protection concerns. Built-in tools such as writing aids and virtual assistants have been switched off, while third-party apps remain largely unaffected.

The decision follows an assessment highlighting that some AI features send data to cloud services rather than processing it locally.

Lawmakers have been advised to take similar precautions on their personal devices. Guidance includes reviewing AI settings, disabling unnecessary features, and limiting app permissions to reduce exposure of work emails and documents.

Officials stressed that these measures are intended to prevent sensitive data from being inadvertently shared with service providers.

The move comes amid broader European scrutiny of reliance on overseas digital platforms, particularly US-based services. Concerns over data sovereignty and laws such as the US CLOUD Act have amplified fears that personal and sensitive information could be accessed by foreign authorities.

AI tools, which require extensive access to user data, have become a key focus in ongoing debates over digital security in the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Windows 11 gains enterprise 5G management through Ericsson partnership

Ericsson and Microsoft have integrated advanced 5G into Windows 11 to simplify secure enterprise laptop connectivity. The update embeds AI-driven 5G management, enabling IT teams to automate connections and enforce policy-based controls at scale.

The solution combines Microsoft Intune with Ericsson Enterprise 5G Connect, a cloud-based platform that monitors network quality and optimises performance. Enterprises can switch service providers and automatically apply internal connectivity policies.

IT departments can remotely provision eSIMs, prioritise 5G networks, and enforce secure profiles across laptop fleets. Automation reduces manual configuration and ensures consistent compliance across locations and service providers.

The companies say the integration addresses long-standing barriers to adopting cellular-connected PCs, including complexity and fragmented management. Multi-market pilots have preceded commercial availability in the United States, Sweden, Singapore, and Japan.

Additional launches are planned in 2026 across Spain, Germany, and Finland. Executives from both firms describe the collaboration as a step toward AI-ready enterprise devices with secure, always-on connectivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines progress in responsible AI development

Google has published its latest Responsible AI Progress Report, showing how its AI Principles guide research, product development, and business decisions. Rising model capabilities and adoption have shifted the focus from experimentation to real-world industry integration.

Governance and risk management form a central theme of the report, with Google describing a multilayered oversight structure spanning the entire AI lifecycle.

Advanced testing methods, including automated adversarial evaluations and expert review, are used to identify and mitigate potential harms as systems become more personalised and multimodal.

Broader access and societal impact remain key priorities. AI tools are increasingly used in science, healthcare, and environmental forecasting, highlighting their growing role in tackling global challenges.

Collaboration with governments, academia, and civil society is presented as essential for maintaining trust and setting industry standards. Sharing research and tools continues to support responsible AI innovation and broaden its benefits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hyperscale data centres planned under Meta and NVIDIA deal

Meta announced a multiyear partnership with NVIDIA to build large-scale AI infrastructure across on-premises and cloud systems. Plans include hyperscale data centres designed for both training and inference workloads, forming a core part of the company’s long-term AI roadmap.

Deployment will include millions of Blackwell and Rubin GPUs, plus expanded use of NVIDIA CPUs and Spectrum-X networking. According to Mark Zuckerberg, the collaboration is intended to support advanced AI systems and broaden access to high-performance computing capabilities worldwide.

Jensen Huang highlighted the scale of Meta’s AI operations and the role of deep hardware-software integration in improving performance.

Efficiency gains remain a central objective, with Meta increasing the rollout of Arm-based NVIDIA Grace CPUs to improve performance per watt in data centres. Future Vera CPU deployment is being considered to expand energy-efficient computing later in the decade.

Privacy-focused AI development forms another pillar of the partnership. NVIDIA Confidential Computing will first power secure AI features on WhatsApp, with plans to expand across more services as Meta scales AI to billions of users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!