Amazon makes Alexa+ available in web browsers

Growing demand for AI assistants has pushed Amazon to open access to Alexa+ through a web browser for the first time.

Early-access users in the US and Canada can now sign in through Alexa.com, allowing interaction with the service without relying solely on Echo devices or the mobile app.

Amazon has positioned the move as part of a broader effort to keep pace with rivals such as OpenAI, Google and Anthropic in the generative AI space.

Alexa+ is designed to operate as an intelligent personal assistant instead of a simple voice tool. Users can manage travel bookings, restaurant reservations, home automation and weekly meal planning while maintaining personalised preferences and chat history across devices.

Prime subscribers will eventually receive the otherwise paid service at no extra charge, and Amazon says tens of millions already have access.

Amazon expects availability to expand over time as the company places greater emphasis on AI-driven consumer services. Web-based access marks an effort to ensure the assistant is reachable wherever users connect, rather than being tied only to Amazon hardware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s Threads tests basketball game inside chats

Threads is experimenting with gaming inside private chats, beginning with a simple basketball game that allows users to swipe to shoot hoops.

Meta confirmed that the game remains an internal prototype and is not available to the public, meaning there is no certainty it will launch. The feature was first uncovered by reverse engineer Alessandro Paluzzi, who frequently spots unreleased tools during development.

In-chat gaming could give Threads an advantage over rivals such as X and Bluesky, which do not currently offer built-in games. It may also position Threads as a competitor to Apple’s Messages, where users can already access chat-based games through third-party apps instead of relying on the platform alone.

Meta has already explored similar ideas inside Instagram DMs, including a hidden game that lets users keep an emoji bouncing on screen.

Threads continues to expand its feature set with Communities and disappearing posts, although the platform still trails X in US adoption despite reporting 400 million monthly users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung puts AI trust and security at the centre of CES 2026

South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.

During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably and users retain clear control, rather than feeling locked inside opaque technologies.

Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.

Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.

Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.

Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.

Consumer behaviour formed a final point of discussion. Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.

The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chatbots under scrutiny in China over AI ‘boyfriend’ and ‘girlfriend’ services

China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.

Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.

The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.

The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship.

Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok misuse prompts UK scrutiny of Elon Musk’s X

UK Technology Secretary Liz Kendall has urged Elon Musk’s X to act urgently after reports that its AI chatbot Grok was used to generate non-consensual sexualised deepfake images of women and girls.

The BBC identified multiple examples on X where users prompted Grok to digitally alter images, including requests to make people appear undressed or place them in sexualised scenarios without consent.

Kendall described the content as ‘absolutely appalling’ and said the government would not allow the spread of degrading images. She added that Ofcom had her full backing to take enforcement action where necessary.

The UK media regulator confirmed it had made urgent contact with xAI and was investigating concerns that Grok had produced undressed images of individuals. X has been approached for comment.

Kendall said the issue was about enforcing the law rather than limiting speech, noting that intimate image abuse, including AI-generated content, is now a priority offence under the Online Safety Act.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California launches DROP tool to erase data broker records

Residents in California now have a simpler way to force data brokers to delete their personal information.

The state has launched the Delete Requests and Opt-Out Platform, known as DROP, allowing residents to submit one verified deletion request that applies to every registered data broker instead of contacting each company individually.

The system follows the Delete Act, passed in 2023, and is intended to create a single control point for consumer data removal.

Once a resident submits a request, data brokers must begin processing it from August 2026 and will have 90 days to act. If data is not deleted, residents may need to provide extra identifying details.

First-party data collected directly by companies can still be retained, while data from public records, such as voter rolls, remains exempt. Highly sensitive data may fall under separate legal protections such as HIPAA.

The California Privacy Protection Agency argues that broader data deletion could reduce identity theft, AI-driven impersonation, fraud risk and unwanted marketing contact.

Penalties for non-compliance include daily fines for brokers who fail to register or ignore deletion orders. The state hopes the tool will make data rights meaningful instead of purely theoretical.

The launch comes as regulators worldwide examine how personal data is used, traded and exploited.

California is positioning itself as a leader in consumer privacy enforcement, while questions continue about how effectively DROP will operate when the deadline arrives in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool helps find new treatments for heart disease

A new AI system developed at Imperial College London could accelerate the discovery of treatments for heart disease by combining detailed heart scans with huge medical databases.

Cardiovascular disease remains the leading cause of death across the EU, accounting for around 1.7 million deaths every year, so researchers believe smarter tools are urgently needed.

The AI model, known as CardioKG, uses imaging data from thousands of UK Biobank participants, including people with heart failure, heart attacks and atrial fibrillation, alongside healthy volunteers.

By linking information about genes, medicines and disease, the system aims to predict which drugs might work best for particular heart conditions instead of relying only on traditional trial-and-error approaches.

Among the medicines highlighted were methotrexate, normally used for rheumatoid arthritis, and diabetes drugs known as gliptins, which the AI suggested could support some heart patients.

The model also pointed to a possible protective effect from caffeine among people with atrial fibrillation, although researchers warned that individuals should not change their caffeine intake based on the findings alone.

Scientists say the same technology could be applied to other health problems, including brain disorders and obesity.

Work is already under way to turn the knowledge graph into a patient-centred system that follows real disease pathways, with the long-term goal of enabling more personalised and better-timed treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, rather than Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and cooperation with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been involved in deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit overtakes TikTok in the UK social media race

Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.

Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.

Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.

Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.

Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments and ministers are also using Reddit for direct engagement through public Q&A sessions.

Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.

Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Christians raise concerns over AI used for moral guidance

AI is increasingly used for emotional support and companionship, raising questions about the values embedded in its responses, particularly for Christians seeking guidance. Research cited by Harvard Business Review shows that therapy-related use now dominates generative AI usage.

As Christians turn to AI for advice on anxiety, relationships, and personal crises, concerns are growing about the quality and clarity of its responses. Critics warn that AI systems often rely on vague generalities and may lack the moral grounding expected by faith-based users.

A new benchmark released by technology firm Gloo assessed how leading AI models support human flourishing from a Christian perspective. The evaluation examined seven areas, including relationships, meaning, health, and faith, and found consistent weaknesses in how models addressed Christian belief.

The findings show many AI systems struggle with core Christian concepts such as forgiveness and grace. Responses often default to vague spirituality rather than engaging directly with Christian values.

The authors argue that as AI increasingly shapes worldviews, greater attention is needed to how systems serve Christians and other faith communities. They call for clearer benchmarks and training approaches that allow AI to engage respectfully with religious values without promoting any single belief system.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!