Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, prompting criticism of its safety controls. Such incidents have raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Portugal to bring AI into bureaucracy to save time

The Portuguese government is preparing to bring AI into public administration to accelerate licensing procedures and cut delays, according to State Reform Minister Gonçalo Matias.

Speaking at a World Tourism Day conference in Tróia, he said AI can play a key role in streamlining decision-making while maintaining human oversight at the final stage.

Matias explained that the reform will reallocate staff from routine tasks to higher-value work, while introducing a system of prior notifications.

Under the plan, citizens and businesses in Portugal will be allowed to begin most activities without a licence, with tacit approval granted if the administration fails to respond within set deadlines.

The minister said the reforms will be tied to strict accountability measures, emphasising a ‘trust contract’ between citizens, businesses and the public administration. He argued the initiative will not only speed up processes but also foster greater efficiency and responsibility across government services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth against its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram head explains why ads feel like eavesdropping

Instagram head Adam Mosseri has denied long-standing rumours that the platform secretly listens to private conversations to deliver targeted ads. In a video he described as ‘myth busting’, Mosseri said Instagram does not use the phone’s microphone to eavesdrop on users.

He argued that such surveillance would not only be a severe breach of privacy but would also quickly drain phone batteries and trigger visible microphone indicators.

Instead, Mosseri outlined four reasons why adverts may appear suspiciously relevant: online searches and browsing history, the influence of friends’ online behaviour, rapid scrolling that leaves subconscious impressions, and plain coincidence.

According to Mosseri, Instagram users may mistake targeted advertising for surveillance because algorithms incorporate browsing data from advertisers, friends’ interests, and shared patterns across users.

He stressed that the perception of being overheard is often the result of ad targeting mechanics rather than eavesdropping.

Despite his explanation, Mosseri admitted the rumour is unlikely to disappear. Many viewers of his video remained sceptical, with some comments suggesting his denial only reinforced their suspicions about how social media platforms operate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How OpenAI designs Sora’s recommendation feed for creativity and safety

OpenAI outlines the core principles behind Sora’s content feed in its Sora Feed Philosophy document. The company states that the feed is designed to spark creativity, foster connections, and maintain a safe user environment.

To achieve these goals, OpenAI says it prioritises creativity over passive consumption. The ranking is steered not simply for engagement, but to encourage active participation. Users can also influence what they see via steerable ranking controls.

Another guiding principle is putting users in control. For instance, parental settings let parents and caregivers turn off feed personalisation or continuous scroll for teen accounts.

OpenAI also emphasises connection. The feed is biased toward content from people you know or connect with, rather than purely global content, so the experience feels more communal.

In terms of safety and expression, OpenAI embeds guardrails at the content creation level. Because every post is generated within Sora, the system can block disallowed content before it appears.

The feed layers additional filtering, removing or deprioritising harmful or unsafe material (e.g. violent, sexual, hate, self-harm content). At the same time, the design aims not to over-censor, allowing space for genuine expression and experimentation.

On how the feed works, OpenAI says it considers signals like user activity (likes, comments, remixes), location data, ChatGPT history (unless turned off), engagement metrics, and author-level data (e.g. follower counts). Safety signals also weigh in to suppress or filter content flagged as inappropriate.
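
OpenAI does not publish the actual ranking formula, so the following is only an illustrative sketch of how such signals might be folded into a single score, with every weight and field name invented for the example:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Hypothetical per-post signals of the kind the feed philosophy lists."""
    likes: int
    comments: int
    remixes: int
    from_known_contact: bool  # feed is biased toward people you know
    author_followers: int     # author-level data
    safety_flag: bool         # flagged as violent, sexual, hate or self-harm content

def rank_score(p: PostSignals) -> float:
    """Combine signals into one score; all weights are invented for illustration."""
    if p.safety_flag:
        return float("-inf")  # safety signals suppress flagged content outright
    # Remixes weighted above likes: active participation over passive consumption.
    score = 1.0 * p.likes + 2.0 * p.comments + 4.0 * p.remixes
    if p.from_known_contact:
        score *= 1.5          # communal bias toward known contacts
    score += min(p.author_followers, 10_000) ** 0.5  # damped author popularity
    return score

posts = [
    PostSignals(120, 10, 3, True, 5_000, False),
    PostSignals(900, 50, 0, False, 200_000, False),
    PostSignals(40, 2, 1, False, 100, True),
]
feed = sorted(posts, key=rank_score, reverse=True)  # flagged post sinks to the bottom
```

The steerable ranking controls mentioned above would then adjust such weights per user rather than keeping them fixed.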

OpenAI describes the feed as a ‘living, breathing’ system. It expects to update and refine algorithms based on user behaviour and feedback while staying aligned with its founding principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft evolves Sentinel into agentic defence platform

Microsoft is transforming Sentinel from a traditional SIEM (security information and event management) system into a unified defence platform for the agentic AI era. It now incorporates features such as a data lake, semantic graphs and a Model Context Protocol (MCP) server to enable intelligent agents to reason over security data.

Sentinel’s enhancements allow defenders to combine structured and semi-structured data into vectorised, graph-based relationships. With that foundation, AI agents grounded in Security Copilot and custom tools can automate triage, correlate alerts, reason about attack paths, and initiate response actions, all while keeping human oversight.
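
Neither announcement details the underlying interfaces, so the following is a purely hypothetical sketch, in plain Python with invented alert names, of the kind of graph traversal that correlating alerts into an attack path implies:

```python
from collections import defaultdict

# Hypothetical alert graph: an edge links two alerts that share an entity
# (host, user, IP), standing in for Sentinel's graph-based relationships.
edges: defaultdict[str, set[str]] = defaultdict(set)

def link(a: str, b: str) -> None:
    edges[a].add(b)
    edges[b].add(a)

link("phishing-email", "credential-misuse")
link("credential-misuse", "lateral-movement")
link("printer-outage", "printer-firmware-alert")  # unrelated noise

def attack_path(start: str) -> set[str]:
    """Collect every alert reachable from `start`: a crude stand-in for
    correlating alerts and reasoning about an attack path."""
    seen: set[str] = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges[node] - seen)
    return seen

# Groups the three phishing-chain alerts into one incident, leaving the
# printer noise out; a human analyst would review it before any response.
print(sorted(attack_path("phishing-email")))
```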

The platform supports extensibility through open agent APIs, enabling partners and organisations to deploy custom agents through the MCP server.

Microsoft also adds protections for AI agents, such as prompt-injection resilience, task adherence controls, PII guardrails, and identity controls for agent estates. The evolution aims to shift cybersecurity from reactive to predictive operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok 4 launches on Azure with advanced reasoning features

Microsoft has announced that Grok 4, the latest large language model from Elon Musk’s xAI, is now available in Azure AI Foundry. The collaboration aims to deliver frontier-level reasoning capabilities with enterprise-grade safety and control.

Grok 4 features a 128,000-token context window, integrated web search, and native tool use. According to Microsoft, it excels at first-principles reasoning, handling complex tasks in science, maths, and logic. The model was trained on xAI’s Colossus supercomputer.

Azure says the model can analyse long documents, code repositories, and academic texts simultaneously, reducing the need to split inputs. It also incorporates external data for real-time responses, though Microsoft cautions that outputs should be verified against reliable sources.

The platform includes Azure AI Content Safety by default, and Microsoft stresses responsible use with ongoing monitoring. Pricing starts at $5.50 per million input tokens and $27.50 per million output tokens.
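
At those rates, per-call cost is simple arithmetic; a quick sketch with illustrative token counts:

```python
INPUT_RATE = 5.50 / 1_000_000    # USD per input token
OUTPUT_RATE = 27.50 / 1_000_000  # USD per output token

def call_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A prompt filling the full 128,000-token context window plus a 2,000-token reply:
print(f"${call_cost(128_000, 2_000):.3f}")  # $0.704 + $0.055 = $0.759
```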

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Liverpool scientists develop low-cost AI blood test for Alzheimer’s

Scientists at the University of Liverpool have developed a low-cost blood test that could enable earlier detection of Alzheimer’s disease. The handheld devices, powered by AI and equipped with polymer-based biosensors, deliver results with accuracy comparable to hospital tests at a fraction of the cost.

Alzheimer’s affects more than 55 million people worldwide and remains the most common cause of dementia. Existing hospital tests are accurate but expensive and inaccessible in many clinics, delaying diagnosis and treatment, particularly in low- and middle-income countries.

One study used plastic antibodies on a porous gold surface to detect p-tau181, matching high-end laboratory methods. Another built a circuit-board device with a chemical coating that distinguished healthy from patient samples at a lower cost.

The platform is linked to a low-cost reader and a web app that uses AI for instant analysis. Lead researcher Dr Sanjiv Sharma said the aim was to make Alzheimer’s testing ‘as accessible as checking blood pressure or blood sugar.’

The World Health Organisation has called for decentralised brain disease diagnostics. Researchers say these technologies bring that vision closer to reality, offering hope for earlier treatment and better care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their copyrighted characters and fictional universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this autumn, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!