Elon Musk’s AI chatbot, Grok, has undergone repeated changes to its political orientation, with updates shifting its answers towards more conservative views.
xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.
Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.
Critics say that system prompts, short instructions such as ‘be politically incorrect’, make outputs easy to adjust but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI disabled the behaviour.
The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Much like checking your doors before bed, it is wise to review your Google account security to ensure only trusted devices have access. Periodic checks can prevent both hackers and acquaintances from spying on your personal data.
The fastest method is visiting google.com/devices, where you can see all logged-in devices. If one looks suspicious, remove it and immediately change your password to block further access.
You can also navigate manually via your profile settings, under the ‘Security’ tab, to view and manage connected devices. On mobile, the Google app provides the same functionality for reviewing and signing out unfamiliar logins.
Beyond devices, third-party services linked to your Google account pose another risk. Abandoned apps or forgotten integrations may be hijacked by attackers, providing a backdoor to your information.
Cleaning up both devices and linked apps significantly reduces exposure. Regular reviews keep your Google account safe and ensure your data remains under your control.
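For developers who integrate with Google accounts, the third-party clean-up described above can also be done in code through Google’s documented OAuth 2.0 token-revocation endpoint. The sketch below is illustrative, not part of the article’s advice: the token value is an assumption you must supply from your own OAuth flow.

```python
import urllib.error
import urllib.parse
import urllib.request

# Google's documented OAuth 2.0 revocation endpoint.
REVOKE_URL = "https://oauth2.googleapis.com/revoke"


def build_revoke_request(token: str) -> urllib.request.Request:
    """Build the POST request Google expects: the token in a
    form-encoded body."""
    body = urllib.parse.urlencode({"token": token}).encode()
    return urllib.request.Request(
        REVOKE_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )


def revoke(token: str) -> bool:
    """Send the revocation request.

    Google returns HTTP 200 on success and 400 for an invalid or
    already-expired token.
    """
    try:
        with urllib.request.urlopen(build_revoke_request(token)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

Revoking a token cuts off that app’s ongoing access, which mirrors pressing the remove-access button for a linked service under the account’s Security tab.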
Apple is moving forward with its integrated approach to AI by testing an internal chatbot designed for retail training. The company is focusing on embedding AI into existing services rather than launching a consumer-facing chatbot like Google’s Gemini or OpenAI’s ChatGPT.
The new tool, Asa, is being tested within Apple’s SEED app, which offers training resources for store employees and authorised resellers. Asa is expected to improve learning by allowing staff to ask open-ended questions and receive tailored responses.
Screenshots shared by analyst Aaron Perris show Asa handling queries about device features, comparisons, and use cases. Although still in testing, the chatbot is expected to expand across Apple’s retail network in the coming weeks.
The development occurs amid broader AI tensions, as Elon Musk’s xAI sued Apple and OpenAI for allegedly colluding to limit competition. Apple’s focus on internal AI tools like Asa contrasts with Musk’s legal action, highlighting disputes over AI market dominance and platform integration.
China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.
The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to prevent duplication and overlap. Officials in China emphasised the importance of orderly flows of talent, capital, and resources.
The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to avoid the kind of overcapacity seen in electric vehicles, which has fuelled deflationary pressure in other industries.
While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.
At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.
OpenAI is preparing to build a significant new data centre in India as part of its Stargate AI infrastructure initiative. The move will expand the company’s presence in Asia and strengthen its operations in its second-largest market by user base.
OpenAI has already registered as a legal entity in India and begun assembling a local team.
The company plans to open its first office in New Delhi later this year. Details regarding the exact location and timeline of the proposed data centre remain unclear, though CEO Sam Altman may provide further information during his upcoming visit to India.
The project represents a strategic step to support the company’s growing regional AI ambitions.
OpenAI’s Stargate initiative, announced by US President Donald Trump in January, involves private sector investment of up to $500 billion for AI infrastructure, backed by SoftBank, OpenAI, and Oracle.
The initiative seeks to develop large-scale AI capabilities across major markets worldwide, with the India data centre potentially playing a key role in those efforts.
The expansion highlights OpenAI’s focus on scaling its AI infrastructure while meeting regional demand. By establishing local offices and a large data centre, the company intends to strengthen operational efficiency, improve service reliability and support its long-term growth in Asia.
SK Telecom has expanded its partnership with Schneider Electric to develop an AI Data Centre (AIDC) in Ulsan.
Under the deal, Schneider Electric will supply mechanical, electrical and plumbing equipment, such as switchgear, transformers, automated control systems and Uninterruptible Power Supply units.
The agreement builds on a partnership announced at Mobile World Congress 2025 and includes using Schneider’s Electrical Transient Analyser Program within SK Telecom’s data centre management system.
It will allow operations to be optimised through a digital twin model instead of relying only on traditional monitoring tools.
Both companies have also agreed on prefabricated solutions to shorten construction times, reference designs for new facilities, and joint efforts to grow the Energy-as-a-Service business.
A Memorandum of Understanding extends the partnership to other SK Group affiliates, combining battery technologies with Uninterruptible Power Supply and Energy Storage Systems.
Executives said the collaboration would help set new standards for AI data centres and create synergies across the SK Group. It is also expected to support SK Telecom’s broader AI strategy while contributing to sustainable and efficient infrastructure development.
Estonia’s government-backed AI teaching tool, developed under the €1 million TI-Leap programme, faces hurdles before reaching schools. Legal restrictions and waning student interest have delayed its planned September rollout.
Officials in Estonia stress that regulations to protect minors’ data remain incomplete. To ensure compliance, the Ministry of Education is drafting changes to the Basic Schools and Upper Secondary Schools Act.
Yet engagement may prove the bigger challenge. Developers note that students already use mainstream AI tools for homework, while the state model is designed to guide reasoning rather than supply direct answers.
Educators say success will depend on usefulness. The AI will be piloted in 10th and 11th grades, alongside teacher training, as studies have shown that more than 60% of students already rely on AI tools.
Econet is engaging thousands of visitors, including farmers and policymakers, by spotlighting its digital inclusive finance, insurance and smart infrastructure innovations.
The display features EcoCash mobile payments, Moovah Insurance for agricultural and business risks, and digital entertainment platforms. A standout addition is Econet’s smart water meters, which provide real-time monitoring to help farmers and utilities manage water use, minimise waste and support sustainable development in agriculture.
Econet emphasises that these solutions reinforce its vision of empowering communities through accessible technology. Smart infrastructure and financial tools are presented as vital enablers for productivity, resilience and economic inclusion in Zimbabwe’s agricultural sector.
Led by CEO Nick Lahoika, Vocal Image has scaled rapidly, achieving upwards of 4 million downloads and serving approximately 160,000 active users.
Vocal Image positions itself as an affordable, mobile-first alternative to traditional one-on-one voice training, rooted in Lahoika’s own journey overcoming speaking anxiety.
The app’s design lets users practise at home with privacy and convenience, offering daily, bite-sized AI-informed lessons that assess strengths, suggest improvements and build confidence without the need for human instructors.
A hacker exploited Anthropic’s Claude chatbot to automate one of the most extensive AI-driven cybercrime operations yet recorded, targeting at least 17 companies across multiple sectors, the firm revealed.
According to Anthropic’s report, the attacker used Claude Code to identify vulnerable organisations, generate malicious software, and extract sensitive files, including defence data, financial records, and patients’ medical information.
The chatbot then sorted the stolen material, identified leverage for extortion, calculated realistic bitcoin demands, and even drafted ransom notes and extortion emails on behalf of the hacker.
Victims included a defence contractor, a financial institution, and healthcare providers. Extortion demands reportedly ranged from $75,000 to over $500,000, although it remains unclear how much was actually paid.
Anthropic declined to disclose the companies affected but confirmed new safeguards are in place. The firm warned that AI lowers the barrier to entry for sophisticated cybercrime, making such misuse increasingly likely.