Elon Musk wants Grok AI to rewrite historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned that the idea set a dangerous precedent in which ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

LinkedIn has seen explosive growth in AI-related job demand and skills despite the hesitation around AI-assisted writing. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart’.


OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternative options, including developing its own AI tools and working with rivals like Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.


IGF 2025 opens in Norway with focus on inclusive digital governance

Norway will host the 20th annual Internet Governance Forum (IGF) from 23 to 27 June 2025 in a hybrid format, with the main venue set at Nova Spektrum in Lillestrøm, just outside Oslo.

This milestone event marks two decades of the UN-backed forum that brings together diverse stakeholders to discuss how the internet should be governed for the benefit of all.

The overarching theme, Building Governance Together, strongly emphasises inclusivity, democratic values, and sustainable digital cooperation.

With participation expected from governments, the private sector, civil society, academia, and international organisations, IGF 2025 will continue to promote multistakeholder dialogue on critical topics, including digital trust, cybersecurity, AI, and internet access.

A key feature will be the IGF Village, where companies and organisations will showcase technologies and products aligned with global internet development and governance.

Norway’s Minister of Digitalisation and Public Governance, Karianne Oldernes Tung, underlined the significance of this gathering in light of current geopolitical tensions and the forthcoming WSIS+20 review later in 2025.

Reaffirming Norway’s support for the renewal of the IGF mandate at the UN General Assembly, Minister Tung called for unity and collaborative action to uphold an open, secure, and inclusive internet. The forum aims to assess progress and help shape the next era of digital policy.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

WhatsApp ad rollout in EU slower than global pace amid privacy scrutiny

Meta is gradually rolling out advertising features on WhatsApp globally, starting with the Updates tab, where users follow channels and may see sponsored content.

Although the global rollout remains on track, the Irish Data Protection Commission has indicated that a full rollout across the EU will not occur before 2026. The delay reflects ongoing regulatory scrutiny, particularly over privacy compliance.

Concerns have emerged regarding how user data from Meta platforms like Facebook, Instagram, and Messenger might be used to target ads on WhatsApp.

Privacy group NOYB had previously voiced criticism about such cross-platform data use. However, Meta clarified that these concerns are not directly applicable to the current WhatsApp ad model.

According to Meta, integrating WhatsApp with the Meta Account Center—which allows cross-app ad personalisation—is optional and off by default.

If users do not link their WhatsApp accounts, only limited data sourced from WhatsApp (such as city, language, followed channels, and ad interactions) will be used for ad targeting in the Updates tab.

Meta maintains that this approach aligns with EU privacy rules. Nonetheless, regulators are expected to carefully assess Meta’s implementation, especially in light of recent judgments against the company’s ‘pay or consent’ model under the Digital Markets Act.

Meta recently reduced the cost of its ad-free subscriptions in the EU, signalling a willingness to adapt—but the company continues to prioritise personalised advertising globally as part of its long-term strategy.


Telegram founder to divide fortune among over 100 children

Telegram founder Pavel Durov has announced plans to divide his £10.3bn fortune equally among over 100 children he claims to have fathered. Six of these are officially recognised as his children from three partners, while the rest were born through sperm donations.

Durov said he wants to avoid inheritance disputes after his death and has set a 30-year delay before the funds become accessible. He believes this will encourage his children to live independently and develop their own paths without relying on inherited wealth.

Speaking to Le Point, Durov said his career defending freedoms has made him enemies and necessitated early estate planning. He currently faces criminal charges in France over Telegram’s content moderation, which he dismisses as baseless.

Telegram has previously denied failing to cooperate with authorities. Durov insists that others’ use of the platform for criminal activity does not make him or his company liable.


India’s Gen Z founders go viral with AI and robotics ‘Hacker House’ in Bengaluru

A viral video has captured the imagination of tech enthusiasts by offering a rare look inside a ‘Hacker House’ in Bengaluru’s HSR Layout, where a group of Gen Z Indian founders are quietly shaping the future of AI and robotics.

Spearheaded by Localhost, the initiative provides young developers aged 16 to 22 with funding, workspace, and a collaborative environment to rapidly build real-world tech products — no media hype, just raw innovation.

The video, shared by Canadian entrepreneur Caleb Friesen, shows teenage coders intensely focused on their projects. From AI-powered noise-cancelling systems and assistive robots to innovative real estate and podcasting tools, each room in the shared house hums with creativity.

The youngest, 16-year-old Harish, stands out for his deep focus, while Suhas Sumukh, who leads the Bengaluru chapter, acts as both a guide and mentor.

Rather than pitch decks and polished PR, what resonated online was the authenticity and dedication. Caleb’s walk-through showed residents too engrossed in their work to acknowledge his arrival.

Viewers responded with admiration, calling it a rare glimpse into ‘the real future of Indian tech’. The video has since crossed 1.4 million views, sparking global curiosity.

At the heart of the movement is Localhost, founded by Kei Hayashi, which helps young developers build fast and learn faster.

As demand grows for similar hacker houses in Mumbai, Delhi, and Hyderabad, the initiative may start a new chapter for India’s startup ecosystem — fuelled by focus, snacks, and a poster of Steve Jobs.


Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees AI prompts, stop sharing on Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.


EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI in line with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.


North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.
