Musk’s chatbot Grok removes offensive content

Elon Musk’s AI chatbot Grok has removed several controversial posts after they were flagged as anti-Semitic and accused of praising Adolf Hitler.

The deletions followed backlash from users on X and criticism from the Anti-Defamation League (ADL), which condemned the language as dangerous and extremist.

Grok, developed by Musk’s xAI company, sparked outrage after stating Hitler would be well-suited to tackle anti-White hatred and claiming he would ‘handle it decisively’. The chatbot also made troubling comments about Jewish surnames and referred to Hitler as ‘history’s moustache man’.

In response, xAI acknowledged the issue and said it had begun filtering out hate speech before posts go live. The company credited user feedback for helping identify weaknesses in Grok’s training data and pledged ongoing updates to improve the model’s accuracy.

The ADL criticised the chatbot’s behaviour as ‘irresponsible’ and warned that such AI-generated rhetoric fuels rising anti-Semitism online.

It is not the first time Grok has been embroiled in controversy — earlier this year, the bot repeated ‘White genocide’ conspiracy theories, which xAI blamed on an unauthorised software change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT quietly tests new ‘Study Together’ feature for education

A few ChatGPT users have noticed a new option called ‘Study Together’ appearing among available tools, though OpenAI has yet to confirm any official rollout. The feature seems designed to turn ChatGPT into a more interactive educational companion rather than a tool that simply delivers instant answers.

Rather than offering direct solutions, the tool prompts users to think for themselves by asking questions, potentially turning ChatGPT into a digital tutor.

Some speculate the mode might eventually allow multiple users to study together in real-time, mimicking a virtual study group environment.

With the chatbot already playing a significant role in classrooms — helping teachers plan lessons or assisting students with homework — the ‘Study Together’ feature might help guide users toward deeper learning instead of enabling shortcuts.

Critics have warned that AI tools like ChatGPT risk undermining education, so the feature could mark a strategic shift toward encouraging more constructive academic use.

OpenAI has not confirmed when or if the feature will launch publicly, or whether it will be limited to ChatGPT Plus users. When asked, ChatGPT only replied that nothing had been officially announced.


Sam Altman shrugs off Meta poaching, backs Trump, jabs at Musk

OpenAI CEO Sam Altman addressed multiple hot topics during the Sun Valley conference, including Meta’s aggressive recruitment of top AI researchers, his strained relationship with Elon Musk, and a surprising show of support for Donald Trump.

Altman downplayed Meta’s talent raids, saying he had not spoken to Mark Zuckerberg since the Meta CEO lured away three OpenAI researchers with a $100 million signing bonus. All three had worked at OpenAI’s Zurich office, which opened in 2024.

Despite the losses, Altman described the situation as ‘fine’ and ‘good’, suggesting OpenAI’s mission continues to retain top talent.

The OpenAI chief also took a subtle swipe at Meta’s smart glasses, saying he doesn’t like wearable tech and implying his company has no plans to follow suit.

On the topic of Elon Musk, Altman laughed off their rivalry, remarking only that Musk has bust-ups with everybody, a nod to the long-running tension between the two OpenAI co-founders.

Perhaps most notably, Altman expressed disillusionment with the Democratic Party, saying he no longer feels represented by mainstream figures he once supported.

He praised Donald Trump’s focus on AI infrastructure and even donated $1 million to Trump’s inaugural fund, a gesture reflecting a broader shift among Silicon Valley leaders warming to Trump as his popularity rises.


Cybercrime soars as firms underfund defences

Nearly four in ten UK businesses (38%) do not allocate a dedicated cybersecurity budget, even as cybercrime costs hit an estimated £64 billion over three years.

Smaller enterprises are particularly vulnerable, with 15% reporting breaches linked to underfunding.

Almost half of organisations (45%) rely solely on in-house defences, with only 8% securing standalone cyber insurance, exposing many to evolving threats.

Common attacks include phishing campaigns, AI-powered malware and DDoS, yet cybersecurity typically receives just 11% of IT budgets.

Security professionals call for stronger board-level involvement and increased collaboration with specialists and regulators.

They caution that businesses risk suffering further financial and reputational damage without proactive budgeting and external expertise.


OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.
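Distillation, in broad terms, trains a smaller ‘student’ model to imitate a larger ‘teacher’ by matching the teacher’s softened output distributions. The sketch below illustrates only the core loss term; it is a generic, hypothetical example, not OpenAI’s or DeepSeek’s actual method, and all names and values are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Convert logits to probabilities, softened by temperature T."""
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence from the student's distribution to the teacher's.

    Both distributions are softened with the same temperature so the
    student also learns from the teacher's near-miss predictions.
    """
    p = softmax(np.asarray(teacher_logits, dtype=float), T)
    q = softmax(np.asarray(student_logits, dtype=float), T)
    return float(np.sum(p * np.log(p / q)))

# Identical outputs give zero loss; diverging outputs increase it.
```

In practice this loss is averaged over large batches of teacher responses, which is why access controls on model outputs matter: enough sampled outputs can substitute for access to the weights themselves.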

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy restricting internet use without approval. Sensitive projects such as its o1 model are now discussed only by approved staff within designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.


Phishing 2.0: How AI is making cyber scams more convincing

Phishing remains among the most widespread and dangerous cyber threats, especially for individuals and small businesses. These attacks rely on deception—emails, texts, or social messages that impersonate trusted sources to trick people into giving up sensitive information.

Cybercriminals exploit urgency and fear. A typical example is a fake email from a bank saying your account is at risk, prompting you to click a malicious link. Even when emails look legitimate, subtle details—like a strange sender address—can be red flags.
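The ‘strange sender address’ red flag can even be checked programmatically. The sketch below flags sender domains that closely resemble, but do not exactly match, a trusted list; the domain list and similarity threshold are illustrative assumptions, not a production filter:

```python
import difflib

# Hypothetical allow-list of domains the recipient actually deals with.
TRUSTED_DOMAINS = ["netflix.com", "microsoft.com", "quickbooks.intuit.com"]

def lookalike_domain(sender, threshold=0.8):
    """Return the trusted domain a sender's domain imitates, or None.

    An exact match is trusted; a near-match (e.g. an extra or swapped
    letter) is treated as a possible spoof.
    """
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return None  # exact match: not a lookalike
    for trusted in TRUSTED_DOMAINS:
        if difflib.SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return trusted  # close but not equal: possible spoof
    return None
```

Real mail filters combine many more signals (SPF/DKIM/DMARC results, link reputation, content analysis); a string-similarity check like this catches only simple lookalike spoofs.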

In one recent scam, Netflix users received fake alerts about payment failures. The link led to a fake login page where credentials and payment data were stolen. Similar tactics have been used against QuickBooks users, small businesses, and Microsoft 365 customers.

Small businesses are frequent targets due to limited security resources. Emails mimicking vendors or tech companies often trick employees into handing over credentials, giving attackers access to sensitive systems.

Phishing works because it preys on human psychology: trust, fear, and urgency. And with AI, attackers can now generate more convincing content, making detection harder than ever.

Protection starts with vigilance. Always check sender addresses, avoid clicking suspicious links, and enable multi-factor authentication (MFA). Employee training, secure protocols for sensitive requests, and phishing simulations are critical for businesses.

Phishing attacks will continue to grow in sophistication, but with awareness and layered security practices, users and businesses can stay ahead of the threat.


Scammers use fake celebrities to steal millions in crypto fraud

Fraudsters increasingly pretend to be celebrities to deceive people into fake cryptocurrency schemes. Richard Lyons lost $10,000 after falling for a scam in which fraudsters used an AI-generated voice and images of Elon Musk to make the investment offer appear authentic.

The FBI has highlighted a sharp rise in crypto scams during 2024, with billions lost as fraudsters pose as financial experts or love interests. Many scams involve fake websites that mimic legitimate investment platforms, showing false gains before stealing funds.

Lyons was shown a fake web page indicating his investment had grown to $50,000 before the scam was uncovered.

Experts warn that thorough research and caution are essential when approached online with investment offers. The FBI urges potential investors to consult trusted advisers and avoid sending money to strangers.

Blockchain firms like Lionsgate Network now offer rapid tracing of stolen crypto, although recovery is usually limited to high-value cases.

Lyons described the scam’s impact as devastating, leaving him struggling with everyday expenses. Authorities advise anyone targeted by similar frauds to report promptly for a better chance of recovery and protection.


Groq opens AI data centre in Helsinki

Groq has opened its first European AI data centre in Helsinki, Finland, in collaboration with Equinix. The facility offers European users fast, secure, and low-latency AI inference services, aiming to improve performance and data governance.

The launch follows Groq’s existing partnership with Equinix, which already includes a site in Dallas. The new centre complements Groq’s global network, including facilities in the US, Canada and Saudi Arabia.

CEO Jonathan Ross stated the centre provides immediate infrastructure for developers building fast at scale. Equinix highlighted Finland’s reliable power and sustainable energy as key factors in the decision to host capacity there.

The data centre supports GroqCloud, delivering over 20 million tokens per second across its network. European businesses are expected to benefit from improved AI performance and operational efficiency.


Court convicts duo in UK crypto fraud worth $2 million

A UK court has sentenced Raymondip Bedi and Patrick Mavanga to a combined 12 years in prison for a cryptocurrency fraud scheme that defrauded 65 victims of around $2 million. Between 2017 and 2019, the pair cold-called investors while posing as advisers, directing victims to fake crypto websites.

Bedi was sentenced to five years and four months, while Mavanga received six years and six months behind bars. Both operated under CCX Capital and Astaria Group LLP, deliberately bypassing financial regulations to extract illicit gains.

The scam targeted retail investors with little crypto experience, luring them with promises of high profits and misleading sales materials.

Victim impact statements revealed severe financial and emotional consequences. Some lost their life savings or fell into debt, and others developed mental health issues. Mavanga was also found guilty of hiding incriminating evidence during the investigation.

The Financial Conduct Authority (FCA) led the prosecution amid a heavy backlog of crypto fraud cases, highlighting the challenges regulators face in enforcing the law.

The court encouraged victims to seek support and highlighted the need for vigilance against similar scams. While the prosecution offers closure for some, the lengthy process underscores the ongoing difficulties in policing the fast-evolving crypto market.


Lithuania questions legality of Robinhood’s new tokens

Robinhood’s new blockchain tokens, which are tied to firms like SpaceX and OpenAI, are under EU scrutiny, with Lithuania’s central bank reviewing whether the product meets financial rules.

The tokens, introduced on 30 July, allow retail investors to gain exposure to high-profile private firms through digital assets. Although Robinhood offered a promotional giveaway to attract EU users, questions quickly arose over the product’s legal classification and how it was marketed to the public.

OpenAI has publicly stated it has no affiliation with Robinhood and has not authorised share transfers. Robinhood responded by confirming that the tokens do not represent actual equity but provide access to the value of private firms via non-equity instruments.

Regulators are now assessing whether the product meets compliance standards and whether investor information has been presented clearly and accurately. The central bank has requested further details before issuing a formal judgement.
