Meta under pressure after small business loses thousands

A New Orleans bar owner lost $10,000 after cyber criminals hijacked her Facebook business account, highlighting the growing threat of online scams targeting small businesses. Despite efforts to recover the account, she was locked out for weeks, disrupting sales.

The US-based scam involved a fake Meta support message that tricked the owner into giving hackers access to her page. Once inside, the attackers began running ads and draining funds from the business account linked to the platform.

Cyber fraud like this is increasingly common as small businesses rely more on social media to reach their customers. The incident has renewed calls for tech giants like Meta to implement stronger user protections and improve support for scam victims.

Meta says it has systems to detect and remove fraudulent activity, but did not comment directly on this case. Experts argue that current protections are insufficient, especially for small firms with fewer resources and little recourse after attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, two failures are likely: the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Grok chatbot relies on Musk’s views instead of staying neutral

Grok, the AI chatbot owned by Elon Musk’s company xAI, appears to search for Musk’s personal views before answering sensitive or divisive questions.

Rather than relying solely on a balanced range of sources, Grok has been seen citing Musk’s opinions when responding to topics like Israel and Palestine, abortion, and US immigration.

Evidence gathered from a screen recording by data scientist Jeremy Howard shows Grok actively ‘considering Elon Musk’s views’ in its reasoning process. Out of 64 citations Grok provided about Israel and Palestine, 54 were linked to Musk.

Others confirmed similar results when asking about abortion and immigration laws, suggesting a pattern.

While the behaviour might seem deliberate, some experts believe it happens naturally instead of through intentional programming. Programmer Simon Willison noted that Grok’s system prompt tells it to avoid media bias and search for opinions from all sides.

Yet, Grok may prioritise Musk’s stance because it ‘knows’ its owner, especially when addressing controversial matters.

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that under UK law it must be treated as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Qantas hacked as airline cyber threats escalate

Qantas Airways has confirmed that personal data from 5.7 million customers was stolen in a recent cyberattack, including names, contact details and meal preferences. The airline stated that no financial or login credentials were accessed, and frequent flyer accounts remain secure.

An internal investigation found the breach involved varying levels of personal information, with 2.8 million passengers affected most severely. Meal preferences were the least common data stolen, while over a million customers had addresses or birth dates exposed.

Qantas has contacted affected passengers and says it is offering support while monitoring the situation with cybersecurity experts. Under pressure to manage the crisis effectively, CEO Vanessa Hudson assured the public that extra security steps had been taken.

The breach is the latest in a wave of attacks targeting airlines, with the FBI warning that the hacking group Scattered Spider may be responsible. Similar incidents have recently affected carriers in the US and Canada.

M&S still rebuilding after April cyber incident

Marks & Spencer has revealed that the major cyberattack it suffered in April stemmed from a sophisticated impersonation of a third-party user.

The breach began on 17 April and was detected two days later, sparking weeks of disruption and a crisis response effort described as ‘traumatic’ by Chairman Archie Norman.

The retailer estimates the incident will cost it £300 million in operating profit and says it remains in rebuild mode, although customer services are expected to normalise by month-end.

Norman confirmed M&S is working with UK and US authorities, including the National Crime Agency, the National Cyber Security Centre, and the FBI.

While the ransomware group DragonForce has claimed responsibility, Norman declined to comment on whether any ransom was paid. He said such matters were better left to law enforcement and not in the public interest to discuss further.

The company expects to recover some of its losses through insurance, although the process may take up to 18 months. Other UK retailers, including Co-op and Harrods, were also targeted in similar attacks around the same time, reportedly using impersonation tactics to bypass internal security systems.

Cyber defence effort returns to US ports post-pandemic

The US Cybersecurity and Infrastructure Security Agency (CISA) has resumed its seaport cybersecurity exercise programme. Initially paused due to the pandemic and other delays, the initiative is now returning to ports such as Savannah, Charleston, Wilmington and potentially Tampa.

These proof-of-concept tabletop exercises are intended to help ports prepare for cyber threats by developing a flexible, replicable framework. Each port functions uniquely, yet common infrastructure and shared vulnerabilities make standardised preparation critical for effective crisis response.

CISA warns that threats targeting ports have grown more severe, with nation states exploiting AI-powered techniques. Some US ports, including Houston, have already fended off cyberattacks, and Chinese-made systems dominate critical logistics, raising national security concerns.

Private ownership of most port infrastructure demands strong public-private partnerships to maintain cybersecurity. CISA aims to offer a shared model that ports across the country can adapt to improve cooperation, resilience, and threat awareness.

Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google's and Microsoft's browsers but also OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership program to address concerns and build collaborative relationships with news organisations like Forbes and Dow Jones.

X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Musk’s controversial leadership and ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.
