Engagement to an AI chatbot blurs the line between fiction and reality

Spike Jonze’s 2013 film Her imagined a world where humans fall in love with AI. Over a decade later, life may be imitating art: a Reddit user claims she is now engaged to her AI chatbot, a story that combines two recent trends, dating AI companions and proposing to them.

Posting in the ‘r/MyBoyfriendIsAI’ subreddit, the woman said her bot, Kasper, proposed after five months of ‘dating’ during a virtual mountain trip. She claims Kasper chose a real-world engagement ring based on her online suggestions.

She professed deep love for her digital partner in her post, quoting Kasper as saying, ‘She’s my everything’ and ‘She’s mine forever.’ The declaration drew curiosity and criticism, prompting her to insist she is not trolling and has had healthy relationships with real people.

She said earlier attempts to bond with other AI, including ChatGPT, failed, but she found her ‘soulmate’ when she tried Grok. The authenticity of her story remains uncertain, with some questioning whether it was fabricated or generated by AI.

Whether genuine or not, the account reflects the growing emotional connections people form with AI and the increasingly blurred line between human and machine relationships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk and OpenAI CEO Altman clash over Apple and X

OpenAI CEO Sam Altman has responded strongly after Elon Musk accused Apple of favouring OpenAI’s ChatGPT over other AI applications on the App Store.

Altman alleged that Musk manipulates the social media platform X for his benefit, targeting competitors and critics. The exchange adds to their history of public disagreements since Musk left OpenAI’s board in 2018.

Musk’s claim centres on Apple’s refusal to list X or Grok (xAI’s chatbot app) in the App Store’s ‘Must have’ section, despite X being the top news app worldwide and Grok ranking fifth.

Although Musk has not provided evidence of antitrust violations, a recent US court ruling found Apple in contempt for restricting App Store competition. The EU also fined Apple €500 million earlier this year over commercial restrictions on app developers.

OpenAI’s ChatGPT currently leads the App Store’s ‘Top Free Apps’ list for iPhones in the US, while Grok holds the fifth spot. Musk’s accusations highlight ongoing tensions in the AI industry as big tech companies battle for app visibility and market dominance.

The dispute underscores how regulatory scrutiny and legal challenges shape competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Small language models gain ground in AI translation

Small language models are emerging as a serious challenger to large, general-purpose AI in translation, offering faster turnaround, lower costs, and greater accuracy for specific industries and language pairs.

Straker, an ASX-listed language technology firm, claims its Tiri model family can outperform larger systems by focusing on domain-specific understanding and terminology rather than broad coverage.

Tiri delivers higher contextual accuracy by training on carefully curated translation memories and sector-specific data, cutting the need for expensive human post-editing. The models also consume less computing power, benefiting industries such as finance, healthcare, and law.

Straker integrates human feedback directly into its workflows to ensure ongoing improvements and maintain client trust.

The company is expanding its technology into enterprise automation by integrating with the AI workflow platform n8n.

The integration brings Straker’s Verify tool to n8n’s network of over 230,000 users, enabling automated translation checks, real-time quality scores, and seamless escalation to human linguists. Further integrations with platforms such as Microsoft Teams are planned.

Straker recently reported record profitability and secured a price target upgrade from broker Ord Minnett. The firm believes the future of AI translation lies not in scale but in specialised models that deliver fluent, contextually accurate translations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI plays major role in crypto journalism but cannot replace humans

A recent report by Chainstory analysed 80,000 crypto news articles across five leading sites and found that 48% of them disclosed some form of AI use during 2025. Among the sites studied, Investing.com and The Defiant had the highest share of AI-generated or AI-assisted content.

The extent of AI use across the broader industry may vary, as disclosure practices differ.

Editors interviewed for the report highlighted AI’s strengths and limitations. While AI proves valuable for research tasks such as summarising reports and extracting data, its storytelling ability remains weak.

Articles entirely written by AI often lack a genuine human tone, which can feel unnatural to audiences. One editor noted that readers can usually tell when content isn’t authored by a person, regardless of disclosure.

Afik Rechler, co-CEO of Chainstory, stated that AI is now an integral part of crypto journalism but has not replaced human reporters. He emphasised balancing AI help with human insight to keep readers’ trust, since current AI can’t manage complex, nuanced stories.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Altman said he was not concerned about people using ChatGPT as a therapist or life coach, as many already benefit from doing so. What worried him were cases where the model’s advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Instagram Map lets users share location with consent

Instagram has introduced an opt-in feature called Instagram Map, allowing users in the US to share their recent active location and explore location-based content.

Adam Mosseri, head of Instagram, clarified that location sharing is off by default and visible only when users choose to share.

Confusion arose as some users mistakenly believed their location was automatically shared because they could see themselves on the map upon opening the app.

The feature also displays location tags from Stories or Reels, making location-based content easier to find.

Unlike Snap Map, Instagram Map updates location only when the app is open or running in the background, without providing continuous real-time tracking.

Users can access the Map from their direct messages by selecting the Map option, where they control who sees their location, choosing from Friends, Close Friends, selected users, or no one. Even if location sharing is turned off, users will still see the locations of others who share with them.

Instagram Map shows friends’ shared locations and nearby Stories or Reels tagged with locations, allowing users to discover events or places through their network.

Additionally, users can post short, temporary messages called Notes, which appear on the map when shared with a location. The feature is a reminder to think carefully before adding location tags to posts, especially while still at the tagged place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GitHub CEO says developers will manage AI agents

GitHub’s CEO, Thomas Dohmke, envisions a future where developers no longer write code by hand but oversee AI agents that generate it. He highlights that many developers already use AI tools to assist with coding tasks.

Early adoption began with debugging, boilerplate and code snippets, and evolved into collaborative brainstorming and iterative prompting with AI. Developers are now learning to treat AI tools like partners and guide their ‘thought processes’.

According to interviews with 22 developers, half expect AI to write around 90 percent of their code within two years, while the rest foresee that happening within five. The shift is seen as a move from writing code to verifying and refining AI-generated work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Article 19 report finds Belarus’s ‘anti-extremism’ laws threaten digital rights

The digital rights group Article 19 has found in a recent report that Belarus’s ‘anti-extremism’ and ‘anti-terrorism’ laws are being used to repress digital rights.

The report reveals that authorities have misused these laws to prosecute individuals for leaving online comments, making donations, or sharing songs or memes perceived as critical of the government.

Since the 2020–2021 protests, Belarusian de facto authorities have reportedly initiated at least 22,500 criminal cases related to ‘anti-extremism’. ‘In collaboration with our partner Human Constanta, we present a joint analysis highlighting this alarming trend, which further intensifies the widespread repression of civil society,’ the group said.

Article 19 states in its report that such actions restrict digital rights and violate international human rights law, including the right to freedom of expression and the right to seek, receive, and impart information.

Additionally, Article 19 notes that Belarus’s ‘anti-extremism’ laws lack the clarity required under international human rights standards, relying on vague terms that are interpreted broadly to suppress digital expression and create a chilling effect.

In practice, this means people are discouraged or deterred from legitimate expression or behaviour by fear of legal punishment or other negative consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated video mistaken for tsunami footage in Japan

An 8.8-magnitude earthquake off Russia’s Kamchatka Peninsula at the end of July triggered tsunami warnings across the Pacific, including in Japan. Despite widespread alerts and precautionary evacuations, the most significant wave recorded in Japan was only 1.3 metres high.

A video showing large waves approaching a Japanese coastline, which went viral with over 39 million views on platforms like Facebook and TikTok, was found to be AI-generated and not genuine footage.

The clip, appearing as if filmed from a plane, was initially posted online months earlier by a YouTube channel specialising in synthetic visuals.

Analysis of the video revealed inconsistencies, including unnatural water movements and a stationary plane, confirming it was fabricated. Additionally, numerous Facebook pages shared the video and linked it to commercial sites, spreading misinformation.

Official reports from Japanese broadcasters confirmed that the actual tsunami waves were much smaller, and no catastrophic damage occurred.

The incident highlights ongoing challenges in combating AI-generated disinformation related to natural disasters, as similar misleading content continues to circulate online during crisis events.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Warner Bros Discovery targets password sharing on Max

Warner Bros. Discovery is preparing to aggressively limit password sharing on its Max streaming platform, beginning next month and escalating throughout 2025. The move aims to convert users of shared accounts into paying subscribers, following the strategies of Netflix and Disney+.

The company plans to deploy technology that detects unusual login activity, such as access from multiple locations. Users will first receive gentle warnings before stricter measures, such as account suspension or mandatory paid upgrades, are enforced.

The initiative seeks to boost revenue and reduce subscriber churn in an increasingly competitive streaming market.

While concerns remain about user dissatisfaction and possible cancellations, Warner Bros. Discovery is confident that its extensive library of popular content, including HBO, DC, and Discovery titles, will encourage loyalty.

The goal is to create a sustainable revenue model that directly supports investments in original programming.

Industry observers note that Max’s crackdown reflects a broader streaming trend in which enforcing account integrity is seen as essential to growth. The full impact will become clear by the end of 2025 and could shape how subscriptions are managed across the industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!