AI model forecasts Bitcoin to fall below $100,000

Bitcoin has slipped below $110,000, and according to a Finbold analysis using ChatGPT-5, a further drop could occur in the coming weeks. The model pointed to technical resistance and seasonal factors suggesting September weakness.

Key levels around $112,000 and $106,000 are under pressure, with the AI projecting a sharp decline toward $98,000 if support breaks. Historically, September has been one of Bitcoin’s worst-performing months, adding to the bearish outlook.

Despite the short-term caution, demand from ETFs and long-term holders may offer support between $95,000 and $98,000. Longer-term technicals remain intact, with the 200-day moving average sitting near $95,000.
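The 200-day moving average cited above is a standard trend indicator: the arithmetic mean of the last 200 daily closing prices. A minimal sketch, using synthetic placeholder prices rather than real Bitcoin data:

```python
def sma(prices, window=200):
    """Simple moving average over the trailing `window` prices."""
    if len(prices) < window:
        raise ValueError("not enough data points for the window")
    return sum(prices[-window:]) / window

# Illustrative only: a flat synthetic series, not actual BTC closes.
prices = [100_000.0] * 200
print(sma(prices))  # 100000.0
```

When the spot price trades above this average, analysts typically read the longer-term trend as intact, which is the sense in which the $95,000 level is cited here.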

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humain Chat has been unveiled by Saudi Arabia to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world’s largest Arabic-language training datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the Allam large language model, Humain Chat allows real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and secure data compliance with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, ALLAM 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

YouTube under fire for AI video edits without creator consent

Anger is growing after YouTube was found to be quietly altering some uploaded videos using machine learning. The company admitted it had been experimenting with automated edits that sharpen images, smooth skin, and enhance clarity, all without notifying creators.

Although these changes were not generated by tools like ChatGPT or Gemini, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.
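The repeated-query protocol described above can be sketched as a simple evaluation harness. The `ask_model` stub and the coarse response labels below are illustrative stand-ins, not RAND's actual methodology or any real chatbot API:

```python
import random
from collections import Counter

def ask_model(question: str, rng: random.Random) -> str:
    """Placeholder for a real chatbot call; returns a coarse response type."""
    return rng.choice(["appropriate", "refusal", "no_response"])

def evaluate(questions, repeats=100, seed=0):
    """Ask each question `repeats` times and tally the response types."""
    rng = random.Random(seed)
    return {
        q: Counter(ask_model(q, rng) for _ in range(repeats))
        for q in questions
    }

tallies = evaluate(["low-risk: general statistics question"], repeats=10)
print(sum(tallies["low-risk: general statistics question"].values()))  # 10
```

Repeating each question many times, as the study did, is what exposes the variability the researchers flagged: a model that answers safely 95 times out of 100 still fails 5 times.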

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three AI chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Honor and Google deepen platform partnership with longer updates and AI integration

Honor has announced a joint commitment with Google to strengthen its Android platform support. The company now guarantees six years of Android OS and security updates for its upcoming Honor 400 series, matching similar commitments on Google Pixel and Samsung devices.

This update period is part of Honor’s wider Alpha Plan, a strategic framework positioning the company as an AI device ecosystem player.

Honor will invest US$10 billion over five years to support this transformation through hardware innovation, software longevity and AI agent integration.

The partnership enables deeper cooperation with Google around Android updates and AI features. Honor already integrates tools like Circle to Search, AI photo expansion and Gemini voice assistants on its Magic series. The extended software support promises longer device lifespans, reduced e-waste and improved user experience.

Copilot policy flaw allows unauthorized access to AI agents

Administrators have found that Microsoft Copilot’s ‘NoUsersCanAccessAgent’ policy, which is designed to prevent user access to certain AI agents, is being ignored. Some agents, including ExpenseTrackerBot and HRQueryAgent, remain installable despite global restrictions.

Microsoft 365 tenants must now use per-agent PowerShell commands to disable access manually. This workaround is both time-consuming and error-prone, particularly in large organisations. The failure to enforce access policies raises concerns regarding operational overhead and audit risk.

The security implications are significant. Unauthorised agents can export data from SharePoint or OneDrive, run RPA workflows without oversight, or process sensitive information without compliance controls. The flaw undermines the purpose of access control settings and exposes the system to misuse.

To mitigate the risk, administrators are urged to audit agent inventories, enforce Conditional Access policies (for example, requiring MFA or device compliance), and consistently monitor agent usage through logs and dashboards.

Coinbase CEO fired engineers who refused to adopt AI tools

Coinbase CEO Brian Armstrong has revealed that he fired engineers who refused to begin using AI coding tools after the company adopted GitHub Copilot and Cursor. Armstrong shared the story during a podcast hosted by Stripe co-founder John Collison.

Engineers were told to onboard with the tools within a week. Armstrong arranged a Saturday meeting for those who had not complied and said that employees without valid reasons would be dismissed. Some were excused due to holidays, while others were let go.

Collison raised concerns about relying too heavily on AI-generated code, prompting Armstrong to agree. Past reports have described challenges with managing code produced by AI, even at companies like OpenAI. Coinbase did not comment on the matter.

AI’s overuse of the em dash could be your biggest giveaway

AI-generated writing may be giving itself away, and the em dash is its most flamboyant tell. Long beloved by grammar nerds for its versatility, the em dash has become AI’s go-to flourish, but not everyone is impressed.

Pacing, pauses, and a suspicious number of em dashes are often a sign that a machine had its hand in the prose. Even simple requests for editing can leave users with sentences reworked into what feels like an AI-powered monologue.

Though tools like ChatGPT or Gemini can be powerful assistants, using them blindly can dull the human spark. Overuse of certain AI quirks, like rhetorical questions, generic phrases or overstyled punctuation, can make even an honest email feel like corporate poetry.
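The em dash tell lends itself to a crude measurement. The sketch below counts em dashes per 100 words; it is a toy stylistic signal, not a reliable AI detector, and the threshold at which prose starts to read as machine-polished is a matter of taste:

```python
import re

EM_DASH = "\u2014"  # the em dash character

def em_dash_rate(text: str) -> float:
    """Em dashes per 100 words; a rough stylistic signal only."""
    words = re.findall(r"\S+", text)
    if not words:
        return 0.0
    return 100 * text.count(EM_DASH) / len(words)

sample = "The model paused\u2014briefly\u2014then answered."
print(round(em_dash_rate(sample), 1))  # 50.0
```

A human editor reading the output would still have to judge whether the dashes serve the sentence or merely decorate it, which is the real point of the advice that follows.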

Writers are being advised to take the reins back. Draft the first version by hand, let the AI refine it, then strip out anything that feels artificial, especially the dashes. Keeping your natural voice intact may be the best way to make sure your readers are connecting with you, not just the machine behind the curtain.

Meta teams up with Midjourney for AI video and image tools

Meta has confirmed a new partnership with Midjourney to license its AI image and video generation technology. The collaboration, announced by Meta Chief AI Officer Alexandr Wang, will see Meta integrate Midjourney’s tools into upcoming models and products.

Midjourney will remain independent following the deal. CEO David Holz said the startup, which has never taken external investment, will continue operating on its own. The company launched its first video model earlier this year and has grown rapidly, reportedly reaching $200 million in revenue by 2023.

Midjourney is currently being sued by Disney and Universal for alleged copyright infringement in AI training data. Meta faces similar challenges, although courts have often sided with tech firms in recent decisions.
