OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app that serves AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gate Group secures MiCA license in Malta

Gate Group’s Malta-based subsidiary, Gate Technology Ltd, has secured a MiCA license from the Malta Financial Services Authority. The license authorises crypto asset trading and custody services.

Founder Dr. Han underscored compliance as central to operations, praising Malta’s progressive regulatory framework. The move aligns with Gate Group’s focus on transparency and user safety across Europe.

Securing the MiCA license enables Gate Europe to initiate EU passporting for broader regional expansion. CEO Giovanni Cunti outlined plans to strengthen compliance while offering secure, professional services.

Gate Group holds regulatory approvals in jurisdictions including Italy, Hong Kong, and Dubai. The company says Malta’s transparent regulations and innovative environment make it an ideal European base, and it aims to foster sustainable growth in the region’s crypto ecosystem.

Establishing a foothold in Malta positions Gate Group to leverage the country’s role as a crypto hub. Continued investment will support the local digital economy, ensuring long-term development and regulatory adherence in Europe’s crypto market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram head explains why ads feel like eavesdropping

Instagram head Adam Mosseri has denied long-standing rumours that the platform secretly listens to private conversations to deliver targeted ads. In a video he described as ‘myth busting’, Mosseri said Instagram does not use the phone’s microphone to eavesdrop on users.

He argued that such surveillance would not only be a severe breach of privacy but would also quickly drain phone batteries and trigger visible microphone indicators.

Instead, Mosseri outlined four reasons why adverts may appear suspiciously relevant: online searches and browsing history, the influence of friends’ online behaviour, rapid scrolling that leaves subconscious impressions, and plain coincidence.

According to Mosseri, Instagram users may mistake targeted advertising for surveillance because algorithms incorporate browsing data from advertisers, friends’ interests, and shared patterns across users.

He stressed that the perception of being overheard is often the result of ad targeting mechanics rather than eavesdropping.

Despite his explanation, Mosseri admitted the rumour is unlikely to disappear. Many viewers of his video remained sceptical, with some comments suggesting his denial only reinforced their suspicions about how social media platforms operate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft boosts productivity with AI-powered subscriptions

Microsoft has enhanced its Microsoft 365 subscriptions by deeply integrating Copilot, its AI assistant, into apps like Word, Excel, and Outlook. A new Microsoft 365 Premium plan, priced at £19.99 monthly, combines advanced AI features with productivity tools.

The plan targets professionals, entrepreneurs, and families seeking to streamline tasks efficiently.

Microsoft 365 Personal and Family subscribers gain higher usage limits for Copilot features like image generation and deep research at no extra cost. Copilot Chat, now available across these apps, assists with drafting, analysis, and automation.

These updates aim to embed AI seamlessly into daily workflows.

Microsoft’s Frontier programme also gives subscribers access to experimental AI tools, such as Office Agent, while a global student offer provides Microsoft 365 Personal free for a year.

Fresh icons for Word, Excel, and other apps signal Microsoft’s AI-driven evolution, while enterprise data protection is intended to keep workplace AI use compliant and secure. Together, the updates position Microsoft 365 as a leading AI-powered productivity suite.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft evolves Sentinel into agentic defence platform

Microsoft is transforming Sentinel from a traditional SIEM (security information and event management) tool into a unified defence platform for the agentic AI era. It now incorporates features such as a data lake, semantic graphs and a Model Context Protocol (MCP) server so that intelligent agents can reason over security data.

Sentinel’s enhancements let defenders combine structured and semi-structured data into vectorised, graph-based relationships. Building on this, AI agents grounded in Security Copilot and custom tools can automate triage, correlate alerts, reason about attack paths, and initiate response actions while keeping humans in the loop.
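
To make the idea of graph-based correlation more concrete, here is a minimal, hypothetical Python sketch. The alert fields, entity labels and grouping logic are illustrative assumptions rather than Sentinel’s actual schema, data lake or MCP interfaces; the point is only to show how alerts that share entities can be clustered into an incident an agent could then triage.

```python
# Hypothetical sketch: cluster security alerts that share entities
# (users, hosts, IPs) into incidents, mimicking graph-based correlation.
# Alert data and logic are illustrative, not Sentinel's real schema or APIs.
from collections import defaultdict

alerts = [
    {"id": "A1", "title": "Phishing email reported", "entities": {"user:alice"}},
    {"id": "A2", "title": "Suspicious sign-in", "entities": {"user:alice", "host:laptop-7"}},
    {"id": "A3", "title": "Malware detected", "entities": {"host:laptop-7", "ip:203.0.113.9"}},
    {"id": "A4", "title": "Unrelated port scan", "entities": {"ip:198.51.100.4"}},
]

# Link alerts that share at least one entity (an undirected graph).
edges = defaultdict(set)
for i, a in enumerate(alerts):
    for b in alerts[i + 1:]:
        if a["entities"] & b["entities"]:
            edges[a["id"]].add(b["id"])
            edges[b["id"]].add(a["id"])

def incident(start):
    """Collect all alerts reachable from `start` via shared entities."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges[node] - seen)
    return seen

print(incident("A1"))  # {'A1', 'A2', 'A3'}: one connected chain worth triaging together
print(incident("A4"))  # {'A4'}: isolated signal, likely lower priority
```

In Sentinel itself, this kind of clustering would run over the platform’s vectorised graph and be surfaced to agents and analysts rather than printed to a console; the sketch only illustrates the underlying correlation idea.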

The platform supports extensibility through open agent APIs, enabling partners and organisations to deploy custom agents through the MCP server.

Microsoft is also adding protections for the AI agents themselves, such as prompt-injection resilience, task adherence controls, PII guardrails, and identity controls for agent estates. The aim is to shift cybersecurity from reactive to predictive operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has told studios and talent agencies that their copyrighted characters and universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How AI is transforming healthcare and patient management

AI is moving from theory to practice in healthcare. Hospitals and clinics are adopting AI to improve diagnostics, automate routine tasks, support overworked staff, and cut costs. A recent GoodFirms survey shows strong confidence that AI will become essential to patient care and health management.

Survey findings reveal that nearly all respondents believe AI will transform healthcare. Robotic surgery, predictive analytics, and diagnostic imaging are gaining momentum, while digital consultations and wearable monitors are expanding patient access.

AI-driven tools are also helping reduce human errors, improve decision-making, and support clinicians with real-time insights.

Challenges remain, particularly around data privacy, transparency, and the risk of over-reliance on technology. Concerns about misdiagnosis, lack of human empathy, and job displacement highlight the need for responsible implementation.

Even so, the direction is clear: AI is set to be a defining force in healthcare’s future, enabling more efficient, accurate, and equitable systems worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Four new Echo devices debut with Amazon’s next-gen Alexa+

Amazon has unveiled four new Echo devices powered by Alexa+, its next-generation AI assistant. The lineup includes Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11, all designed for personalised, ambient AI-driven experiences. Buyers will automatically gain access to Alexa+.

At the core are the new AZ3 and AZ3 Pro chips, which feature AI accelerators powering advanced models for speech, vision, and ambient interaction. The Echo Dot Max, priced at $99.99, features a two-speaker system with triple the bass of its predecessor, while the Echo Studio, priced at $219.99, adds spatial audio and Dolby Atmos.

The Echo Show 8 and Echo Show 11 introduce HD displays, enhanced audio, and intelligent sensing capabilities. Both feature 13-megapixel cameras that adapt to lighting and personalise interactions. The Echo Show 8 will cost $179.99, while the Echo Show 11 is priced at $219.99.

Beyond hardware, Alexa+ brings deeper conversational skills and more intelligent daily support, spanning home organisation, entertainment, health, wellness, and shopping. Amazon also introduced the Alexa+ Store, a platform for discovering third-party services and integrations.

The Echo Dot Max and Echo Studio will launch on October 29, while the Echo Show 8 and Echo Show 11 arrive on November 12. Amazon positions the new portfolio as a leap toward making ambient AI experiences central to everyday living.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK users lose access to Imgur amid watchdog probe

Imgur has cut off access for UK users after regulators warned its parent company, MediaLab AI, of a potential fine over child data protection.

Visitors to the platform since 30 September have been met with a notice saying that content is unavailable in their region, with embedded Imgur images on other sites also no longer visible.

The UK’s Information Commissioner’s Office (ICO) began investigating the platform in March, questioning whether it complied with data laws and the Children’s Code.

The regulator said it had issued MediaLab with a notice of intent to fine the company following provisional findings. Officials also emphasised that leaving the UK would not shield Imgur from responsibility for any past breaches.

Some users speculated that the withdrawal was tied to new duties under the Online Safety Act, which requires platforms to check whether visitors are over 18 before allowing access to harmful content.

However, both the ICO and Ofcom said the withdrawal was a commercial decision by Imgur. Other MediaLab services, such as Kik Messenger, continue to operate in the UK with age verification measures in place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!