Grammarly becomes Superhuman with unified AI tools for work

Superhuman, formerly known as Grammarly, is bundling its writing tools, workspace platform, and email client with a new AI assistant suite. The company says the rebrand reflects a push to unify generative AI features that streamline workplace tasks and online communication for subscribers.

Grammarly acquired Coda and Superhuman Mail earlier this year and added Superhuman Go. The bundle arrives as a single plan. Go’s agents brainstorm, gather information, send emails, and schedule meetings to reduce app switching.

Superhuman Mail organises inboxes and drafts replies in your voice. Coda pulls data from other apps into documents, tables, and dashboards. An upcoming update lets Coda act on that data to automate plans and tasks.

CEO Shishir Mehrotra says the aim is ambient, integrated AI. Built on Grammarly’s infrastructure, the tools work in place without prompting or pasting. The bundle targets teams seeking consistent AI across writing, email, and knowledge work.

Analysts will watch how the new brand overlaps with the existing Superhuman email app, as well as enterprise pricing. Success depends on trust, data controls, and measurable time savings versus point tools. Rollout specifics, including regions, will follow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered in such conversations. Critics argue that even small percentages scale to large numbers at ChatGPT’s size.
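For rough context, a back-of-envelope estimate (assuming OpenAI’s widely reported figure of roughly 800 million weekly users, a number not given in the piece):

```latex
% Scale estimate, assuming ~800 million weekly users (an assumption, not from the article):
0.07\% \times 8 \times 10^{8} \approx 5.6 \times 10^{5} \ \text{users per week}
```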

A further 0.15 percent of weekly users send messages containing explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity estates push back on Sora as app surges to No. 1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spreads confusion. Viewers face a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 60 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

Supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

The EU, Canada, and Russia signed in Hanoi, underscoring geopolitical buy-in. As host, Vietnam drew scrutiny over censorship and arrests. Officials there cast the treaty as a step toward resilience and stature.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, showing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI brings conversational edits to Instagram Stories

Instagram is rolling out generative AI editing for Stories, expanding the tools introduced in June with smarter prompts and broader effects. Type what you want removed or changed, and Meta AI does it. Think conversational edits, similar to Google Photos.

New controls include an Add Yours sticker for sharing your custom look with friends. A Presets browser shows available styles at a glance. Seasonal effects launch for Halloween, Diwali, and more.

Restyle Video brings preset effects to short clips, with options to add flair or remove objects. Edits aim to be fast, fun, and reversible. Creativity first, heavy lifting handled by AI.

Text gets a glow-up: Instagram is testing AI restyle for captions. Pick built-ins like ‘chrome’ or ‘balloon,’ or prompt Meta AI for custom styles.

Meta AI hasn’t wowed Instagram users, but this could change sentiment. The pitch: fewer taps, better results, and shareable looks. If it sticks, creating Stories becomes meaningfully easier.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Amelia brings heads-up guidance to Amazon couriers

Amazon has unveiled ‘Amelia’, AI-powered smart glasses for delivery drivers with a built-in display and camera, paired with a vest that has a photo button. The system is piloting with hundreds of drivers across more than a dozen delivery partners.

Designed for last-mile efficiency, Amelia shuts down automatically when a vehicle is moving to prevent distraction, includes a hardware kill switch for the camera and mic, and aims to save about 30 minutes per 8–10-hour shift by streamlining repetitive tasks.

Initial availability is planned for the US market and the rest of North America before global expansion, with Amazon emphasising that Amelia is custom-built for drivers, though consumer versions aren’t ruled out. Pilots involve real routes and live deliveries to customers.

Amazon also showcased a warehouse robotic arm to sort parcels faster and more safely, as well as an AI orchestration system that ingests real-time and historical data to predict bottlenecks, propose fixes, and keep fulfilment operations running smoothly.

The move joins a broader push into wearables from Big Tech. Unlike Meta’s consumer-oriented Ray-Ban smart glasses, Amelia targets enterprise use, promising faster package location, fewer taps, and tighter integration with Amazon’s delivery workflow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba pushes unified AI with Quark Chat and wearables

Quark, Alibaba’s consumer AI app, has launched an AI Chat Assistant powered by Qwen3 models, merging real-time search with conversational reasoning. Users can ask by text or voice, get answers, and trigger actions from a single interface.

On iOS and Android, you can tap ‘assistant’ in the AI Super Box or swipe right to open chat, then use prompts to summarise pages, draft replies, or pull sources, with results easily shared to friends, Stories, or outside the app.

Beyond Q&A, the assistant adds deep search, photo-based problem-solving, and AI writing, while supporting multimodal tasks like photo editing, AI camera, and phone calls. Forthcoming Model Context Protocol (MCP) integrations will expand agent execution across Alibaba services.

Quark AI Glasses opened pre-sale in China on October 24 via Tmall at a list price of 4,699 RMB before coupons or membership discounts. Deliveries start in phases from December, and 1 RMB reservations are available on JD.com and Douyin.

Powered by Qwen for hands-free assistance, translation, and meeting transcription, the glasses emphasise lightweight ergonomics, long battery life, and quality imaging, with bundles, accessories, and prescription lens options to broaden fit and daily use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Large language models mimic human object perception

Recent research shows that multimodal large language models (LLMs) can develop object representations strikingly similar to human cognition. By analysing how these AI models understand and organise concepts, scientists found patterns in the models that mirror neural activity in the human brain.

The study examined embeddings for 1,854 natural objects, derived from millions of text-image pairings. These embeddings capture relationships between objects and were compared with brain scan data from regions such as the extrastriate body area (EBA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and fusiform face area (FFA).
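As an illustration of how such model–brain comparisons are commonly run (the study’s exact method isn’t detailed here), a representational similarity analysis correlates the pairwise object geometry of the embeddings with that of the fMRI responses. The sketch below is a minimal, hypothetical version: the array names, dimensions, and random stand-in data are assumptions, not details from the paper.

```python
# Minimal RSA-style sketch (hypothetical data): compare the object geometry
# of model embeddings with that of fMRI responses for the same objects.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects = 1854                                  # objects in the study
model_emb = rng.normal(size=(n_objects, 512))     # stand-in for model embeddings
brain_resp = rng.normal(size=(n_objects, 200))    # stand-in for voxel patterns (e.g. EBA)

# Representational dissimilarity matrices: pairwise distances between objects.
model_rdm = pdist(model_emb, metric="correlation")
brain_rdm = pdist(brain_resp, metric="correlation")

# Second-order correlation: does the model organise objects the way the brain region does?
rho, p_value = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p_value:.3g}")
```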

Researchers also discovered that multimodal training, which combines text and image data, enhances a model’s ability to form these human-like concepts. The findings suggest that large language models can achieve a more natural understanding of the world, offering potential improvements in human-AI interaction and future model design.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!