Grammarly invests in email with Superhuman acquisition

Grammarly announced on Tuesday that it has acquired email client Superhuman to expand its AI capabilities within its productivity suite.

Financial details of the deal were not disclosed by either company. Superhuman, founded by Rahul Vohra, Vivek Sodera and Conrad Irwin, has raised over $114 million from investors such as a16z and Tiger Global, and was most recently valued at $825 million.

Grammarly CEO Shishir Mehrotra said the acquisition will enable the company to bring enhanced AI collaboration to millions more professionals, adding that email is not just another app but a crucial platform where users spend significant time.

Superhuman CEO Rahul Vohra and his team are joining Grammarly, where they plan to invest further in improving the Superhuman experience and in building AI agents that collaborate across everyday communication tools.

Recently, Superhuman introduced AI-powered features like scheduling, replies and email categorisation. Grammarly aims to leverage the technology to build smarter AI agents for email, which remains a top use case for its customers.

The move follows Grammarly’s acquisition of productivity software maker Coda last year and the appointment of Coda’s Shishir Mehrotra as Grammarly’s CEO.

In May, Grammarly secured $1 billion from General Catalyst through a non-dilutive investment, to be repaid from a capped percentage of the revenue generated with the funds rather than with equity.

The Superhuman deal further signals Grammarly’s commitment to integrating AI deeply into professional communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why AI won’t replace empathy at work

AI is increasingly being used to improve how organisations measure and support employee performance and well-being.

According to Dr Serena Huang, founder of Data with Serena and author of The Inclusion Equation, AI provides insights that go far beyond traditional annual reviews or turnover statistics.

AI tools can detect early signs of burnout, identify high-potential staff, and even flag overly controlling management styles. More importantly, they offer the potential to personalise development pathways based on employee needs and aspirations.

Huang emphasises, however, that ethical use is vital. Transparency and privacy must remain central to ensure AI empowers rather than surveils workers. Far from making human skills obsolete, Huang argues that AI increases their value.

With machines handling routine analysis, people are free to focus on complex challenges and relationship-building—critical skills in sales, leadership, and team dynamics. AI can assist, but it is emotional intelligence and empathy that truly drive results.

To ensure data-driven efforts align with business goals, Huang urges companies to ask better questions. Understanding what challenges matter to stakeholders helps ensure that any AI deployment addresses real-world needs. Regular check-ins and progress reviews help maintain alignment.

Rather than fear AI as a job threat, Huang encourages individuals to embrace it as a tool for growth. Staying curious and continually learning can ensure workers remain relevant in an evolving market.

She also highlights the strategic advantage of prioritising employee well-being. Companies that invest in mental health, work-life balance, and inclusion enjoy higher productivity and retention.

With younger workers placing a premium on wellness and values, businesses that foster a caring culture will attract top talent and stay competitive. Ultimately, Huang sees AI not as a replacement for people, but as a catalyst for more human-centric, data-informed workplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK urged to prepare for agentic AI in government

Agentic AI, a new generation of AI that goes beyond automation to deliver full task orchestration, could change how government operates. Sharon Moore, CTO Public Sector UK at IBM, argues the UK Government must adopt this technology to drive operational efficiency and better public services.

Departments using AI agents have already recorded significant savings, such as 3,300 hours saved in HR tasks by East and North Hertfordshire NHS Trust and 800 hours monthly by a New Jersey agency. IBM itself has cut development costs by billions, showcasing the potential for large-scale productivity gains.

Agentic systems integrate multiple AI models and tools, solving complex problems with minimal human intervention. Unlike traditional chatbots, these systems handle end-to-end tasks and adapt across use cases, from citizen services to legacy software modernisation.
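
For readers who want a concrete picture, the orchestration loop such systems are built around can be sketched in a few lines: a planner chooses a tool, the tool runs, and the result feeds the next decision. This is a minimal illustration only; the function names, tools, and planning logic below are invented for the sketch and do not reflect IBM’s actual architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    history: list = field(default_factory=list)

def lookup_policy(query: str) -> str:
    return f"policy text matching '{query}'"   # stand-in for a real retrieval system

def draft_reply(context: str) -> str:
    return f"draft reply based on: {context}"  # stand-in for an LLM call

TOOLS = {"lookup_policy": lookup_policy, "draft_reply": draft_reply}

def plan_next_step(task):
    """Toy planner: in a real agentic system an LLM decides which tool to call."""
    if not task.history:
        return ("lookup_policy", task.goal)
    if len(task.history) == 1:
        return ("draft_reply", task.history[-1])
    return None  # task complete

def run_agent(goal: str) -> list:
    task = Task(goal)
    while (step := plan_next_step(task)) is not None:
        tool, arg = step
        task.history.append(TOOLS[tool](arg))  # run the tool, feed the result forward
    return task.history

print(run_agent("citizen query about housing benefits"))
```

The point of the pattern is that the loop, not a single model call, owns the task end to end, which is what distinguishes agentic systems from traditional chatbots.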

To implement these systems safely, the UK must address risks like data leaks, hallucinations, and compliance failures. Moore emphasises that future governance must shift from overseeing individual models to managing entire AI systems, built on transparency, security, and performance oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Springer machine learning book faces fake citation scandal

A Springer Nature book on machine learning has come under scrutiny after researchers discovered that many of its citations were fabricated or erroneous.

A review of 18 citations in Mastering Machine Learning: From Basics to Advanced revealed that two-thirds either referenced nonexistent papers or misattributed authorship and publication sources.

Several academics whose names were included in the book confirmed they did not write the cited material, while others noted inaccuracies in where their actual work was supposedly published. One researcher was alerted by Google Scholar to multiple fake citations under his name.

Govindakumar Madhavan, the author, has not confirmed whether AI tools were used in producing the content, though his book discusses ethical concerns around AI-generated text.

Springer Nature has acknowledged the issue and is investigating whether the book breached its AI use policies, which require authors to declare AI involvement beyond basic editing.

The incident has reignited concerns about publishers’ quality control, with critics pointing to the increasing misuse of large language models in academic texts. As AI tools become more advanced, ensuring the integrity of published research remains a growing challenge for both authors and editors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Student builds AI app to help farmers tackle crop issues

A student is developing an AI-powered app designed to help farmers detect and address crop problems. Soj Gamayon, a communications technology management student at Ateneo de Manila University, was inspired by his family’s farming struggles and his experiences abroad to build AgriConnect PH.

The app uses smart sensors to monitor conditions such as water levels, moisture, and pests, then sends the data to the cloud where it is analysed by AI. Farmers receive real-time alerts with a colour-coded system indicating the severity of risks, helping them respond before crops are damaged.
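
The article does not publish AgriConnect PH’s internals, but the colour-coded alerting it describes can be sketched in a few lines. The thresholds, field names, and severity rules below are illustrative assumptions, not the app’s real logic.

```python
def classify_reading(moisture_pct: float, pest_index: float) -> str:
    """Map raw sensor values to a traffic-light severity level (thresholds invented)."""
    if moisture_pct < 15 or pest_index > 0.8:
        return "red"    # urgent: intervene before crops are damaged
    if moisture_pct < 30 or pest_index > 0.5:
        return "amber"  # elevated risk: monitor closely
    return "green"      # conditions within the normal range

def build_alert(farm_id: str, moisture_pct: float, pest_index: float) -> dict:
    severity = classify_reading(moisture_pct, pest_index)
    return {"farm": farm_id, "severity": severity,
            "action_required": severity != "green"}

print(build_alert("farm-042", moisture_pct=12.0, pest_index=0.3))
# {'farm': 'farm-042', 'severity': 'red', 'action_required': True}
```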

Gamayon aims to move farmers from reactive responses to proactive management. With updates available at least twice a day and instant alerts for urgent threats, the system offers timely intervention to reduce losses.

Currently supporting cereal crops like rice and corn, the app is set to expand to vegetables and livestock. While the technology is still in development, Gamayon believes AI can revolutionise agriculture and provide Filipino farmers with better tools for resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students with essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.
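
The study’s own analysis pipeline is not reproduced in the article, but alpha and beta activity of this kind is conventionally quantified as band power taken from the EEG’s power spectrum. The sketch below, using a synthetic signal and scipy’s standard Welch estimator, shows the general idea; it is not the researchers’ code.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                # 10 seconds of signal
eeg = (np.sin(2 * np.pi * 10 * t)           # 10 Hz alpha component
       + 0.5 * np.sin(2 * np.pi * 20 * t)   # 20 Hz beta component
       + 0.1 * np.random.randn(t.size))     # measurement noise

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # power spectral density

def band_power(lo: float, hi: float) -> float:
    """Integrate the PSD over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

print(f"alpha power (8-12 Hz): {band_power(8, 12):.4f}")
print(f"beta power (12-30 Hz): {band_power(12, 30):.4f}")
```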

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.

Participants who became accustomed to using AI found it more challenging to write without it later on. The reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties defining human intelligence—creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MAI-DxO: Microsoft’s new AI diagnoses complex medical cases with 85% accuracy

Microsoft has introduced a new AI-powered diagnostic tool capable of tackling complex medical cases that often baffle expert clinicians. Called MAI-DxO (Microsoft AI Diagnostic Orchestrator), the system has been developed by Microsoft’s AI health unit, founded by DeepMind co-founder Mustafa Suleyman.

When tested on complex real-world cases published in the New England Journal of Medicine, the AI tool correctly diagnosed 85.5% of them. For comparison, experienced doctors solved only 20% of the same cases without external help.

The tool uses five virtual AI agents, each simulating a medical expert with unique roles, such as choosing tests or proposing hypotheses. The approach, dubbed the ‘chain of debate’, allows for step-by-step reasoning in arriving at diagnoses.
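
Microsoft has not released MAI-DxO’s implementation, but the panel-of-agents pattern the article describes can be sketched as role-specialised calls over a shared case record. The roles and the ask() stub below are illustrative assumptions, not the actual system.

```python
ROLES = ["hypothesis generator", "test selector", "cost checker",
         "devil's advocate", "final diagnostician"]   # invented role names

def ask(role: str, notes: list) -> str:
    """Stand-in for an LLM call; a real system would prompt a model here."""
    return f"[{role}] assessment after {len(notes)} prior notes"

def chain_of_debate(case: str, rounds: int = 2) -> list:
    notes = [case]                          # shared case record all agents can see
    for _ in range(rounds):
        for role in ROLES:
            notes.append(ask(role, notes))  # each agent builds on the record so far
    return notes

for note in chain_of_debate("45-year-old with fever and joint pain", rounds=1):
    print(note)
```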

Microsoft trained MAI-DxO using 304 case studies and large language models from leading AI companies, including OpenAI, Google, Meta, and xAI. The AI panel mimics a real-world diagnostic team while delivering significantly faster and more accurate outcomes.

Despite the promising results, Microsoft acknowledges that more validation and regulatory clarity are needed before such tools can be used in clinical practice. The company is currently working with health organisations to test the system further.

The aim is not to replace doctors but to ease their workload by offering a reliable assistant for the most challenging cases. Microsoft says MAI-DxO could represent a significant step toward what it calls ‘medical superintelligence’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta launches AI superintelligence lab to compete with rivals

Meta has launched a new division called Meta Superintelligence Labs to accelerate its AI ambitions and close the gap with rivals such as OpenAI and Google.

The lab will be led by Alexandr Wang, former CEO of Scale AI, following Meta’s $14.3 billion investment in the data-labelling company. Former GitHub CEO Nat Friedman and SSI co-founder Daniel Gross will also hold key roles in the initiative.

Mark Zuckerberg announced the new effort in an internal memo, stating that Meta is now focused on developing superintelligent AI systems capable of matching or even outperforming humans. He described this as the beginning of a new era and reaffirmed Meta’s commitment to leading the field.

The lab’s mission is to push AI to a point where it can solve complex tasks more effectively than current models.

To meet these goals, Meta has been aggressively recruiting AI researchers from top competitors. Reports suggest that OpenAI employees have been offered signing bonuses as high as $100 million to join Meta.

New hires include talent from Anthropic and Google, although Meta has reportedly avoided deeper recruitment from Anthropic due to concerns over culture fit.

Meta’s move comes in response to the lukewarm reception of its Llama 4 model and mounting pressure from more advanced AI products released by competitors.

The company hopes that by combining high-level leadership, fresh talent and massive investment, its new lab can deliver breakthrough results and reposition Meta as a serious contender in the race for AGI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI rock band’s Spotify rise fuels calls for transparency

A mysterious indie rock band called The Velvet Sundown has shot to popularity on Spotify and may be powered by AI. Their debut track, Dust on the Wind, has racked up over 380,000 plays since 20 June and helped attract more than 470,000 monthly listeners.

The song bears a resemblance to the 1977 Kansas hit Dust in the Wind, prompting suspicion from Reddit users. The band’s profile picture and Instagram photos appear AI-generated, while the band members listed — such as ‘Milo Rains’ and ‘Rio Del Mar’ — have no online trace.

Despite the clues, Spotify does not label the group as AI-generated. Their songs are appearing in curated playlists like Discover Weekly. Only Deezer, a French streaming service, has identified The Velvet Sundown as likely created by generative AI models like Suno or Udio.

Deezer began tagging AI music in June and now detects over 20,000 entirely artificial tracks each day. Another AI band, The Devil Inside, has also gained traction. Their song Bones in the River has over 1.6 million plays on Spotify, but lacks credited creators.

On Deezer, the same track is labelled as AI-generated and linked to Hungarian musician László Tamási — a rare human credit for bot-made music. While Deezer takes a transparent approach, Spotify, Apple Music, and Amazon Music have not announced detection systems or labelling plans.

Deezer CEO Alexis Lanternier said AI is ‘not inherently good or bad,’ but called for transparency to protect artist rights and user trust. Legal battles are already underway. US record labels have sued Suno and Udio for mass copyright infringement, though the companies argue it falls under fair use.

As AI-generated music continues to rise, platforms face increasing pressure to inform users and draw more precise lines between human and machine-made art.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!