Google has upgraded its Deep Research tool with the experimental Gemini 2.5 Pro model, promising major improvements in how users access and process complex information.
Deep Research acts as an AI research assistant capable of scanning hundreds of websites, evaluating content, and producing multi-page reports complete with citations and even podcast-style summaries.
Deep Research was previously powered by Gemini 2.0 Flash; the new iteration significantly enhances its reasoning, planning, and reporting capabilities. Human evaluators in Google’s testing preferred Deep Research’s outputs over those generated by OpenAI’s equivalent by a ratio greater than 2 to 1.
Users also noted clearer analytical thinking and better synthesis of information across sources.
The Gemini 2.5 Pro upgrade is available now to Gemini Advanced subscribers across web, Android, and iOS platforms.
For those using the free version, the Gemini 2.0 Flash model remains accessible in over 150 countries, continuing Google’s push to offer powerful research tools to a wide user base.
For more information on these topics, visit diplomacy.edu.
Nearly 20 years after his AI career scare, screenwriter Ed Bennett-Coles and songwriter Jamie Hartman have developed ARK, a blockchain app designed to safeguard creative work from AI exploitation.
The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.
ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.
The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.
Launching summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.
Bennett-Coles compares AI-generated content to online meat delivery: efficient but soulless. Human artistry, by contrast, resembles a grandfather’s trip to the butcher, where the experience matters as much as the result.
The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.
Microsoft is testing a major upgrade to its Copilot AI that can view your entire screen instead of just working within the Edge browser.
The new Copilot Vision feature helps users navigate apps like Photoshop and Minecraft by analysing what’s on display and offering step-by-step guidance, even highlighting specific tools instead of just giving verbal instructions.
The feature operates more like a shared Teams screen than Microsoft’s controversial Recall snapshot system.
Currently limited to US beta testers, Copilot Vision will eventually highlight interface elements directly on users’ screens. It works on standard Windows PCs instead of requiring specialised Copilot+ hardware, with mobile versions coming to iOS and Android.
Alongside visual assistance, Microsoft is adding document search capabilities. Copilot can now find information within files like Word documents and PDFs instead of just searching by filename.
Both updates will roll out fully in the coming weeks, potentially transforming how users interact with both apps and documents on their Windows devices.
Amazon has unveiled Nova Sonic, a new AI model designed to process and generate human-like speech, positioning it as a rival to the top voice assistants from OpenAI and Google. The company claims it outperforms competitors in speed, accuracy, and cost, and it is reportedly 80% cheaper than GPT-4o.
Already powering Alexa+, Nova Sonic excels in real-time conversation, handling interruptions and noisy environments better than legacy AI assistants.
Unlike older voice models, Nova Sonic can dynamically route requests, fetching live data or triggering external actions when needed. Amazon says it achieves a 4.2% word error rate across multiple languages and responds in just 1.09 seconds, faster than OpenAI’s GPT-4o.
Developers can access it via Bedrock, Amazon’s AI platform, using a new streaming API.
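Access from Bedrock follows the platform’s usual invocation pattern. The sketch below shows the general shape of a streamed Bedrock call with boto3; the model ID and request schema here are illustrative assumptions, not Nova Sonic’s documented contract, which uses a richer bidirectional audio-streaming interface.

```python
# Sketch: invoking a Bedrock-hosted model via boto3's response-streaming API.
# The model ID and JSON request schema are illustrative assumptions only;
# consult the Bedrock model documentation for the real Nova Sonic contract.
import json


def build_request(prompt: str) -> str:
    """Serialise an illustrative request body for a Bedrock invocation."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 256},
    })


def stream_model(prompt: str, model_id: str = "amazon.nova-sonic-v1:0"):
    """Yield decoded chunks from a streamed invocation (requires AWS credentials)."""
    import boto3  # imported lazily so the helper above stays testable offline

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model_with_response_stream(
        modelId=model_id,
        body=build_request(prompt),
    )
    # The response body is an event stream; each event carries a bytes chunk.
    for event in response["body"]:
        yield json.loads(event["chunk"]["bytes"])
```

The streaming call returns partial results as they are generated, which is what makes sub-second conversational latency of the kind Amazon cites feasible.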
The launch signals Amazon’s push into artificial general intelligence (AGI): AI intended to match human capabilities across a broad range of tasks.
Rohit Prasad, head of Amazon’s AGI division, hinted at future models handling images, video, and sensory data. This follows last week’s preview of Nova Act, an AI for browser tasks, suggesting Amazon is accelerating its AI rollout beyond Alexa.
AI-driven food company Starday has secured $11 million in Series A funding to support the development and retail expansion of its innovative food brands.
The round was led by Slow Ventures and Equal Ventures, with an additional $3 million credit facility from Silicon Valley Bank. Starday’s total funding now stands at $20 million.
Founded by Chaz Flexman, Lena Kwak, and Lily Burtis, Starday uses AI to identify market gaps and quickly create new food products that cater to evolving consumer preferences.
Its latest offerings, including allergen-free snacks like Habeya Sweet Potato Crackers and All Day chickpea protein crunch, are already available in major United States grocery chains such as Kroger and Hannaford.
With plans to launch 14 new products across its four brands, the company is aiming to redefine the pace and precision of food innovation.
CEO Flexman says the funding will help Starday partner with more retailers and food brands to fill gaps in the market, accelerating the launch of targeted products in fast-growing categories. Backers believe Starday’s data-led model gives it a structural edge in a traditionally slow-moving industry.
A new San Francisco-based startup, Deep Cogito, has unveiled its first family of AI models, Cogito 1, which can switch between fast-response and deep-reasoning modes instead of being limited to just one approach.
These hybrid models combine the efficiency of standard AI with the step-by-step problem-solving abilities seen in advanced systems like OpenAI’s o1. While reasoning models excel in fields like maths and physics, they often require more computing power, a trade-off Deep Cogito aims to balance.
The Cogito 1 series, built on Meta’s Llama and Alibaba’s Qwen models instead of starting from scratch, ranges from 3 billion to 70 billion parameters, with larger versions planned.
Early tests suggest the top-tier Cogito 70B outperforms rivals like DeepSeek’s reasoning model and Meta’s Llama 4 Scout in some tasks. The models are available for download or through cloud APIs, offering flexibility for developers.
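Since the models are open for download, the mode switch is driven from the prompt side rather than by picking a different model. The sketch below illustrates that pattern; the Hugging Face repo ID and the exact system instruction are assumptions for illustration, so check the Cogito model card for the documented toggle.

```python
# Sketch: toggling a hybrid model between fast-response and deep-reasoning
# modes via the chat prompt. The system instruction and repo ID below are
# illustrative assumptions, not Deep Cogito's documented interface.
def build_messages(prompt: str, reasoning: bool = False) -> list[dict]:
    """Build a chat message list, optionally enabling reasoning mode."""
    messages = []
    if reasoning:
        # Hypothetical instruction that switches the model into
        # step-by-step reasoning mode.
        messages.append({"role": "system",
                         "content": "Enable deep thinking subroutine."})
    messages.append({"role": "user", "content": prompt})
    return messages


# With Hugging Face transformers (downloads weights; large variants need a GPU):
# from transformers import pipeline
# pipe = pipeline("text-generation",
#                 model="deepcogito/cogito-v1-preview-llama-3B")  # assumed repo ID
# out = pipe(build_messages("What is 17 * 24?", reasoning=True))
```

The design choice matters for cost: callers pay the extra compute of reasoning mode only on the requests that need it, rather than running a separate, heavier model for everything.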
Founded in June 2024 by ex-Google DeepMind product manager Dhruv Malhotra and former Google engineer Drishan Arora, Deep Cogito is backed by investors like South Park Commons.
The company’s ambitious goal is to develop ‘general superintelligence,’ AI that surpasses human capabilities, rather than merely matching them. For now, the team says they’ve only scratched the surface of their scaling potential.
Google DeepMind is enforcing strict non-compete agreements in the United Kingdom, preventing employees from joining rival AI companies for up to a year. The length of the restriction depends on an employee’s seniority and involvement in key projects.
Some DeepMind staff, including those working on Google’s Gemini AI, are reportedly being paid not to work while their non-competes run. The policy comes as competition for AI talent intensifies worldwide.
Employees have voiced concern that these agreements could stall their careers in a rapidly evolving industry. Some are seeking ways around the restrictions, such as moving to countries with less rigid employment laws.
While DeepMind claims the contracts are standard for sensitive work, critics say they may stifle innovation and mobility. The practice remains legal in the UK, even though similar agreements have been banned in the US.
IBM has announced the launch of its most advanced mainframe yet, the z17, powered by the new Telum II processor. Designed to handle more AI operations, the system delivers up to 50% more daily inference tasks than its predecessor.
The z17 features a second-generation on-chip AI accelerator and introduces new tools for managing and securing enterprise data. A Spyre Accelerator add-on, expected later this year, will enable generative AI features such as large language models.
More than 100 clients contributed to the development of the z17, which also supports a forthcoming operating system, z/OS 3.2. The OS update is set to enable hybrid cloud data processing and enhanced NoSQL support.
IBM says the z17 brings AI to the core of enterprise infrastructure, enabling organisations to tap into large data sets securely and efficiently, with strong performance across both traditional and AI workloads.
Standard Chartered has appointed David Hardoon as its global head of AI enablement, further embedding AI across its operations.
Based in Singapore, he will report to group chief data officer Mohammed Rahim.
Hardoon will lead AI governance and identify areas where AI can enhance productivity, efficiency, and client experiences. His appointment follows the bank’s recent rollout of a generative AI tool to over 70,000 employees across 41 markets.
The bank has been steadily introducing AI-driven tools, including a smart video column to provide insights for clients in Asia. It plans further expansion of its internal AI systems across additional regions.
With more than 20 years of experience in data and AI, including with Singapore’s central bank, Hardoon is expected to guide the responsible and strategic use of AI technologies across Standard Chartered’s global footprint.
The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.
The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.
While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.
In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.
OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.
Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.