New Google tool helps users rethink their career paths

Google has introduced Career Dreamer, a new AI-powered tool designed to help users discover career possibilities based on their skills, education, and interests. Announced in a blog post, the experiment aims to offer personalised job exploration without the need for multiple searches across different platforms.

The tool creates a ‘career identity statement’ by analysing users’ past and present roles, education, and experiences, which can be used to refine CVs or guide interview discussions. Career Dreamer also provides a visual representation of potential career paths and allows users to collaborate with Gemini, Google’s AI assistant, to draft cover letters or explore further job ideas.

Unlike traditional job search platforms such as LinkedIn or Indeed, Career Dreamer does not link users to actual job postings. Instead, it serves as an exploratory tool to help individuals, whether students, career changers, or military veterans, identify roles that align with their backgrounds. Currently, the experiment is available only in the United States, with no confirmation on future expansion.

For more information on these topics, visit diplomacy.edu.

New AI feature from Superhuman tackles inbox clutter

Superhuman has introduced a new AI-powered feature called Auto Label, designed to automatically categorise emails into groups such as marketing, pitches, social updates, and news. Users can also create custom labels with personalised prompts and even choose to auto-archive certain categories, reducing inbox clutter.

The company developed the tool in response to customer complaints about an increasing number of unwanted marketing and cold emails. While Gmail and Outlook offer spam filtering, Superhuman’s CEO, Rahul Vohra, said their new system aims to provide more precise classification. However, at launch, users cannot edit prompts for existing labels, meaning they must create new ones if adjustments are needed.

Superhuman is also enhancing its reminder system. The app will now automatically surface emails if a response is overdue and can draft AI-generated follow-ups in the user’s writing style. Looking ahead, the company plans to integrate personal knowledge bases, automate replies, and introduce workflow automation, further streamlining email management.

AI’s rapid rise sparks innovation and concern

AI has transformed everyday life, powering everything from social media recommendations to medical breakthroughs. As major tech companies and governments compete to lead in AI development, concerns about ethics, bias, and environmental impact are growing.

AI systems, while capable of learning and processing vast amounts of data, lack human reasoning and empathy. Generative AI, which creates text, images, and music, has raised questions about misinformation, copyright issues, and job displacement.

AI’s influence is particularly evident in the workplace, education, and creative industries. Some experts fear it could worsen financial inequality, with automation threatening millions of jobs.

Writers, musicians, and artists have criticised AI developers for using their work without consent. Meanwhile, AI-generated misinformation has caused controversy, with major companies halting or revising their AI features after errors.

The technology also presents security risks, with deepfakes and algorithmic biases prompting urgent discussions about regulation.

Governments worldwide are introducing policies to manage AI’s risks while encouraging innovation. The European Union has imposed strict controls on AI in sensitive sectors with the AI Act, while China enforces rules ensuring compliance with censorship laws.

The United Kingdom and the United States have formed AI Safety Institutes to evaluate risks, though concerns remain over AI’s environmental impact. The rise of large data centres, which consume vast amounts of energy and water, has sparked debates about sustainability.

Despite these challenges, AI continues to advance, shaping the future in ways that are still unfolding.

India faces AI challenge as global race accelerates

China’s DeepSeek has shaken the AI industry by dramatically reducing the cost of developing generative AI models. While global players like OpenAI and Microsoft see potential in India, the country still lacks its own foundational AI model.

The Indian government aims to change this within 10 months by supplying high-end chips to startups and researchers, but experts warn that structural issues in education, research, and policy could hold back progress.

Despite being a major hub for AI talent, India lags behind the United States and China in research, patents, and funding. State-backed AI investments are significantly smaller than those in the two superpowers, and limited private investment further slows progress.

The outsourcing industry, which dominates India’s tech sector, has traditionally focused on services rather than developing AI innovations, leaving startups to bridge the gap.

Some industry leaders believe India can still make rapid advancements by leveraging open-source AI platforms like DeepSeek. However, long-term success will require building a strong research ecosystem, boosting semiconductor production, and securing strategic autonomy in AI.

Without these efforts, experts caution that India may struggle to compete on the global AI stage in the coming years.

Lawyers warned about AI misuse in court filings

Warnings about AI misuse have intensified after lawyers from Morgan & Morgan faced potential sanctions for using fake case citations in a lawsuit against Walmart.

The firm’s urgent email to over 1,000 attorneys highlighted the dangers of relying on AI tools, which can fabricate legal precedents and jeopardise professional credibility. A lawyer in the Walmart case admitted to unintentionally including AI-generated errors in court filings.

Courts have seen a rise in similar incidents, with at least seven cases in recent years involving disciplinary action against lawyers who cited false AI-generated information. Prominent examples include fines and mandatory training for lawyers in Texas and New York who cited fictitious cases in legal disputes.

Legal experts warn that while AI tools can speed up legal work, they require rigorous oversight to avoid costly mistakes.

Ethics rules demand lawyers verify all case filings, regardless of AI involvement. Generative AI, such as ChatGPT, creates risks by producing fabricated data confidently, sometimes referred to as ‘hallucinations’. Experts point to a lack of AI literacy in the legal profession as the root cause, not the technology itself.

Advances in AI continue to reshape the legal landscape, with many firms adopting the technology for research and drafting. However, mistakes caused by unchecked AI use underscore the importance of understanding its limitations.

Acknowledging this issue, law schools and organisations are urging lawyers to approach AI cautiously to maintain professional standards.

Meta announces LlamaCon as it accelerates AI push

Meta has unveiled plans to host its first-ever developer conference dedicated to generative AI, called LlamaCon. Scheduled for April 29, the event will focus on Meta’s open-source AI efforts, particularly its Llama models.

The company aims to share updates that will help developers build new AI-powered applications. Additional details are expected in the coming weeks, with Meta’s broader annual conference, Meta Connect, set for September.

The company has positioned itself as a leader in open-source AI, boasting hundreds of millions of downloads of its Llama models. Major firms, including Goldman Sachs, AT&T, and Accenture, are among those integrating Llama into their services.

However, reports suggest that Meta has been caught off guard by the rapid rise of Chinese AI company DeepSeek, whose latest models may challenge Llama’s dominance. Meta has reportedly launched internal efforts to study DeepSeek’s approach to efficiency and cost reduction.

With a planned $80 billion investment in AI this year, Meta is pushing ahead with new Llama models that could include reasoning, multimodal, and autonomous capabilities. CEO Mark Zuckerberg has expressed confidence in Llama’s potential to become the most widely used AI model.

However, Meta is also facing legal and regulatory challenges, including lawsuits over alleged copyright violations and privacy concerns in the European Union that have delayed some AI launches.

AI sizing tools aim to reduce fashion returns

Online fashion retailers are increasingly using artificial intelligence to tackle the costly issue of clothing returns, as up to 30% of purchases are sent back due to sizing problems. A study by McKinsey estimates that each return costs between $21 and $46, significantly affecting profit margins. Many customers order multiple sizes and return those that don’t fit, creating logistical headaches for retailers.

To address this, companies are adopting AI-driven sizing tools. French start-up Fringuant, for instance, uses an algorithm that analyses a shopper’s height, weight, and a quick selfie to predict the best size. Zalando, a German e-commerce giant, has also implemented its own AI-powered tool that guides customers by comparing their body shape with garment dimensions. These technologies are already helping some brands reduce return rates significantly.

Beyond sizing, AI is also improving warehouse operations to prevent shipping mistakes. Smart cameras on order pickers’ trolleys at logistics firms help ensure the right product is selected, while AI-equipped robots track stock levels, reducing errors that lead to returns. As online shopping continues to grow, retailers hope these innovations will streamline processes and boost efficiency.

EU delays AI liability directive due to stalled negotiations

The European Commission has removed the AI Liability Directive from its 2025 work programme due to stalled negotiations, though lawmakers in the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) have voted to continue working on the proposal. A spokesperson confirmed that IMCO coordinators will push to keep the directive on the political agenda, despite the Commission’s plans to withdraw it. The Legal Affairs Committee has yet to make a decision on the matter.

The AI Liability Directive, proposed in 2022 alongside the EU’s AI Act, aimed to address the potential risks AI systems pose to society. While some lawmakers, such as German MEP Axel Voss, criticised the Commission’s move as a ‘strategic mistake,’ others, like Andreas Schwab, called for more time to assess the impact of the AI Act before introducing separate liability rules.

The proposal’s withdrawal has sparked mixed reactions within the European Parliament. Some lawmakers, like Marc Angel and Kim van Sparrentak, emphasised the need for harmonised liability rules to ensure fairness and accountability, while others expressed concern that such rules might not be needed until the AI Act is fully operational. Consumer groups welcomed the proposed legislation, while tech industry representatives argued that liability issues were already addressed under the revamped Product Liability Directive.

Google Meet update brings smarter AI-powered notes

Google Meet’s AI-driven note-taking feature is getting a major upgrade with the ability to generate action items from meeting transcripts. The update, powered by Google’s Gemini AI, will automatically identify key tasks, assign deadlines, and designate responsible individuals at the end of each meeting.

Originally launched in August 2024, the AI transcription tool already provides accurate speaker separation and structured summaries in Google Docs. With this latest enhancement, the technology aims to improve productivity by ensuring that key takeaways are actionable and well-organised.

The feature begins rolling out today but at a slower pace than usual, as Google closely monitors its performance and quality. While AI-generated notes can be a helpful time-saver, some users may still prefer manual control over meeting documentation, especially when handling sensitive information.

Mira Murati launches AI startup Thinking Machines Lab

Former OpenAI chief technology officer Mira Murati has launched a new AI startup called Thinking Machines Lab, backed by a team of around 30 researchers and engineers from companies such as OpenAI, Meta, and Mistral. The startup aims to create AI systems that encode human values and address a wider range of applications than existing rivals, according to a blog post from the company.

Murati’s new venture demonstrates her ability to attract top talent, with two-thirds of the team made up of former OpenAI employees. Among them are Barret Zoph, a well-known researcher who joined Murati in leaving OpenAI in September, and John Schulman, an OpenAI co-founder and now the startup’s chief scientist. Schulman previously left OpenAI for Anthropic to focus on AI alignment, a key goal of Thinking Machines Lab.

The company’s approach differentiates itself by combining research and product teams in the design process. Thinking Machines Lab plans to contribute to AI alignment research by sharing code, datasets, and model specifications. Murati, now CEO of the startup, has previously played a major role in developing ChatGPT, and her exit from OpenAI reflects a broader trend of high-profile departures amid changes at the company.
