TikTok lays off staff in trust and safety restructuring

TikTok is reportedly laying off staff from its trust and safety unit, which is responsible for content moderation, as part of a restructuring effort. The layoffs began on Thursday, affecting teams in Asia, Europe, the Middle East, and Africa. Adam Presser, TikTok’s operations head, sent a memo to staff informing them of the decision, though the company has not yet commented on the move.

The layoffs come at a time when TikTok’s future is uncertain. The app, used by nearly half of all Americans, suffered a brief outage last month, just as a law came into effect in January requiring its Chinese owner ByteDance to either sell TikTok or face a national security-related ban. TikTok CEO Shou Chew had previously testified before Congress about the company’s trust and safety measures, pledging to invest more than $2 billion in these efforts.

In line with a shift towards AI-driven content moderation, TikTok had already made significant layoffs in October, including staff in Malaysia. The company currently employs 40,000 trust and safety professionals globally, but the full scope of the recent cuts remains unclear.

For more information on these topics, visit diplomacy.edu.

Google unveils virtual AI collaborator for scientists

Google has introduced an AI tool designed to act as a virtual collaborator for biomedical researchers. Tested by researchers at Stanford University and Imperial College London, the tool helps scientists analyse large volumes of literature and generate new hypotheses. It uses advanced reasoning to streamline research processes and assist in problem-solving.

DeepMind, Google’s AI unit, has prioritised science in its innovations. The unit’s leader, Demis Hassabis, recently shared a Nobel Prize in Chemistry for AI-based protein structure prediction. In an experiment addressing liver fibrosis, the AI tool proposed promising solutions, showing potential to improve on expert-generated approaches over time.

The system is not intended to replace scientists but to enhance their work. Google stated that the tool could accelerate scientific advancements, offering new possibilities for tackling complex challenges. Researchers involved in the project highlighted its role in fostering collaboration, rather than diminishing it.

Experts see this development as part of a growing trend in using AI across various industries. Successes like ChatGPT have demonstrated AI’s ability to support tasks ranging from customer service to legal research.

Gemini AI now requires separate app on iOS

Google has removed its AI assistant, Gemini, from the main Google app on iOS, encouraging users to download the standalone Gemini app instead. The change, announced via an email to customers, is seen as a strategic move to position Gemini as a direct competitor to AI chatbots like ChatGPT and Claude.

The dedicated Gemini app allows users to interact with the AI assistant through voice and text, integrate it with Google services like Search and YouTube, and access advanced features such as AI-generated summaries and image creation. Those who attempt to use Gemini in the main Google app will now see a message directing them to the App Store.

While the shift may enable Google to roll out new AI features more efficiently, it also risks reducing Gemini’s reach, as some users may not be inclined to download a separate app. The company is also promoting its Google One AI Premium plan through the Gemini app, offering access to its more advanced capabilities.

New Google tool helps users rethink their career paths

Google has introduced Career Dreamer, a new AI-powered tool designed to help users discover career possibilities based on their skills, education, and interests. Announced in a blog post, the experiment aims to offer personalised job exploration without the need for multiple searches across different platforms.

The tool creates a ‘career identity statement’ by analysing users’ past and present roles, education, and experiences, which can be used to refine CVs or guide interview discussions. Career Dreamer also provides a visual representation of potential career paths and allows users to collaborate with Gemini, Google’s AI assistant, to draft cover letters or explore further job ideas.

Unlike traditional job search platforms such as LinkedIn or Indeed, Career Dreamer does not link users to actual job postings. Instead, it serves as an exploratory tool to help individuals, whether students, career changers, or military veterans, identify roles that align with their backgrounds. Currently, the experiment is available only in the United States, with no confirmation on future expansion.

iPhone 16e features Apple-designed C1 subsystem

Apple has introduced its first custom-designed modem chip, marking a significant step towards reducing its reliance on Qualcomm. The new chip, part of Apple’s C1 subsystem, debuts in the $599 iPhone 16e and will eventually be integrated across other products.

The C1 subsystem includes advanced components like processors and memory, offering better battery life and enhanced artificial intelligence features.

Apple has ensured the modem is globally compatible, testing it with 180 carriers in 55 countries. Executives highlight its ability to prioritise network traffic for smoother performance, setting it apart from competitors.

Modem development is highly complex, with few companies achieving global compatibility. Apple previously relied on Qualcomm but resolved to design its own platform after legal disputes and challenges with alternative suppliers.

The C1 subsystem represents Apple’s strategy to tightly integrate modem technology with its processors for long-term product differentiation.

Apple’s senior hardware executives described the C1 as their most complex creation, combining cutting-edge chipmaking techniques. The new platform underscores Apple’s focus on control and innovation in core technologies.

New AI feature from Superhuman tackles inbox clutter

Superhuman has introduced a new AI-powered feature called Auto Label, designed to automatically categorise emails into groups such as marketing, pitches, social updates, and news. Users can also create custom labels with personalised prompts and even choose to auto-archive certain categories, reducing inbox clutter.
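Superhuman has not described how Auto Label works under the hood, but the general idea of named categories with optional auto-archiving can be sketched with a toy keyword classifier, where the keyword lists stand in for the user’s personalised prompts (all label names, keywords, and function names below are illustrative assumptions, not Superhuman’s actual system or API):

```python
# Hypothetical sketch of prompt-style email labelling; keyword lists stand in
# for the AI prompts, and none of this reflects Superhuman's implementation.

LABEL_RULES = {
    "marketing": ["sale", "discount", "unsubscribe"],
    "pitches": ["partnership", "demo", "intro call"],
    "news": ["newsletter", "digest", "headlines"],
}

AUTO_ARCHIVE = {"marketing"}  # categories the user chose to auto-archive

def label_email(subject: str, body: str) -> str:
    """Return the first matching label, or 'other' if nothing matches."""
    text = f"{subject} {body}".lower()
    for label, keywords in LABEL_RULES.items():
        if any(kw in text for kw in keywords):
            return label
    return "other"

def triage(subject: str, body: str) -> tuple[str, bool]:
    """Label an email and decide whether it should be auto-archived."""
    label = label_email(subject, body)
    return label, label in AUTO_ARCHIVE

print(triage("Huge summer sale!", "50% discount this week only"))
# ('marketing', True)
```

A real system would replace the keyword lookup with a model call driven by each label’s prompt, but the surrounding flow, classify first, then apply per-category actions such as auto-archiving, would look much the same.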

The company developed the tool in response to customer complaints about a growing volume of unwanted marketing and cold emails. While Gmail and Outlook offer spam filtering, Superhuman’s CEO, Rahul Vohra, said the new system aims to provide more precise classification. At launch, however, users cannot edit the prompts for existing labels and must create new ones if adjustments are needed.

Superhuman is also enhancing its reminder system. The app will now automatically surface emails if a response is overdue and can draft AI-generated follow-ups in the user’s writing style. Looking ahead, the company plans to integrate personal knowledge bases, automate replies, and introduce workflow automation, making email management even more seamless.

AI’s rapid rise sparks innovation and concern

AI has transformed everyday life, powering everything from social media recommendations to medical breakthroughs. As major tech companies and governments compete to lead in AI development, concerns about ethics, bias, and environmental impact are growing.

AI systems, while capable of learning and processing vast amounts of data, lack human reasoning and empathy. Generative AI, which creates text, images, and music, has raised questions about misinformation, copyright issues, and job displacement.

AI’s influence is particularly evident in the workplace, education, and creative industries. Some experts fear it could worsen financial inequality, with automation threatening millions of jobs.

Writers, musicians, and artists have criticised AI developers for using their work without consent. Meanwhile, AI-generated misinformation has caused controversy, with major companies halting or revising their AI features after errors.

The technology also presents security risks, with deepfakes and algorithmic biases prompting urgent discussions about regulation.

Governments worldwide are introducing policies to manage AI’s risks while encouraging innovation. The European Union has imposed strict controls on AI in sensitive sectors with the AI Act, while China enforces rules ensuring compliance with censorship laws.

The United Kingdom and the United States have formed AI Safety Institutes to evaluate risks, though concerns remain over AI’s environmental impact. The rise of large data centres, which consume vast amounts of energy and water, has sparked debates about sustainability.

Despite these challenges, AI continues to advance, shaping the future in ways that are still unfolding.

New app replaces paper hospital passports for better accessibility

A new app designed by patients is replacing paper hospital passports to make hospital visits more convenient. Currently in use at Derriford Hospital in Plymouth, UK, the app stores key medical details, including allergies, medications, phobias, and emergency contacts, allowing staff to access critical information quickly.
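The app’s actual schema is not public, but a digital passport record of this kind can be sketched as a simple data structure (the class and field names below are illustrative assumptions, not the Plymouth app’s design):

```python
# Hypothetical sketch of a digital hospital-passport record; field names are
# illustrative, not the Derriford Hospital app's actual schema.
from dataclasses import dataclass, field

@dataclass
class HospitalPassport:
    patient_name: str
    allergies: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    phobias: list[str] = field(default_factory=list)
    emergency_contacts: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """One-line view for staff to scan quickly on arrival."""
        return (f"{self.patient_name}"
                f" | allergies: {', '.join(self.allergies) or 'none'}"
                f" | phobias: {', '.join(self.phobias) or 'none'}")

p = HospitalPassport("Jessica", allergies=["penicillin"], phobias=["needles"])
print(p.summary())
# Jessica | allergies: penicillin | phobias: needles
```

Storing such records centrally, rather than only on the patient’s phone, is what lets staff retrieve them even when the patient arrives without the device.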

Jessica, who helped develop the app, highlighted its ease of use, saying it eliminates the need to carry a booklet and makes sharing information with medical staff much simpler.

With nearly 700 users already, there are plans to expand the app to other hospitals in south-west England, and NHS England has expressed interest in its wider rollout.

Consultant Saoirse Read noted that digitalisation ensures staff can still access patient details even if a patient’s phone is left at home. The app has been particularly beneficial for neurodivergent patients, helping staff tailor care to individual needs.

By understanding factors such as pain responses and phobias, hospital teams can create personalised care plans, making the experience less stressful for patients.

India faces AI challenge as global race accelerates

China’s DeepSeek has shaken the AI industry by dramatically reducing the cost of developing generative AI models. While global players like OpenAI and Microsoft see potential in India, the country still lacks its own foundational AI model.

The Indian government aims to change this within 10 months by supplying high-end chips to startups and researchers, but experts warn that structural issues in education, research, and policy could hold back progress.

Despite being a major hub for AI talent, India lags behind the United States and China in research, patents, and funding. State-backed AI investments are significantly smaller than those in the two superpowers, and limited private investment further slows progress.

The outsourcing industry, which dominates India’s tech sector, has traditionally focused on services rather than developing AI innovations, leaving startups to bridge the gap.

Some industry leaders believe India can still make rapid advancements by leveraging open-source AI platforms like DeepSeek. However, long-term success will require building a strong research ecosystem, boosting semiconductor production, and securing strategic autonomy in AI.

Without these efforts, experts caution that India may struggle to compete on the global AI stage in the coming years.

Lawyers warned about AI misuse in court filings

Warnings about AI misuse have intensified after lawyers from Morgan & Morgan faced potential sanctions for using fake case citations in a lawsuit against Walmart.

The firm’s urgent email to over 1,000 attorneys highlighted the dangers of relying on AI tools, which can fabricate legal precedents and jeopardise professional credibility. A lawyer in the Walmart case admitted to unintentionally including AI-generated errors in court filings.

Courts have seen a rise in similar incidents, with at least seven cases involving disciplinary actions against lawyers using false AI-generated information in recent years. Prominent examples include fines and mandatory training for lawyers in Texas and New York who cited fictitious cases in legal disputes.

Legal experts warn that while AI tools can speed up legal work, they require rigorous oversight to avoid costly mistakes.

Ethics rules demand lawyers verify all case filings, regardless of AI involvement. Generative AI, such as ChatGPT, creates risks by producing fabricated data confidently, sometimes referred to as ‘hallucinations’. Experts point to a lack of AI literacy in the legal profession as the root cause, not the technology itself.

Advances in AI continue to reshape the legal landscape, with many firms adopting the technology for research and drafting. However, mistakes caused by unchecked AI use underscore the importance of understanding its limitations.

Acknowledging this issue, law schools and organisations are urging lawyers to approach AI cautiously to maintain professional standards.
