UCLA is breaking new ground with an AI-developed comparative literature course set to launch in winter 2025. The class, covering literature from the Middle Ages to the 17th century, will feature a textbook, assignments, and teaching assistant (TA) resources generated by Kudu, an AI-powered platform founded by UCLA physics professor Alexander Kusenko. This initiative marks the first use of AI-generated materials in UCLA’s humanities division.
Professor Zrinka Stahuljak, who designed the course, collaborated with Kudu by providing lecture notes, PowerPoint slides, and videos from previous classes. The AI system produced the materials within three to four months and required just 20 hours of the professor’s time. Kudu’s platform lets students ask questions about the course content and answers them strictly from the provided material, keeping responses focused and accurate.
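Kudu has not published its implementation, but answering only from supplied material follows a familiar grounding pattern: retrieve the most relevant passage from the course corpus and decline questions it cannot support. The Python sketch below is purely illustrative, with hypothetical course snippets and a crude word-overlap relevance score standing in for whatever retrieval Kudu actually uses.

```python
# Illustrative sketch only: Kudu's internals are not public.
# Idea: answer a student question strictly from supplied course material,
# refusing anything the material does not cover.
import re

COURSE_MATERIAL = [
    "Lecture 3: Dante structures the afterlife of the Divine Comedy in three realms.",
    "Lecture 7: The printing press spread vernacular literature across Europe.",
]

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question: str, threshold: float = 0.4) -> str:
    q_words = tokenize(question)
    # Score each passage by the fraction of question words it contains.
    scored = [(len(q_words & tokenize(p)) / len(q_words), p) for p in COURSE_MATERIAL]
    score, passage = max(scored)
    if score < threshold:
        return "That question falls outside the provided course material."
    # A real system would hand `passage` to a language model as the only allowed context.
    return f"From the course material: {passage}"

print(answer("How does Dante structure the afterlife?"))
```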
By streamlining material creation, the approach frees up professors and TAs to engage more closely with students while maintaining consistency in course delivery. UCLA hopes this innovative method will enhance the learning experience and redefine education in the humanities.
The International Committee of the Red Cross (ICRC) has introduced principles for using AI in its operations, aiming to harness the technology’s benefits while protecting vulnerable populations. The guidelines, unveiled in late November, reflect the organisation’s cautious approach amid growing interest in generative AI, such as ChatGPT, across various sectors.
ICRC delegate Philippe Stoll emphasised the importance of ensuring AI tools are robust and reliable to avoid unintended harm in high-stakes humanitarian contexts. The ICRC defines AI broadly as systems that perform tasks requiring human-like cognition and reasoning, extending beyond popular large language models.
Guided by its core principles of humanity, neutrality, and independence, the ICRC prioritises data protection and insists that AI tools address real needs rather than being solutions in search of problems. That caution reflects the risks of deploying technologies in regions poorly represented in AI training data, as well as hard lessons such as a 2022 cyberattack that exposed sensitive beneficiary information.
Collaboration with academia is central to the ICRC’s strategy. Partnerships like the Meditron project with Switzerland’s EPFL focus on AI for clinical decision-making and logistics. These initiatives aim to improve supply chain management and enhance field operations while aligning with the organisation’s principles.
Despite interest in AI’s potential, Stoll cautioned against using off-the-shelf tools unsuited to specific local challenges, underscoring the need for adaptability and responsible innovation in humanitarian work.
Jack Ma, co-founder of Alibaba, made a rare public appearance on Sunday, expressing optimism about the future of Ant Group, the fintech affiliate he also helped establish. Speaking at Ant’s 20th-anniversary celebration, Ma highlighted the transformative potential of AI, stating that the changes driven by AI in the next two decades will surpass current expectations. His remarks, reported by Chinese media outlet 36kr, marked a notable return to the spotlight following his retreat from public life amid regulatory challenges.
Reflecting on Ant Group’s turbulent journey, Ma acknowledged the value of criticism and encouragement in fostering the company’s growth. Ant, the operator of China’s leading mobile payment app Alipay, faced a regulatory crackdown after Ma’s public critique of Chinese regulators in 2020. This led to the cancellation of Ant’s $300 billion IPO, followed by a stringent overhaul of its operations to align with financial regulations. The reforms included Ma relinquishing control of the company in 2023.
Despite these challenges, Ant is charting a path forward, underscored by a leadership transition announced Sunday: President Cyril Han will succeed Eric Jing as CEO on March 1, 2025. Ma’s renewed confidence in Ant’s potential, especially in the AI era, signals a fresh chapter for the fintech giant as it emerges from years of regulatory scrutiny.
X, owned by Elon Musk, is now offering its AI chatbot, Grok, for free. Users can send up to 10 prompts and generate up to 10 images every two hours without subscribing. However, certain features, such as analysing more than three images per day, still require a paid subscription.
Grok was previously available only to X Premium subscribers for $8 monthly or $84 annually; its shift to a freemium model brings it in line with AI offerings like OpenAI’s ChatGPT. The move follows recent trials of the free version in countries such as New Zealand.
The freemium move coincides with a significant milestone for Grok’s developer, xAI, which recently raised $6 billion, bringing its total funding to $12 billion. With its updated accessibility, Grok aims to broaden its appeal while remaining competitive in the evolving AI market.
The US government has authorised the export of advanced AI chips to a Microsoft-operated facility in the United Arab Emirates. This approval comes as part of Microsoft’s $1.5 billion partnership with Emirati AI firm G42, where the US tech giant holds a minority stake and a board seat. G42 uses Microsoft’s cloud services to support its AI applications.
Concerns arose over the potential transfer of US AI technology to China, prompting scrutiny from lawmakers, who sought clarity on G42’s connections to Chinese authorities before permitting the deal to proceed. The export licence imposes strict compliance measures, barring access to the UAE facility by individuals or organisations from nations under US arms embargoes, including China.
AI-related national security risks, such as the facilitation of weapons development, remain a key issue for US officials. The Biden administration has implemented regulations requiring major AI developers to share system details with the government. G42 has publicly stated its commitment to aligning with international standards in collaboration with US partners and the UAE government.
Ownership ties also add complexity, with G42 partly owned by Abu Dhabi’s sovereign wealth fund and chaired by Sheikh Tahnoon bin Zayed Al Nahyan, the UAE’s national security advisor. The deal underscores a delicate balancing act as Washington navigates strategic and economic interests in the AI sector.
AI startup Perplexity has expanded its publisher partnerships, adding media outlets such as the Los Angeles Times and The Independent. These new partners will benefit from a program that shares ad revenue when their content is referenced on the platform. The initiative also provides publishers with access to Perplexity’s API and analytics tools, enabling them to track content performance and trends.
The program, launched in July, has attracted notable partners from Japan, Spain, and Latin America, including Prisa Media and Newspicks. Existing collaborators include TIME, Der Spiegel, and Fortune. Perplexity highlighted the importance of diverse media representation, stating that the partnerships enhance the accuracy and depth of its AI-powered responses.
Backed by Amazon founder Jeff Bezos and Nvidia, Perplexity aims to challenge Google’s dominance in the search engine market. The company has also begun testing advertising on its platform, seeking to monetise its AI search capabilities.
Perplexity’s growth has not been without challenges. It faces lawsuits from News Corp-owned publishers, including Dow Jones and the New York Post, over alleged copyright violations. The New York Times has also issued a cease-and-desist notice, demanding that its content be removed from Perplexity’s generative AI tools.
Cohere, a Canadian AI startup valued at $5.5 billion, is shifting its focus to developing customised AI models for businesses. Co-founder Nick Frosst explained that enterprise users prefer models tailored to specific use cases rather than larger, general-purpose ones. The company aims to refine its approach by prioritising model deployment and customisation over simply increasing model sizes.
Although Cohere will continue building foundation models, it plans to invest in training techniques that improve functionality. The startup has secured over $900 million in funding from major investors like Nvidia, Cisco, and Inovia Capital. Unlike some competitors, Cohere positions itself as an independent player, working with clients such as Oracle and Fujitsu to design models for their unique requirements.
The AI industry, once focused on scaling up models, now faces diminishing returns from increasing model size. As large language model advancements plateau, Cohere’s customised approach offers a more efficient and cost-effective solution. Frosst highlighted that this strategy aligns with the company’s enterprise-centric vision and avoids reliance on speculative breakthroughs in artificial general intelligence.
By concentrating on tailored AI solutions, Cohere aims to enhance real-world applications for its enterprise clients. This strategy positions the startup as a competitive alternative to larger AI labs such as OpenAI and Anthropic.
Google’s newest AI, the PaliGemma 2 model, has drawn attention for its ability to interpret emotions in images, a feature unveiled in a recent blog post. Unlike basic image recognition, PaliGemma 2 offers detailed captions and insights about people and scenes. However, its emotion detection capability has sparked heated debates about ethical implications and scientific validity.
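PaliGemma 2’s weights are published openly, so its captioning behaviour can be tried directly. The sketch below shows roughly how a descriptive caption might be requested through the Hugging Face transformers library; the checkpoint name, prompt format, and generation settings are assumptions based on the public release rather than details from Google’s announcement.

```python
# Rough sketch of captioning with a public PaliGemma 2 checkpoint via
# Hugging Face transformers; checkpoint id and prompt format are assumptions.
import torch
from PIL import Image
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma2-3b-pt-448"   # assumed public checkpoint
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

image = Image.open("street_scene.jpg")     # any local photo
prompt = "<image>caption en"               # PaliGemma-style captioning task
inputs = processor(text=prompt, images=image, return_tensors="pt")

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=40)

# Drop the prompt tokens so only the generated caption is decoded.
caption_ids = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(caption_ids, skip_special_tokens=True))
```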
Critics argue that emotion recognition is fundamentally flawed, relying on outdated psychological theories and subjective visual cues that fail to account for cultural and individual differences. Studies have shown that such systems often exhibit biases, with one report highlighting how similar models assign negative emotions more frequently to certain racial groups. Google says it performed extensive testing on PaliGemma 2 for demographic biases, but details of these evaluations remain sparse.
Experts also worry about the risks of releasing this AI technology to the public, citing potential misuse in areas like law enforcement, hiring, and border control. While Google emphasises its commitment to responsible innovation, critics like Oxford’s Sandra Wachter caution that without robust safeguards, tools like PaliGemma 2 could reinforce harmful stereotypes and discriminatory practices. The debate underscores the need for a careful balance between technological advancement and ethical responsibility.
A to-do list app, Twos, is rethinking productivity with AI-driven features that go beyond simple task tracking. Instead of just helping users organise tasks, Twos offers actionable suggestions to help complete them. For instance, writing ‘Buy paper napkins’ prompts the app to suggest links to online stores like Amazon or Walmart. Planning a birthday? Twos might remind you to add a calendar event, send a message, or purchase a gift card.
Launched in 2021 by former Google engineer Parker Klein and Joe Steilberg, Twos integrates with 27 apps, including Spotify, Uber Eats, Google Maps, and Ticketmaster. While the app currently leans on US-centric services, plans for better localisation aim to broaden its appeal. Available across Android, iOS, and the web, Twos is free, with optional premium features like custom sorting and templates priced at $2 each.
Beyond task suggestions, Twos introduced an AI assistant for list creation last year, positioning itself in the growing market of AI-powered productivity tools. The app now boasts over 25,000 active users and emphasises intuitive, energy-efficient design. While other apps like Hypelist compete in this space, Twos’ holistic approach could redefine how we manage daily tasks.
Microsoft has introduced Copilot Vision, an AI-powered feature available in a limited US preview for users of Microsoft Edge. This experimental tool, part of the Copilot Labs program, can read web pages to answer user queries, summarise and translate content, and even assist with tasks like finding discounts or offering gaming tips. For example, it can provide recipes from a cooking site or strategic advice during an online chess game.
To address privacy concerns, Microsoft emphasises that Copilot Vision deletes all processed data at the end of each session and does not store information for model training. The feature is initially restricted to a pre-approved list of popular websites, excluding sensitive or paywalled content, though Microsoft plans to expand compatibility over time.
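Microsoft has not described how the pre-approved site list is enforced, but conceptually it amounts to a simple allowlist check before any page content is read. The Python sketch below is purely illustrative, with hypothetical domains; it is not Microsoft’s implementation.

```python
# Illustrative only: a host-based allowlist check of the kind that could gate
# page analysis. Domains are hypothetical, not Microsoft's actual list.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"example-recipes.com", "example-chess.org"}

def may_analyse(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Allow approved domains and their subdomains; everything else is skipped.
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

print(may_analyse("https://www.example-recipes.com/pasta"))   # True
print(may_analyse("https://news.example-paywall.com/story"))  # False
```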
Microsoft’s cautious rollout reflects ongoing efforts to balance innovation with publisher concerns over AI’s use of web data. The company is collaborating with third-party publishers to ensure the tool benefits users without compromising website content or functionality.