Researchers from the Netherlands have developed an AI platform capable of recognising sarcasm. The project was presented at a meeting of the Acoustical Society of America and the Canadian Acoustical Association in Ottawa. Using video clips and text from American sitcoms such as ‘Friends’ and ‘The Big Bang Theory,’ the researchers trained a neural network with the Multimodal Sarcasm Detection Dataset (MUStARD), previously annotated by another research team from the US and Singapore.
After being trained on this data, the AI model successfully detected sarcasm in unlabelled exchanges about 75% of the time. Further work with synthetic data has reportedly improved this accuracy, though those findings are yet to be published. Scenes from ‘The Big Bang Theory’ and ‘Friends’ that exemplify sarcastic delivery featured prominently in the training material.
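For readers curious how such a system might be wired together, below is a minimal, hypothetical sketch of the general multimodal recipe: text features and audio-style features are fused into one vector and fed to a binary classifier. The data, feature values, and model choice are invented for illustration; none of this reflects the Groningen team’s actual architecture.

```python
# A toy multimodal sarcasm classifier: NOT the researchers' model.
# MUStARD pairs real sitcom video, audio and transcripts; everything
# below is fabricated to show the general fusion idea only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

utterances = [
    "Oh great, another meeting. Just what I needed.",
    "Thanks for the help, I really appreciate it.",
    "Wow, what a fantastic idea. Genius.",
    "The experiment finished ahead of schedule.",
]
labels = [1, 0, 1, 0]  # 1 = sarcastic, 0 = sincere (toy annotations)

# Text modality: simple TF-IDF features from the transcript.
text_feats = TfidfVectorizer().fit_transform(utterances).toarray()

# Stand-ins for prosodic cues (e.g. pitch variation, speaking rate)
# that a real system would extract from the audio track.
audio_feats = np.array([[0.9, 0.2], [0.3, 0.6], [0.8, 0.1], [0.2, 0.7]])

# Late fusion: concatenate the two modalities into one feature vector.
X = np.hstack([text_feats, audio_feats])
clf = LogisticRegression().fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the toy data
```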
The University of Groningen team now plans to extend the system with visual cues such as facial expressions to refine its ability to detect sarcasm. The work could significantly improve AI assistants’ interactions by enabling them to recognise negative or hostile tones in human speech.
Why does it matter?
Sarcasm generally takes the form of an ironic remark, often rooted in humour, that is intended to mock or satirise something. A sarcastic speaker says something different from what they actually mean, for example, ‘Oh, great’ in response to bad news, which is why such nuances in speech are hard for a large language model to detect.
The project aligns with similar research initiatives, such as one by the US Department of Defense’s DARPA, which developed an AI model for detecting sarcasm in text. Progress on these projects underscores the importance of understanding sarcasm in human communication and could support the development of more nuanced and compelling AI systems.
Meta Platforms, the parent company of Facebook, announced that it will discontinue its Workplace app, a platform geared towards work-related communications, as it shifts focus towards developing AI and metaverse technologies. According to a statement from the company, customers will be able to use Workplace until the end of August 2025, after which the service will wind down ahead of a full shutdown in June 2026, while Meta will continue to use the tool internally as a message board.
A Meta spokesperson said the company is discontinuing Workplace to focus on building AI and metaverse technologies that it believes will fundamentally reshape the way people work. Over the next two years, Workplace customers will have the option to transition to Zoom’s Workvivo product, which Meta has designated as its preferred migration partner. Workplace launched in 2016 as an offering for businesses, with features such as multi-company groups and shared spaces to facilitate collaboration among employees from different organisations.
Why does it matter?
The discontinuation of Workplace aligns with Meta’s strategic emphasis on advancing AI and metaverse technologies, which it views as integral to the future of digital communication. The change of business direction has raised concerns about escalating costs that could weigh on the company’s growth trajectory. Despite the discontinuation, Meta has assured customers that billing and payment arrangements will remain unchanged until August of this year. Workplace currently offers a core plan priced at $4 per user per month, with add-ons available from $2 per user per month; monthly bills are calculated from the number of billable users unless a fixed plan is in place.
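To make the pricing arithmetic concrete, here is a toy calculation using the figures quoted above. The function itself is hypothetical and is not Meta’s actual billing logic.

```python
# Per-user billing sketch based on the quoted prices; illustrative only.
CORE_PLAN = 4.00    # USD per user per month (Workplace core plan)
ADDON_MIN = 2.00    # cheapest add-on, USD per user per month

def monthly_bill(billable_users: int, addons_per_user: float = 0.0) -> float:
    """Bill scales with billable users unless a fixed plan is in place."""
    return billable_users * (CORE_PLAN + addons_per_user)

print(monthly_bill(250))             # core plan only: 1000.0
print(monthly_bill(250, ADDON_MIN))  # with the cheapest add-on: 1500.0
```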
TikTok is experimenting with an enhanced search feature, utilising generative AI to provide more comprehensive search results. Dubbed ‘search highlights’, this feature showcases AI-generated snippets at the top of certain search result pages, offering users a glimpse into the full response by clicking through. Initial tests reveal AI-generated responses for queries ranging from recipes to product recommendations, such as ‘best laptops 2024’.
Powered by ChatGPT, TikTok’s AI search results aim to surface relevant content based on user queries. The feature has limits, however: not every search yields an AI-generated answer, and only some result pages display responses explicitly attributed to ChatGPT, while others fall under the broader ‘search highlights’ label.
The new AI-backed search tool reflects TikTok’s ongoing effort to enhance its in-app search functionality in line with users’ habits, particularly among younger demographics. Recognising TikTok’s role as a search destination for recommendations and information, the platform previously experimented with integrating Google Search results and direct links to external websites like Wikipedia and IMDb. By surfacing AI-driven results prominently, TikTok aims to cater to users’ preferences and further solidify its position as a go-to platform for discovering diverse content.
Anthropic, an AI startup backed by tech giants Google and Amazon.com, announced the European release of its generative AI chatbot, Claude, set for Tuesday. The new AI tool places Claude in direct competition with Microsoft-backed OpenAI’s ChatGPT, renowned for its record-breaking 100 million monthly active users just two months post-launch.
While Claude has been accessible online in several countries, this marks its debut on the web and iPhones across Europe, including non-EU nations like Switzerland and Iceland. Additionally, European businesses can opt for the ‘Claude Team’ plan at €28 ($30.21) per month before value-added tax.
Anthropic, founded by former OpenAI executives and siblings Dario and Daniela Amodei, recently entered the corporate AI fray by launching a business-oriented version of Claude. That offering potentially pits Anthropic against its own backers, Amazon and Google, as they vie for a share of burgeoning AI business spending.
Dario Amodei, Anthropic’s CEO and co-founder, emphasised Claude’s widespread utility, saying that millions of people worldwide are already using Claude to do things like accelerate scientific processes, enhance customer service or refine their writing. With its European rollout, Anthropic anticipates further innovation and adoption across diverse sectors and businesses.
AI is poised to drastically reshape the global labour market, according to International Monetary Fund Managing Director Kristalina Georgieva. She likened its impact to a ‘tsunami’, projecting that 60% of jobs in advanced economies and 40% worldwide will be affected within the next two years. Georgieva emphasised the urgency of preparing individuals and businesses for this imminent transformation, speaking at an event organised by the Swiss Institute of International Studies in Zurich.
While AI adoption promises significant gains in productivity, Georgieva warned against potential downsides, including the proliferation of misinformation and the exacerbation of societal inequality. She highlighted the recent vulnerabilities of the world economy, citing shocks like the 2020 global pandemic and the ongoing conflict in Ukraine. Despite these challenges, she noted a resilience in the global economy, with no imminent signs of a widespread recession.
Addressing concerns about inflation, Swiss National Bank Chairman Thomas Jordan emphasised progress in Switzerland’s inflation management. With inflation reaching 1.4% in April, remaining within the SNB’s target range for the 11th consecutive month, Jordan expressed optimism about maintaining price stability in the coming years. However, he acknowledged lingering uncertainties surrounding future economic trends.
In a groundbreaking demonstration of technological advancement, two US Air Force fighter jets recently engaged in a dogfight over California, one piloted by a human and the other by AI. The AI-piloted jet, named Vista, showcased the Air Force’s strides in AI, a technology whose roots date back to the 1950s and which continues to evolve.
The US is in a race with China to maintain superiority in AI and its integration into weapon systems, raising concerns about the potential for future wars fought primarily by machines. Despite assurances from officials that direct human intervention will always be required on the US side, questions linger about adversaries’ intentions and the need for rapid deployment of US capabilities.
AI’s military history traces back to the 1960s and 1970s, when systems like the Navy’s Aegis missile defence were developed using if/then rule sets for autonomous decision-making. A major advance came in 2012, however, when computers gained the ability to analyse large volumes of data and generate their own rule sets, a milestone dubbed AI’s ‘big bang.’
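The contrast between those two eras can be sketched in a few lines. The toy example below is purely illustrative and has nothing to do with the Aegis system’s real logic: the first function encodes fixed, hand-written if/then rules, while the second trains a small decision tree that derives its own thresholds from labelled examples.

```python
# Purely illustrative contrast between the two eras described above.
# The data, thresholds and feature names are all invented.
from sklearn.tree import DecisionTreeClassifier

# 1) Hand-written if/then rules (the 1960s-70s approach): every condition
#    and threshold is fixed in advance by an engineer.
def rule_based_threat(speed_mps: float, closing: bool) -> bool:
    return closing and speed_mps > 300  # engineer-chosen threshold

# 2) Learned rules (the post-2012 approach): the system derives its own
#    decision boundaries from labelled examples.
X = [[350, 1], [250, 1], [400, 0], [100, 0], [500, 1], [200, 0]]  # [speed, closing]
y = [1, 0, 0, 0, 1, 0]                                            # 1 = threat

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(rule_based_threat(320, True))  # True: matches the hand-written rule
print(model.predict([[320, 1]]))     # the tree applies thresholds it learned itself
```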
Why does it matter?
Numerous AI projects are underway across the Pentagon, including enhancing communication between pilots and air operations centres and developing AI-based navigation systems independent of GPS satellites. Safety remains a top priority as AI learns and adapts, with extensive precautions taken to ensure the accuracy and reliability of AI-driven systems. Despite challenges, AI technology promises to revolutionise military operations, offering enhanced capabilities and strategic advantages in future conflicts.
The US and China are set to meet in Geneva on Tuesday to discuss advanced AI, with US officials underscoring that Washington’s policies would not be open for negotiation, despite exploring ways to address risks associated with the technology. President Joe Biden’s administration aims to engage China on various fronts to minimise miscommunication between the two countries, with AI being a focal point. Earlier discussions between US Secretary of State Antony Blinken and China’s Foreign Minister Wang Yi in Beijing laid the groundwork for these formal bilateral talks on AI.
Highlighting concerns over China’s rapid deployment of AI across multiple sectors, including civilian, military, and national security, US officials stress the need for direct communication to address security implications for the US and its allies. However, they clarified that talks with Beijing do not involve promoting technical collaboration or negotiating technology protection policies.
Despite competing interests in shaping AI rules, both the US and China hope to explore areas where mutual agreements can enhance global safety. Tarun Chhabra from the US National Security Council and Seth Center from the State Department will lead the discussions with Chinese officials, focusing on critical AI risks. Meanwhile, US Senate Majority Leader Chuck Schumer intends to issue recommendations on addressing AI risks in the coming weeks, emphasising the need for proactive legislation to navigate the competitive landscape with China and regulate AI advancements effectively.
Microsoft Corp. is set to embark on a €4 billion ($4.3 billion) venture to build cloud and AI infrastructure in France, its latest major commitment to AI technology. The American tech giant aims to train one million people and support 2,500 startups in the country by 2027, according to an official announcement. Earlier this year, Microsoft unveiled a strategic partnership and a €15 million investment in Mistral AI, a Paris-based startup competing with OpenAI in the AI domain.
France has emerged as a focal point for AI development, drawing substantial national funds and support from local magnates for initiatives like Mistral and Kyutai, a newly formed AI research nonprofit. Microsoft’s announcement aligns with President Emmanuel Macron’s ‘Choose France’ summit, which seeks to entice foreign enterprises and position France as a financial epicentre within the EU. Concurrently, another American tech titan, Amazon.com Inc., has pledged €1.2 billion towards infrastructure and computing, with 56 projects slated for announcement during the event, according to the Élysée.
Microsoft is investing significantly in its Azure cloud platform and associated AI tools. Following a €3.2 billion investment in Germany in February, the company injected $1.5 billion into the Abu Dhabi-based AI firm G42 in April. However, Microsoft’s endeavours in the cloud and AI sectors have attracted intensified antitrust scrutiny, particularly concerning its investments exceeding $10 billion in OpenAI.
OpenAI has sparked debate by considering whether to allow users to generate explicit content, including pornography, with its AI-powered tools like ChatGPT and DALL-E. While OpenAI maintains a ban on deepfakes, the proposal has raised concerns among campaigners who question the company’s commitment to producing ‘safe and beneficial’ AI. The company sees potential in ‘not-safe-for-work’ (NSFW) content creation but stresses the importance of responsible usage and adherence to legal and ethical standards.
The proposal, outlined in a document discussing OpenAI’s AI development practices, aims to initiate discussions about the boundaries of content generation within its products. Joanne Jang, an OpenAI employee, stressed the need for maximum user control while ruling out deepfake creation. Despite acknowledging the importance of discussions around sexuality and nudity, OpenAI maintains strong safeguards against deepfakes and prioritises protecting users, particularly children.
Critics, however, have accused OpenAI of straying from its mission statement of developing safe and beneficial AI by delving into potentially harmful commercial endeavours like AI erotica. Concerns about the spread of AI-generated pornography have been underscored by recent incidents, prompting calls for tighter regulation and ethical considerations in the tech sector. While OpenAI’s policies prohibit sexually explicit content, questions remain about the effectiveness of safeguards and the company’s approach to handling sensitive content creation.
Why does it matter?
As discussions unfold, stakeholders, including lawmakers, experts, and campaigners, closely scrutinise OpenAI’s proposal and its potential implications for online safety and ethical AI development. With growing concerns about the misuse of AI technology, the debate surrounding OpenAI’s stance on explicit content generation highlights broader challenges in balancing innovation, responsibility, and societal well-being in the digital age.
A bipartisan group of lawmakers introduced a bill to strengthen the Biden administration’s ability to regulate the export of AI models, focusing on protecting US technology from potential misuse by foreign competitors. Sponsored by both Republicans and Democrats, the bill proposes granting the Commerce Department explicit authority to control AI exports deemed risky to national security and to prohibit collaboration between Americans and foreigners on such systems.
The bill’s sponsors argue that stronger legal oversight is needed to protect US AI technology from hostile exploitation. The chief concern is advanced AI models, which can process vast amounts of data and generate content that adversaries could exploit for cyberattacks or even the development of biological weapons.
While the Commerce Department and the White House have yet to comment on the bill, reports suggest the US is preparing export controls on proprietary AI models to counter threats posed by China and Russia. Current US law makes it difficult to regulate the export of open-source AI models, which are freely accessible. If approved, the bill would therefore streamline regulations, particularly around open-source AI, and grant the Commerce Department enhanced oversight of AI systems.
Why does it matter?
The introduction of this bill comes against the backdrop of intensifying global competition in AI development. China, for instance, relies heavily on open-source models such as Meta Platforms’ ‘Llama’ series, and recent revelations that Chinese AI firms are using these models have raised concerns about intellectual property and security risks. Furthermore, Microsoft’s significant investment in the UAE-based AI firm G42 has sparked debate over the implications of deepening ties between Gulf states and China, leading to security agreements between the US, the UAE, and Microsoft.