A new AI tool created by Google DeepMind, called the ‘Habermas Machine,’ could help reduce culture war divides by mediating between different viewpoints. The system takes individual opinions and generates group statements that reflect both majority and minority perspectives, aiming to foster greater agreement.
Developed by researchers including Professor Chris Summerfield from the University of Oxford, the AI system has been tested in the United Kingdom with more than 5,000 participants. The study found that AI-generated statements were often rated as clearer and higher in quality than those written by human mediators, and that they increased group agreement by eight percentage points on average.
The Habermas Machine was also used in a virtual citizens’ assembly on topics such as Brexit and universal childcare. It was able to produce group statements that acknowledged minority views without marginalising them, but the AI approach does have its critics.
Some researchers argue that AI-mediated discussions don’t always promote empathy or give smaller minorities enough influence in shaping the final statements. Despite these concerns, the potential for AI to assist in resolving social disagreements remains a promising development.
Elon Musk’s xAI has officially launched its API for Grok, the company’s generative AI model, though it’s currently in a limited form. The API, which provides access to a single model called ‘grok-beta,’ is priced at $5 per million input tokens and $15 per million output tokens. Tokens are the small chunks of text, typically words or word fragments, that language models read and generate; while the API mentions both Grok 2 and Grok mini, it remains unclear which model ‘grok-beta’ is based on. The API also supports function calling, which allows the AI to interact with external tools like databases, with potential plans to add image analysis capabilities.
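At those published rates, estimating the cost of a call is simple arithmetic. A minimal sketch (the helper name and the example token counts are illustrative, not taken from xAI’s documentation):

```python
def grok_beta_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one grok-beta API call, using the
    announced rates: $5 per million input tokens and $15 per million
    output tokens."""
    return (input_tokens / 1_000_000) * 5 + (output_tokens / 1_000_000) * 15

# A request consuming 10,000 input tokens and producing 2,000 output tokens:
print(f"${grok_beta_cost(10_000, 2_000):.2f}")  # prints $0.08
```

Because billing is per token rather than per request, long prompts and verbose completions dominate the bill, so trimming context is the most direct cost lever.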
Musk’s xAI was founded last year and has made waves with Grok, known for its edgy and provocative responses, in contrast to more conservative AI models like ChatGPT. Available to X Premium+ users at $16 per month, Grok has already been integrated into the X platform, generating images through Flux and summarising news, although often inaccurately. There are plans to expand Grok’s features to enhance search, post analytics, and other functions on X.
While xAI is racing to compete with AI giants like OpenAI and Anthropic, it has attracted significant financial backing, raising $6 billion from major investors, including Andreessen Horowitz and Sequoia Capital. Musk claims that data from X gives xAI an advantage, and recent privacy policy changes allow the company to train its models on user posts. There’s also a broader vision to leverage data from Musk’s other ventures, such as Tesla and SpaceX, to enhance AI technologies across these companies.
However, not everyone is on board with Musk’s ambitious plans. Tesla shareholders have filed lawsuits, arguing that xAI is siphoning resources and talent away from Tesla. The startup faces environmental concerns at its Memphis data centre, where unauthorised turbines have been linked to smog issues. xAI plans to upgrade the facility next year, pending regulatory approval from the Tennessee Valley Authority.
ByteDance, the parent company of TikTok, has dismissed an intern for what it described as “maliciously interfering” with the training of one of its AI models. The Chinese tech giant clarified that the intern worked in its advertising technology team and had no involvement with ByteDance’s AI Lab, and that some reports circulating on social media and other platforms have exaggerated the incident’s impact.
ByteDance stated that the interference did not disrupt its commercial operations or its large language AI models. It also denied claims that the damage exceeded $10 million or affected an AI training system powered by thousands of graphics processing units (GPUs). The company highlighted that the intern was fired in August, and it has since notified their university and relevant industry bodies.
As one of the leading tech firms in AI development, ByteDance operates popular platforms like TikTok and Douyin. The company continues to invest heavily in AI, with applications including its Doubao chatbot and a text-to-video tool named Jimeng.
Samsung is taking its commitment to security up a notch by expanding its blockchain technology to cover a wider range of AI-powered home appliances. The South Korean tech giant announced that its Knox Matrix framework, originally designed for mobile devices and televisions, will now protect home devices using a ‘Trust Chain.’ This private blockchain system enables connected devices to monitor each other for potential security issues, keeping users informed in case of any threats.
In addition to blockchain-based security, Samsung is introducing ‘Cross Platform’ technology, ensuring consistent protection across devices, regardless of the operating system. The company also aims to improve privacy with its ‘Credential Sync,’ which encrypts and synchronises user data for enhanced safety.
Samsung expects to roll out these new features next year, alongside biometric authentication that lets users log into apps with fingerprints instead of passwords. The move builds on the company’s previous blockchain ventures, including its Samsung Blockchain Wallet and Blockchain Keystore.
The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.
The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is ample support for their position that using AI isn’t plagiarism.
The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.
A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring people like Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to the models involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware of the misuse of their likenesses until journalists informed them.
In 2022, actors like Torres and Yeates were hired to participate in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, which they had not consented to. This caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.
Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers involved when likenesses are handed over to companies without adequate protections.
IBM unveiled its latest AI model, known as ‘Granite 3.0,’ on Monday, targeting businesses eager to adopt generative AI technology. The company aims to stand out from its competitors by offering these models as open-source, a different approach from firms like Microsoft, which charge clients for access to their AI models. IBM’s open-source strategy promotes accessibility and flexibility, allowing businesses to customise and integrate these models as needed.
Alongside the Granite 3.0 models, IBM provides a paid service called Watsonx, which assists companies in running these models within their data centres once they are customised. This service gives enterprises more control over their AI solutions, enabling them to tailor and optimise the models for their specific needs while maintaining privacy and data security within their infrastructure.
The Granite models are already available for commercial use through the Watsonx platform. In addition, select models from the Granite family will be accessible on Nvidia’s AI software stack, allowing businesses to incorporate these models using Nvidia’s advanced tools and resources. IBM collaborated closely with Nvidia, utilising its H100 GPUs, a leading technology in the AI chip market, to train these models. Dario Gil, IBM’s research director, highlighted that the partnership with Nvidia is central to delivering powerful and efficient AI solutions for enterprises looking to stay ahead in a rapidly evolving technological landscape.
Meta, the owner of Facebook, announced a partnership with Blumhouse Productions, known for hit horror films like ‘The Purge’ and ‘Get Out,’ to test its new generative AI video model, Movie Gen. This follows the recent launch of Movie Gen, which can produce realistic video and audio clips based on user prompts. Meta claims that this tool could compete with offerings from leading media generation startups like OpenAI and ElevenLabs.
Blumhouse has chosen filmmakers Aneesh Chaganty, The Spurlock Sisters, and Casey Affleck to experiment with Movie Gen, with Chaganty’s film set to appear on Meta’s Movie Gen website. In a statement, Blumhouse CEO Jason Blum emphasised the importance of involving artists in the development of new technologies, noting that innovative tools can enhance storytelling for directors.
This partnership highlights Meta’s aim to connect with the creative industries, which have expressed hesitance toward generative AI due to copyright and consent concerns. Several copyright holders have sued companies like Meta, alleging unauthorised use of their works to train AI systems. In response to these challenges, Meta has demonstrated a willingness to compensate content creators, recently securing agreements with actors such as Judi Dench, Kristen Bell, and John Cena for its Meta AI chatbot.
Meanwhile, Microsoft-backed OpenAI has been exploring potential partnerships with Hollywood executives for its video generation tool, Sora, though no deals have been finalised yet. In September, Lions Gate Entertainment announced a collaboration with another AI startup, Runway, underscoring the increasing interest in AI partnerships within the film industry.
Hiya, a US-based company specialising in fraud and spam detection, has introduced a new Chrome browser extension to identify AI-generated deepfake voices. The tool offers free access to anyone concerned about the growing risk of voice manipulation.
The Deepfake Voice Detector analyses video and audio streams, sampling as little as one second of audio to determine whether a voice is genuine or artificially generated. Hiya’s technology relies on AI algorithms it integrated following the acquisition of Loccus.ai in July.
With deepfakes becoming increasingly difficult to spot, the company aims to help users stay ahead of potential misuse. Hiya president Kush Parikh emphasised the importance of launching the tool ahead of the US elections in November to address the rising threat.
A survey of 2,000 individuals conducted by Hiya revealed that one in four people encountered audio deepfakes between April and July this year. Personal voice calls emerged as the primary risk factor (61%), followed by exposure on platforms like Facebook (22%) and YouTube (17%).
A1 Austria, Eurofiber, and Quantcom have joined forces to develop a high-speed dark-fibre network connecting Frankfurt and Vienna, marking a significant advancement in European telecommunications. Scheduled for completion in December 2025, this ambitious project aims to deliver an ultra-low-latency infrastructure essential for meeting the growing demands of modern telecommunications.
By collaborating, these three providers are not only bolstering their technical capabilities but are also ensuring that the network will support a wide array of critical applications, including cloud services, media broadcasting, AI, and machine learning (ML). Furthermore, the network’s low latency will significantly enhance connectivity for key industries across Europe, making it a vital asset for telecommunications companies, fixed network operators, and global enterprises.
Ultimately, this new fibre network is poised to serve as a critical backbone for the region’s digital ecosystem, facilitating seamless communication and data exchange. As a result, it is expected to have a substantial economic impact by connecting various industries and enabling high-performance connectivity, thereby acting as a catalyst for growth across multiple sectors.
Moreover, this initiative addresses the current demand for faster and more reliable data transfer and lays the groundwork for a more robust digital infrastructure in Europe, thereby fostering innovation and economic development in the years to come.