A new phenomenon in the digital world has taken the internet by storm: AI-generated cats like Chubby are captivating millions with their peculiar and often heart-wrenching stories. Videos featuring these virtual felines, crafted by AI, depict them in bizarre and tragic situations, garnering immense views and engagement on platforms like TikTok and YouTube. Chubby, a rotund ginger cat, has become particularly iconic, with videos of his misadventures, from shoplifting to being jailed, resonating deeply with audiences across the globe.
These AI-generated cat stories are not just popular; they are controversial, blurring the line between art and digital spam. Content creators are leveraging AI tools to produce these videos rapidly, feeding social media algorithms that favour such content, which often leads to virality. Despite criticisms of the quality and intent behind this AI-generated content, it is clear that these videos are striking a chord with viewers, many of whom find themselves unexpectedly moved by the fictional plights of these digital cats.
The surge in AI-generated cat videos raises questions about the future of online content and the role of AI in shaping what we consume. While some see it as a disturbing trend, others argue that it represents a new form of digital art, with creators like Charles, the mastermind behind Chubby, believing that AI can indeed produce compelling and emotionally resonant material. The popularity of these videos, particularly those with tragic endings, suggests that there is a significant demand for this type of content.
As AI continues to evolve and integrate further into social media, the debate over the value and impact of AI-generated content is likely to intensify. Whether these videos will remain a staple of internet culture or fade as a passing trend remains to be seen. For now, AI-generated cats like Chubby are at the forefront of a fascinating and complex intersection between technology, art, and human emotion.
In the world of video game development, the rise of AI has sparked concern among performers who fear it could threaten their jobs. Motion capture actors like Noshir Dalal, who perform the physical movements that bring game characters to life, worry that AI could be used to replicate their performances without their consent, potentially reducing job opportunities and diminishing the value of their work.
Dalal, who has played characters in popular video games such as ‘Star Wars Jedi: Survivor,’ highlights the physical toll and skill required in motion capture work. He argues that AI could allow studios to bypass hiring actors for new projects by reusing data from past performances. The concern is central to the ongoing strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents video game performers and other media professionals. The union is demanding stronger protections against unregulated AI use in the industry.
Why does this matter?
AI’s ability to generate new animations and voices based on existing data is at the heart of the issue. While studios argue that they have offered meaningful AI protections, performers remain sceptical. They worry that the use of AI could lead to ethical dilemmas, such as their likenesses being used in ways they do not endorse, as seen in the controversy surrounding game modifications that use AI to create inappropriate content.
Video game companies have offered wage increases and other benefits as negotiations continue, but the debate over AI protections remains unresolved. Dalal and other performers argue that, without strict controls, AI could strip away the artistry and individuality that actors bring to their roles, leaving them vulnerable to exploitation. The outcome of this dispute could set a precedent for how AI is regulated in the entertainment industry, impacting the future of video game development and beyond.
Nvidia Research has introduced a new generative AI model called StormCast that promises to significantly enhance the accuracy of short-range weather forecasting, particularly for extreme weather events. This advance could mark a broader shift in meteorology, providing more precise predictions that could save lives and protect property.
StormCast is the first AI model capable of simulating small-scale weather phenomena, such as thunderstorms and flash floods, with improved accuracy compared to existing models. It operates at the mesoscale level, allowing it to predict how storms will develop, intensify, and dissipate, offering an edge over traditional methods like the High-Resolution Rapid Refresh (HRRR) model used in the US.
Thanks to its generative AI capabilities, Nvidia’s model is faster and more efficient, producing detailed forecasts in minutes rather than hours. This speed makes it practical for ensemble forecasting, where multiple runs with slightly different data provide a more reliable prediction or highlight potential changes in weather patterns.
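The ensemble idea itself is simple: run the same model many times from slightly perturbed starting conditions, then summarise the spread of outcomes. The toy sketch below illustrates that principle only; the `toy_forecast` dynamics are invented for the example and bear no relation to StormCast’s actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_forecast(initial_state, steps=6):
    """Placeholder forecast: each step drifts the state slightly.
    A real model would run learned or physics-based dynamics."""
    state = initial_state
    for _ in range(steps):
        state = state * 1.02 + 0.1  # arbitrary toy dynamics
    return state

# Ensemble forecasting: run the same model many times from
# slightly perturbed initial conditions, then summarise.
base_state = 15.0  # e.g. a temperature reading in degrees C
members = [toy_forecast(base_state + rng.normal(0, 0.5)) for _ in range(50)]

mean = np.mean(members)   # central prediction
spread = np.std(members)  # uncertainty estimate
print(f"ensemble mean: {mean:.2f}, spread: {spread:.2f}")
```

The spread across members is what forecasters use to judge confidence: a tight cluster suggests a stable prediction, while divergent members flag uncertainty. Fast models make large ensembles affordable.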
While AI-driven models like StormCast are transforming weather prediction, experts caution against abandoning traditional physics-based models entirely. Nvidia’s approach involves integrating AI with established methods to ensure the reliability and accuracy of forecasts.
Nvidia is collaborating with The Weather Company and Colorado State University to test and refine StormCast, which has the potential for broader application in the future. As AI continues to evolve, the impact on local weather forecasting is expected to grow, offering new ways to predict and respond to weather hazards.
The growing demand for data centres, driven by the AI boom, is leading to a significant increase in water consumption, particularly for cooling the computing equipment. In Virginia, home to the world’s largest concentration of data centres, water usage surged by nearly two-thirds between 2019 and 2023, rising from 1.13 billion to 1.85 billion gallons.
The trend, mirrored globally, raises concerns about sustainability. Microsoft, a key player in the data centre industry, reported that 42% of the water it used in 2023 came from regions experiencing water stress. Google, which operates some of the largest data centres, revealed that 15% of its freshwater withdrawals occurred in areas with high water scarcity.
Although many data centres use closed-loop systems to recycle water, a significant portion is still lost due to the need for humidity control, especially in dry regions. Humidified air is essential to prevent static electricity, which can damage sensitive computer equipment.
The increasing water consumption by data centres underscores the environmental challenges posed by the rapid expansion of AI and digital infrastructure, prompting concerns about the sustainability of such practices.
Prince Harry, alongside his wife Meghan, emphasised the need for caution regarding artificial intelligence during their visit to Colombia. Speaking at a panel in Bogota, the Duke of Sussex expressed concerns about AI’s impact on society, highlighting the fear and uncertainty surrounding the technology. He also pointed to social media’s role in creating division, warning that misinformation is driving a wedge between people.
The couple arrived in Colombia at the invitation of Vice President Francia Marquez. During their visit, they engaged with students at a local school and enjoyed a traditional dance performance, showcasing their support for Colombian culture. Harry’s remarks on AI were part of a broader conversation about the challenges posed by new technologies and their influence on social dynamics.
Harry and Meghan, founders of the Archewell Foundation, are expected to continue their tour with a visit to Cali, where they will participate in the Petronio Alvarez festival, celebrating Afro-Colombian music and culture. Their visit reflects a commitment to addressing global issues such as cyber-bullying, online violence, and discrimination.
Vice President Marquez thanked the couple for their visit, acknowledging their efforts to forge connections and work on pressing global challenges. The royal couple’s engagement in Colombia underscores their ongoing dedication to social causes and global humanitarian efforts.
Donald Trump has shared AI-generated images on social media, showing Taylor Swift fans endorsing his presidential campaign. The images, which are clearly fake, have sparked controversy, particularly since Swift has not publicly supported any candidates in the 2024 US election.
Trump, however, embraced the images, responding with ‘I accept!’ on his platform. The posts were also shared by an account that reposts his content on X (formerly Twitter). Despite their obvious fabrication, the posts have drawn significant attention online.
(Donald J. Trump, Truth Social post, 18 August 2024)
Taylor Swift, who endorsed Joe Biden in the last election, has not commented on these fake images. Her history with AI-generated content has been fraught, including deepfake videos that once led to a temporary ban on her searches on X.
Swift’s potential legal actions against AI content providers remain a topic of interest. However, the source of these recent fake posts remains unknown, raising concerns about the use of AI in political propaganda.
Researchers at the University of Auckland’s Sports Performance Research Institute New Zealand have used machine learning to delve into athletic recovery. They tracked 43 endurance athletes, gathering extensive data on sleep, diet, heart-rate variability, and workout routines. The study revealed that while certain factors like sleep quality and muscle soreness broadly influence recovery, the most effective predictors vary from person to person.
For instance, sleep data might be a strong indicator for one athlete, while for another, protein intake and muscle soreness could be more relevant. A simpler model using just a few variables performed nearly as well as more complex ones, emphasising that not all factors are equally important for every athlete. However, the effectiveness of predictions significantly improved when tailored to individual data.
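The study’s finding that a small per-athlete model with a few variables can predict recovery is easy to illustrate. The sketch below is a minimal, hypothetical example: the data are synthetic, the feature names (sleep hours, soreness rating) and coefficients are invented for illustration, and the model is plain least squares rather than the researchers’ actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for one hypothetical athlete: 60 days of
# sleep hours and muscle-soreness ratings vs. a recovery score.
# (Illustrative only - not the study's actual data or model.)
sleep = rng.normal(7.5, 1.0, 60)
soreness = rng.uniform(1, 10, 60)
recovery = 5 + 0.8 * sleep - 0.4 * soreness + rng.normal(0, 0.5, 60)

# A simple per-athlete linear model with just two predictors,
# fitted by ordinary least squares.
X = np.column_stack([np.ones(60), sleep, soreness])
coef, *_ = np.linalg.lstsq(X, recovery, rcond=None)

predicted = X @ coef
r2 = 1 - np.sum((recovery - predicted) ** 2) / np.sum((recovery - recovery.mean()) ** 2)
print(f"sleep weight: {coef[1]:.2f}, soreness weight: {coef[2]:.2f}, R^2: {r2:.2f}")
```

Fitting one such model per athlete, each with its own handful of predictors, captures the study’s point: which variables matter, and how much, differs from person to person.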
The study also examined heart-rate variability (HRV) but found that predicting HRV changes based on controllable factors, like training load and diet, proved challenging. Although HRV is often used as a gauge for readiness to train, the researchers concluded that its predictive value might be limited.
Ultimately, the research underscores the importance of personalised recovery strategies. While broad patterns exist, the best approach to recovery seems to hinge on understanding the unique factors that impact each athlete individually.
Klaxon AI, a start-up based in Peterborough, has received £50,000 in funding from the UK’s innovation agency, Innovate UK, to develop a new tool that allows small businesses to create computer-generated podcast adverts. The new system, expected to launch in January, will enable companies to produce 30-second podcast ads in just a few minutes, providing a cost-effective alternative to traditional advertising methods.
Co-founder Arup Biswas expressed excitement over the funding, noting that the tool will be ‘incredibly cheap’ and accessible to small businesses that typically cannot afford podcast advertising. The system will allow users to input a few words about their business or provide specific text, with AI generating the audio advert.
The service will cost about £50 for businesses to download their ad, or they can opt to use it for free on Klaxon AI’s network of podcasts. The funding is part of a broader £30 million investment by Innovate UK in high-potential businesses within the creative sector.
SK Telecom and Nokia have announced a strategic partnership to implement AI-driven fibre sensing technology to enhance network reliability in South Korea. The collaboration, formalised through a memorandum of understanding, plans to roll out the innovative technology across SK Telecom’s national fixed network by the end of 2024.
The primary goal is to proactively monitor and detect environmental changes that could affect optical cables, addressing issues before they escalate into significant disruptions. The fibre sensing technology will utilise advanced AI and machine learning techniques to monitor various environmental factors, including earthquakes, climate fluctuations, and disruptions from nearby construction activities. By continuously analysing data from SK Telecom’s commercial networks, the system aims to identify potential threats to network stability early on.
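At its core, this kind of continuous monitoring is anomaly detection on a sensor stream. The sketch below is a deliberately simple stand-in, not SK Telecom’s or Nokia’s system: the signal is simulated, and the detector is a basic rolling z-score heuristic rather than the machine-learning models the partnership describes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated fibre-strain readings: mostly quiet background noise,
# with a sharp disturbance (e.g. nearby construction) injected.
# (Hypothetical signal - real systems analyse optical backscatter.)
signal = rng.normal(0.0, 1.0, 500)
signal[400:410] += 10.0  # injected disturbance

def rolling_anomalies(x, window=50, threshold=4.5):
    """Flag samples whose z-score against the trailing window
    exceeds the threshold - a simple early-warning heuristic."""
    flags = []
    for i in range(window, len(x)):
        hist = x[i - window:i]
        z = (x[i] - hist.mean()) / (hist.std() + 1e-9)
        if abs(z) > threshold:
            flags.append(i)
    return flags

alerts = rolling_anomalies(signal)
print(f"first alert at sample {alerts[0]}" if alerts else "no alerts")
```

A production system would replace the z-score with trained models and feed alerts into an operations workflow, but the principle is the same: learn what normal looks like, and flag departures from it before they become outages.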
The proactive approach is designed to minimise damage from line breaks and prevent service interruptions, ensuring uninterrupted connectivity for customers. The integration of these advanced technologies allows for real-time monitoring and analysis, which is crucial for maintaining the resilience of network infrastructure. Ryu Jeong-hwan, Head of Infrastructure Strategy Technology at SK Telecom, emphasised the importance of this collaboration in accelerating the adoption of AI technologies.
He noted that this partnership prepares SK Telecom for the evolving AI landscape, positioning it as a leader in innovative network solutions. Similarly, John Harrington, President of Nokia Asia Pacific, expressed enthusiasm about integrating Nokia’s sensing technology into automated networks, highlighting their commitment to providing stable services by proactively addressing potential issues.
California is set to vote on SB 1047, a bill designed to prevent catastrophic harm from AI systems. The bill targets large AI models—those costing over $100 million to train and using immense computing power—requiring their developers to implement strict safety protocols. These include emergency shut-off mechanisms and third-party audits. A new Frontier Model Division (FMD) would oversee compliance and enforce penalties for violations.
Supporters of the bill, including State Senator Scott Wiener and prominent AI researchers, contend that preemptive regulation is essential to safeguard against potential AI disasters. They believe it’s crucial to establish regulations before serious incidents occur. The bill is expected to clear the Senate, after which it would go to Governor Gavin Newsom for a decision.
If passed, SB 1047 would not take effect immediately, with the FMD scheduled to be established by 2026. The bill is anticipated to face legal challenges from various stakeholders who are concerned about its implications for the tech industry.