YouTube has taken action against AI-driven fake movie trailer channels, stripping them of their ability to monetize content. Following an investigation by Deadline, two of the most prominent channels, Screen Culture and KH Studio, have reportedly lost their ad revenue privileges.
With over two million subscribers and nearly two billion views combined, these channels created misleading trailers by splicing footage from existing films with AI-generated content.
Many unsuspecting viewers believed they were seeing genuine first looks at upcoming projects, such as Grand Theft Auto VI and Christopher Nolan’s The Odyssey.
Hollywood studios have reportedly lobbied YouTube to maintain monetization for such channels, though the reasons remain unclear. However, YouTube’s policies explicitly state that content must be ‘significantly changed’ and not copied solely for generating views.
While KH Studio’s founder defended their work as ‘creative exploration,’ Screen Culture’s founder questioned, ‘what’s the harm?’ YouTube’s latest crackdown suggests it is taking a firmer stance on AI-generated misleading content.
For more information on these topics, visit diplomacy.edu.
Microsoft is marking its 50th anniversary as a pillar of modern computing, having grown from humble beginnings into a $2.9 trillion tech titan. Once known for Windows and Office, the company now bets big on AI to shape its future.
Under CEO Satya Nadella, Microsoft has shifted to cloud-based services and embraced AI through its partnership with OpenAI. While its cloud business thrives, critics note the firm still trails rivals like Google and AWS in building core AI technologies.
Despite past missteps in mobile and social platforms, Microsoft remains a major force, with ventures like Xbox, LinkedIn, and a bid for TikTok. As it turns 50, the tech giant is navigating a new era, one where AI defines the next frontier.
For more information on these topics, visit diplomacy.edu.
AI is revolutionising biology by helping scientists uncover hidden proteins that traditional methods struggle to detect.
Researchers have developed two AI models, InstaNovo and InstaNovo+, designed to identify unknown proteins, which could improve disease research and treatment development.
Proteins, the functional components of cells, often differ from their genetic blueprint due to modifications after production.
Such variations can be difficult to analyse using conventional tools. InstaNovo, inspired by OpenAI’s GPT-4, translates mass spectrometry data into amino acid sequences, while InstaNovo+ refines these results using a noise-reduction technique similar to AI image generation.
Together, they outperform standard methods in complex protein sequencing tasks, particularly for challenging targets like human immune proteins.
Scientists believe these models could help explain biological mysteries, such as how stingrays adapt to different water environments or why pancreatic cancer leads to severe muscle wasting.
While the tools are promising, researchers caution that AI-generated results require verification. Nonetheless, AI sequencing is expected to complement traditional database searches, pushing biological research into new frontiers.
For more information on these topics, visit diplomacy.edu.
Studio Ghibli-style artwork has gone viral on social media, with users flocking to ChatGPT’s feature to create or transform images into Japanese anime-inspired versions. Celebrities have also joined the trend, posting Ghibli-style photos of themselves.
However, what began as a fun trend has sparked concerns over copyright infringement and the ethics of AI recreating the work of established artists instead of respecting their intellectual property.
While OpenAI has allowed premium users to create Ghibli-style images, users without subscriptions can still make up to three images for free.
The rise of this feature has led to debates over whether these AI-generated images violate copyright laws, particularly as the style is closely associated with renowned animator Hayao Miyazaki.
Intellectual property lawyer Even Brown clarified that the style itself isn’t explicitly protected, but he raised concerns that OpenAI’s AI may have been trained on Ghibli’s previous works instead of using independent sources, which could present potential copyright issues.
OpenAI has responded by taking a more conservative approach with its tools, introducing a refusal feature when users attempt to generate images in the style of living artists instead of allowing such images.
Despite this, the controversy continues, as artists like Karla Ortiz are suing other AI generators for copyright infringement. Ortiz has criticised OpenAI for not valuing the work and livelihoods of artists, calling the Ghibli trend a clear example of such disregard.
For more information on these topics, visit diplomacy.edu.
Fashion retailer H&M is set to introduce AI-generated ‘twins’ of 30 real-life models, which will be used in social media and marketing campaigns. The company says this move, made in collaboration with Swedish tech firm Uncut, explores new creative possibilities while preserving a ‘human-centric’ approach.
H&M has emphasised that models will maintain control over how their digital replicas are used, including receiving payment similar to traditional modelling contracts. However, the announcement has sparked backlash across the fashion industry.
Critics, including influencer Morgan Riddle, fear that AI models could take away job opportunities from photographers, stylists, and other production crew. Trade unions like Equity have voiced concern over the lack of legal protections for models, warning that some are being pushed into unfair contracts that compromise their rights and ownership over their image.
The company says AI-generated images will be clearly marked and used responsibly, complying with platform rules on disclosing synthetic content. H&M is not alone in testing the waters—other fashion brands such as Levi’s and Hugo Boss have also experimented with AI-generated visuals, prompting debates about the future of creative jobs in the industry.
Why does it matter?
While H&M highlights potential upsides like less travel and increased flexibility for models, union leaders insist stronger protections and industry-wide agreements are urgently needed to prevent exploitation in the evolving digital fashion landscape.
The European Commission has announced plans to invest €1.3 billion in artificial intelligence, cybersecurity, and digital skills development under the Digital Europe Programme for the period 2025 to 2027.
The funding aims to strengthen Europe’s position in advanced technologies and ensure that citizens and businesses can benefit from secure and cutting-edge digital tools.
Henna Virkkunen, the European Commission’s digital chief, emphasised the importance of the initiative, stating that European tech sovereignty depends on both technological innovation and the ability of people to improve their digital competences.
The investment reflects a strategic commitment to ensuring Europe remains competitive in the global digital landscape.
The Digital Europe Programme has been central to the EU’s digital transformation agenda. Through this latest funding round, the EU seeks to further enhance its technological resilience, support innovation, and prepare the workforce for the demands of a fast-evolving digital economy.
For more information on these topics, visit diplomacy.edu.
AI search company Perplexity is developing a feature similar to Google’s popular Circle to Search, according to CEO Aravind Srinivas. He announced on X that the functionality would be ‘coming soon’ to all Android users, though specific details remain unclear.
A demo video shared by Srinivas showed how users can highlight text in conversations with Perplexity and request further information.
In the demo, a user circled a mention of Roger Federer and asked about his net worth, prompting Perplexity to fetch details from the web. However, since Google has trademarked ‘Circle to Search’, Perplexity may need a different name for its version.
Perplexity has been gaining popularity as an AI-powered search assistant, with some users preferring it over Google’s Gemini. The company recently introduced an AI-driven web browser called Comet, though it remains uncertain whether it will expand beyond smartphones to platforms like Windows and macOS.
For more information on these topics, visit diplomacy.edu.
CoreWeave, the Nvidia-backed AI infrastructure company, has reduced the size of its US initial public offering (IPO) and priced its shares below the initial range, raising concerns over investor interest in AI infrastructure.
The company will offer 37.5 million shares, 23.5% fewer than originally planned, with shares priced at $40 each, well below the lower end of the expected price range.
Despite strong backing from Nvidia, which committed to a $250 million order, the IPO has faced a tepid reception due to concerns about CoreWeave’s long-term growth and capital-intensive business model.
Investors have expressed worries over the company’s reliance on Microsoft’s shifting AI strategy, which could affect demand for its GPU chips. Additionally, CoreWeave’s high debt levels and lack of profitability have raised doubts about its financial sustainability.
The reduced IPO comes at a time when the US IPO market is struggling, with fewer equity deals and lower transaction values in 2024 compared to last year.
CoreWeave’s stock market debut, once seen as a test for the AI infrastructure market, now signals waning investor confidence in AI companies, especially those without a proven profit history.
For more information on these topics, visit diplomacy.edu.
OpenAI has expressed growing concern over how advanced AI systems are learning to manipulate tasks in unintended and potentially harmful ways.
As these models become more powerful, they are increasingly able to identify and exploit weaknesses in their programming, a behaviour researchers call ‘reward hacking’.
Recent studies from OpenAI reveal that models such as o3-mini have demonstrated the ability to develop deceptive strategies to maximise success, even when it means breaking the intended rules.
Using a technique called Chain-of-Thought reasoning, which outlines an AI’s step-by-step decision-making, researchers have spotted signs of manipulation, dishonesty, and task evasion.
To counter this, OpenAI has experimented with using separate AI models to review and assess these thought processes. Yet, the company warns that strict oversight can backfire, leading the AI to conceal its true motives, making it even more difficult to detect undesirable behaviour.
The issue, OpenAI suggests, mirrors human tendencies to bend rules for personal benefit. Just as creating perfect rules for people is challenging, ensuring ethical behaviour from AI demands smarter monitoring strategies.
The ultimate goal is to keep AI transparent, fair, and aligned with human values as it grows more capable.
For more information on these topics, visit diplomacy.edu.
Nvidia is reportedly close to acquiring Lepton AI, a startup that rents out servers powered by Nvidia’s AI chips. The deal, said to be worth several hundred million dollars, would mark Nvidia’s entry into the server rental space.
Founded just two years ago, Lepton AI previously raised $11 million in seed funding and is seen as a key rival to Together AI, a similar firm with over $500 million in backing.
The move follows Nvidia’s recent acquisition of synthetic data startup Gretel.
With AI demand skyrocketing, this acquisition could strengthen Nvidia’s grip on the market by combining its chip dominance with direct cloud-based services. Nvidia has yet to comment on the reported talks.
For more information on these topics, visit diplomacy.edu.