Meta’s AI research VP Joelle Pineau announces departure

Joelle Pineau, the Vice President of AI research at Meta, announced she will be leaving the company by the end of May, after nearly eight years with the organisation.

Pineau, who joined Meta in 2017, has overseen key AI initiatives, including the FAIR research unit, PyTorch, and the Llama AI models.

In a LinkedIn post, Pineau reflected on her time at Meta, mentioning the creation of groundbreaking AI projects such as PyTorch, FAISS, and RoBERTa.

She expressed gratitude for the opportunity to work alongside top AI researchers, with the aim of accelerating innovation through open-source contributions.

Pineau, also a professor at McGill University, stated that after her departure, she plans to take some time to reflect before pursuing new ventures. Her departure comes as Meta intensifies its focus on AI, including the recent launch of its Meta AI chatbot in Europe.

For more information on these topics, visit diplomacy.edu.

Ghibli trend as proof of global dependence on AI: A phenomenon that overloaded social networks and systems

It is rare to find a person in this world (with internet access) who has not, at least once, consulted AI about some dilemma, idea, or a simple question.

The breadth of information and the speed of responses have led people to settle into a ‘comfort zone’, letting machines reason for them and, more recently, even create animated versions of their photographs.

This brings us to a trend that, within just a few days, spread across the globe: the Ghibli style emerged spontaneously on social networks. When people realised they could obtain animated versions of their favourite photos within seconds, the entire network became overloaded.

With no brake mechanism in place, reactions from leading figures were inevitable, and Sam Altman, CEO of OpenAI, was among those who spoke out.

He stated that the trend had surpassed all expectations and that servers were ‘strained’, prompting OpenAI to restrict the Ghibli style to ChatGPT users on the Plus, Pro, and Team plans.

Besides admiring AI’s incredible ability to create iconic moments within seconds, this phenomenon also raises the issue of global dependence on artificial intelligence.

Why are we all so in love with AI?

The answer to this question is rather simple, and here’s why. Imagine being able to finally transform your imagination into something visible and share all your creations with the world. It doesn’t sound bad, does it?

This is precisely where AI has made its breakthrough and changed the world forever. Just as Ghibli films have, for decades, inspired fans with their warmth and nostalgia, AI technology has created something akin to the digital equivalent of those emotions.

People are now creating and experiencing worlds that previously existed only in their minds. However, no matter how comforting it sounds, warnings are often raised about maintaining a sense of reality to avoid ‘falling into the clutches’ of a beautiful virtual world.

Balancing innovation and simplicity

Altman warned about the excessive use of AI tools, stating that even his employees are sometimes overwhelmed by the progress of artificial intelligence and the innovations it releases daily.

As a result, people cannot adapt as quickly as AI evolves, even as information spreads faster than ever before.

However, there are also frequent cases of misuse, raising the question – where is the balance?

The culture of continuous production has led to saturation and a lack of reflection. Perhaps this very situation will bring about the much-needed pause and encourage people to step back and ‘think more with their own heads’.

Ghibli is just one of many: How AI trends became mainstream

AI has been with us for a long time, but it was not as popular until major players such as OpenAI, Google (with Gemini), and Microsoft (with Azure) appeared. The Ghibli trend is just one of many that have become part of pop culture in recent years.

Since 2018, we have witnessed deepfake technologies: video clips that accurately recreate faces in entirely different contexts now flood social networks almost daily.

AI-generated music and audio recordings have also been among the most popular trends promoted over the past four years because they are ‘easy to use’ and offer users the feeling of creating quality content with just a few clicks.

There are many other trends that have captured the attention of the global public, such as the avatar trend (Lensa AI) and generated comics and stories (StoryAI and ComicGAN), while anime-style generators (Waifu Labs) have existed since 2022.

Are we really that lazy or just better organised?

The availability of AI tools at every step has greatly simplified everyday life, with applications that assist in content creation, whether written or in any other format.

For this reason, the question arises – are we lazy, or have we simply decided to better organise our free time?

This is a matter for each individual, and the easiest way to find out is to ask yourself whether you have ever consulted AI about choosing a film, a song, or some activity that previously did not take much energy.

AI offers quick and easy solutions, which is certainly an advantage. However, on the other hand, excessive use of technology can lead to a loss of critical thinking and creativity.

Where is the line between efficiency and dependence if we rely on algorithms for everything? That is an answer each of us will have to find at some point.

A view on AI overload: How can we ‘break free from dependence’?

The constant reliance on AI and the comfort it provides after every prompt is appealing, but overusing it leads to a completely different extreme.

The first step towards ‘liberation’ is to admit that there is a certain level of over-reliance, which does not mean abandoning AI altogether.

Understanding the limitations of technology can be the key to returning to essential human values. A digital ‘detox’ means creative expression without technology.

Can we use technology without it becoming the sole filter through which we see the world? After all, technology is a tool, not a dominant factor in decision-making in our lives.

Ghibli trend enthusiasts – the legendary Hayao Miyazaki does not like AI

The founder of Studio Ghibli, Hayao Miyazaki, recently reacted to the trend that has overwhelmed the world. The creator of famous works such as Princess Mononoke, Howl’s Moving Castle, Spirited Away, My Neighbour Totoro, and many others is vehemently opposed to the use of AI.

Known for his hand-drawn approach and whimsical storytelling, Miyazaki has raised ethical concerns, noting that the AI tools behind such mass trends are trained on large amounts of data, including copyrighted works.

Besides criticising the use of AI in animation, he believes that such tools cannot replace the human touch, authenticity, and emotions conveyed through the traditional creation process.

For Miyazaki, art is not just a product but a reflection of the artist’s soul – something machines, no matter how advanced, cannot truly replicate.

Alphawave acquisition eyed by Arm for AI advancements

Arm Holdings, owned by SoftBank, recently considered acquiring UK-based semiconductor IP supplier Alphawave to bolster its artificial intelligence processor technology.

The focus was on Alphawave’s SerDes (serialiser/deserialiser) technology, essential for rapid data transfer in AI applications requiring interconnected chips.

Despite initial discussions, Arm decided against pursuing the acquisition. Alphawave had been exploring a sale after attracting interest from Arm and other potential buyers.

Alphawave’s joint venture in China, WiseWave, added complexity to the potential deal due to national security concerns raised by US officials.

AI-powered brain implant turns thoughts into words in real-time

A brain implant powered by AI has enabled a paralysed woman to speak almost instantly, offering new hope for those who have lost their ability to communicate. Developed by researchers in California, the experimental system translates brain signals into speech in real-time.

Ann, a 47-year-old who lost her voice after a stroke 18 years ago, previously used a brain-computer interface (BCI) with an eight-second delay.

The latest model, published in Nature Neuroscience, reduces that time to just 80 milliseconds, allowing more natural conversations. Scientists trained the system using deep learning and reconstructed Ann’s voice from past recordings.

Although the vocabulary remains limited, the breakthrough marks a major step towards real-world applications. Researchers believe with proper funding, the technology could become widely available within a decade, helping many regain their voice.

Guangdong eyes global role in AI and robotics

Guangdong is stepping up efforts to become a world leader in AI and robotics by offering generous subsidies to attract start-ups and top tech talent.

The province will grant up to 50 million yuan to major AI manufacturing hubs and millions more to smaller firms and developers.

Officials also plan to fund five open-source communities and ten industrial applications of AI each year, with up to 8 million yuan in support for each.

Local tech giants like Huawei and Tencent are expected to play a key role in the ecosystem.

The move follows the rise of AI firm DeepSeek in the neighbouring province of Zhejiang, whose founder hails from Guangdong.

The government hopes to replicate that success at home by turning the region into a centre for innovation and global competitiveness.

AI technology sparks debate in Hollywood

Hollywood is grappling with AI’s increasing role in filmmaking, with executives, actors, and developers exploring the technology’s potential. At a recent event, industry leaders discussed AI-generated video, heralded as the biggest breakthrough since the advent of sound in cinema.

Despite its growing presence, AI’s impact remains controversial, especially after recent strikes from actors and writers seeking protection from AI exploitation.

AI technology is making its way into movies and TV shows, with Oscar-nominated films like Emilia Pérez and The Brutalist using AI for voice alterations and actor de-ageing. AI’s capacity to generate scripts, animation, and even actors has led to fears of job displacement, particularly for background actors.

However, proponents like Bryn Mooser of Moonvalley argue that AI can empower filmmakers, especially independent creators, to produce high-quality content at a fraction of traditional costs.

While Hollywood is still divided on AI’s potential, several tech companies, including OpenAI and Google, are lobbying for AI models to access copyrighted art to fuel their development, claiming it’s vital for national security.

The push has met resistance from filmmakers who fear it could undermine the creative industry, which provides millions of jobs. Despite the opposition, AI’s role in filmmaking is rapidly expanding, and its future remains uncertain.

Some in the industry believe AI, if used correctly, can enhance creativity by allowing filmmakers to create worlds and narratives beyond their imagination. However, there is a push to ensure that artists remain central to this transformation, and that AI’s role in cinema respects creators’ rights and protections.

As AI technology evolves, Hollywood faces a critical choice: embrace it responsibly or risk being overtaken by powerful tech companies.

Amazon unveils Nova Act to enhance AI capabilities

Amazon has launched Nova Act, a general-purpose AI agent capable of controlling web browsers to perform simple tasks. Along with the new agent, Amazon is releasing the Nova Act SDK, enabling developers to create agent prototypes.

The tool will also power key features of the upcoming Alexa+ upgrade, a generative AI-enhanced version of Amazon’s voice assistant.

Developed by Amazon’s AGI lab, Nova Act is designed to automate tasks such as ordering food or making reservations. Although the model is currently a research preview, Amazon claims Nova Act outperforms competitors like OpenAI’s Operator and Anthropic’s Computer Use in internal tests.

The toolkit, available on nova.amazon.com, allows developers to integrate AI agents into applications that can navigate websites, fill forms, and interact with digital content.

Despite its early stage, Nova Act is seen as a significant step in the development of superintelligent AI, with Amazon’s AGI lab aiming to make AI agents reliable and effective across various tasks.

Unlike AI agents from other companies, which have faced challenges such as slow response times and error-prone performance, Nova Act aims to address these issues, potentially giving Amazon a competitive edge in the AI market.

The success of Nova Act could also play a crucial role in the success of Alexa+ and Amazon’s broader AI strategy.

Runway expands AI video capabilities with Gen-4

Runway has unveiled Gen-4, its most advanced AI-powered video generator yet, promising superior character consistency, realistic motion, and world understanding.

The model is now available to individual and enterprise users, allowing them to generate dynamic videos using visual references and text-based instructions.

Backed by investors such as Google and Nvidia, Runway faces fierce competition from OpenAI and Google in the AI video space. The company has differentiated itself by securing Hollywood partnerships and investing heavily in AI-generated filmmaking.

However, it remains tight-lipped about its training data, raising concerns over copyright issues.

Runway is currently embroiled in a lawsuit from artists accusing the company of training its models on copyrighted works without permission. The company claims fair use as a defence.

Meanwhile, it is reportedly seeking new funding at a $4 billion valuation, with hopes of reaching $300 million in annual revenue. As AI video tools advance, concerns grow over their impact on jobs in the entertainment industry, with thousands of positions at risk.

Apple expands AI features with new update

Apple Intelligence is expanding with new features, including Priority Notifications, which highlight time-sensitive alerts for users. This update is part of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4, rolling out globally.

The AI suite is now available in more languages and has launched in the EU for iPhone and iPad users.

Additional improvements include a new Sketch style in Image Playground and the ability to generate ‘memory movies’ on Mac using simple text descriptions. Vision Pro users in the US can now access Apple Intelligence features like Writing Tools and Genmoji.

Apple’s AI rollout has been gradual since its introduction at WWDC last year, with features arriving in stages.

The update also brings fresh emojis, child safety enhancements, and the debut of Apple News+ Food, further expanding Apple’s digital ecosystem.

Nonprofits receive $10 million boost from Google for AI training

Google.org has announced a $10 million grant initiative aimed at helping nonprofits integrate AI into their operations.

Community foundations in Atlanta, Austin, Columbia, New York City, and San Francisco will distribute the grants, providing nonprofits with tailored AI support to enhance their work.

The funding forms part of a broader commitment by Google to improve AI adoption across various sectors.

The initiative includes a generative AI accelerator programme and an AI Opportunity Fund that aims to invest nearly $100 million in AI training and integration programmes for nonprofits.

Over the last year, 20 organisations have benefited from these funds, developing and piloting AI curricula to build practical skills within their communities.

According to Maggie Johnson, Vice President and Global Head of Google.org, recipients report that AI helps them achieve goals in a third of the time and at nearly half the cost.

A six-month-long AI accelerator programme has already provided training to 21 nonprofits, impacting more than 30 million people through AI-powered solutions.

The funding aims to enhance operational efficiency across sectors such as education, health, and workforce readiness.

Organisations like the Tech:NYC Foundation’s Decoded Futures project and Project Evident are leading efforts to promote equitable and responsible AI use, encouraging collaboration between tech leaders and nonprofits.

Nonprofits supported by Google’s funding include global organisations like the World Bank and local initiatives such as Climate Ride and Erika’s Lighthouse.

The funding is expected to drive AI literacy, streamline operations, and enhance the impact of organisations working with limited resources.

Project Evident’s managing director, Sarah Di Troia, emphasised the importance of nonprofits engaging with AI to remain relevant and influential in the evolving technological landscape.
