Meta spokesperson sentenced to six years in Russia

A military court in Moscow has reportedly sentenced Meta Platforms spokesperson Andy Stone to six years in prison in absentia for ‘publicly defending terrorism.’ The ruling comes amid Russia’s broader crackdown on Meta, which was designated an extremist organisation in the country in 2022, when Facebook and Instagram were banned over Russia’s conflict with Ukraine.

Meta has yet to comment on the reported sentencing of Stone, who serves as the company’s communications director, and Stone himself could not be reached for comment following the court’s decision. His lawyer, Valentina Filippenkova, said the defence intends to appeal the verdict and will seek a full acquittal.

The Russian interior ministry opened a criminal investigation against Stone late last year, although the specific charges were not disclosed at the time. According to state investigators, Stone’s online comments allegedly defended ‘aggressive, hostile, and violent actions’ against Russian soldiers involved in what Russia terms its ‘special military operation’ in Ukraine.

Why does it matter?

Stone’s sentencing underscores Russia’s stringent stance on online content related to its military activities in Ukraine, extending repercussions to individuals associated with Meta Platforms. The circumstances also reflect the broader context of heightened scrutiny and legal actions against perceived dissent and criticism within Russia’s digital landscape.

Meta shifts away from politics ahead of 2024 US election

In a significant shift ahead of the Trump-Biden rematch, Meta is distancing itself from politics after years of positioning itself as a key player in political discourse. The company has reduced the visibility of political content on Facebook and Instagram, imposed new rules on political advertisers, and downsized the team responsible for engaging with politicians and campaigns. The shift is reshaping digital outreach strategies for the 2024 US election and could transform political communication on social media platforms.

Meta’s retreat from politics follows years of controversy and public scrutiny, including outrage over Russian interference in the 2016 presidential race and the role of social media in the 6 January 2021 attack on the US Capitol. The company’s efforts to minimise political content in users’ news feeds reflect a broader trend away from news and politics on social media platforms. This shift has impacted major news outlets, with significant declines in user engagement observed across platforms.

As Meta redefines its approach to political content, political campaigns adapt their strategies to navigate this new landscape. The Biden campaign has increased its social media presence to drive engagement, while Trump has turned to alternative platforms like Truth Social. However, both parties recognise the continued importance of Facebook as a vital tool for reaching voters despite the platform’s evolving restrictions on political advertising and content.

Why does it matter?

The changing dynamics of political communication on social media raise concerns about access to information and the role of tech companies in shaping public discourse. With political content increasingly marginalised on platforms like Facebook and Instagram, questions arise about how voters will stay informed about key issues during elections. As campaigns adjust to Meta’s evolving policies, the impact on democratic discourse and the dissemination of political information remains a topic of debate and scrutiny.

Apple removes WhatsApp and Threads from China app store

Apple has removed the Meta-owned apps WhatsApp and Threads from its app store in China to comply with orders from the country’s internet regulator, the Cyberspace Administration of China, which cited national security concerns. Apple said it made the move in accordance with local laws, even where it disagrees with them. The Chinese government had allegedly found content on WhatsApp and Threads about China’s president, Xi Jinping, that violated the country’s cybersecurity laws, though the specifics remain unclear.

This action intensifies the technology dispute between the US and China, with Apple and Meta caught in the middle. In the US, lawmakers are considering a bill that would compel ByteDance to divest its popular video app TikTok, citing national security risks due to its ties to China. Meanwhile, the White House is tightening restrictions on Beijing’s access to advanced technologies and American financing.

Apple, reliant on China for a significant portion of its revenue, has complied with Beijing’s demands in the past, including blocking various apps and establishing a data centre to store Chinese users’ iCloud data. As tensions persist, Apple has started diversifying its supply chain, reducing its dependence on Chinese manufacturing.

While Meta’s fallout from China may be less direct, the company faces challenges elsewhere, particularly in its strained relationship with Apple over privacy and data tracking issues. In the US, efforts to address concerns over TikTok’s ownership and data handling are gaining momentum, with legislation being packaged alongside other bills related to foreign aid.

Meta launches Llama 3 to challenge OpenAI

Meta Platforms has launched its latest large language model, Llama 3, together with a real-time image generator that updates pictures as users type prompts, in a push to catch up with generative AI market leader OpenAI. The models are being integrated into Meta’s virtual assistant, Meta AI, which the company claims is the most advanced among free-to-use assistants, citing performance comparisons of its reasoning, coding, and creative writing capabilities against competitors such as Google and Mistral AI.

Meta is giving prominence to its updated Meta AI assistant within its various platforms, positioning it to compete more directly with OpenAI’s ChatGPT. The assistant will feature prominently in Meta’s Facebook, Instagram, WhatsApp, and Messenger apps, along with a standalone website offering various functionalities, from creating vacation packing lists to providing homework help.

The development of Llama 3 is part of Meta’s efforts to challenge OpenAI’s leading position in generative AI. The company has openly released its Llama models for developers, aiming to disrupt rivals’ revenue plans with powerful free options. However, critics have raised safety concerns about unscrupulous actors’ potential misuse of such models.
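As a concrete illustration of that open release, here is a minimal sketch of loading a Llama 3 checkpoint through Hugging Face’s transformers library; it assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct repository has been granted and that a suitable GPU is available.

```python
# Minimal sketch: running an openly released Llama 3 checkpoint with
# Hugging Face transformers (assumes gated-repo access and the
# accelerate package installed for device_map="auto").
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user",
             "content": "Suggest three items for a vacation packing list."}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```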

While Llama 3 currently outputs only text, future versions will be multimodal, generating both text and images. Meta CEO Mark Zuckerberg emphasised how the Llama 3 models perform against other free models, pointing to a narrowing performance gap between free and proprietary models. The company also aims to address earlier shortcomings in contextual understanding by using higher-quality data and significantly increasing the volume of training data for Llama 3.

Meta oversight board reviews handling of sexually explicit AI-generated images

Meta Platforms’ Oversight Board is currently examining how the company handled two AI-generated sexually explicit images of female celebrities that circulated on Facebook and Instagram. The board, which operates independently but is funded by Meta, aims to evaluate Meta’s policies and enforcement practices surrounding AI-generated pornographic content. To prevent further harm, the board did not disclose the names of the celebrities depicted in the images.

Advancements in AI technology have led to a surge in fabricated content online, particularly explicit images and videos portraying women and girls. This rise in ‘deepfakes’ has posed significant challenges for social media platforms trying to combat harmful content. Earlier this year, Elon Musk’s social media platform X struggled to contain the spread of fake explicit images of Taylor Swift, prompting temporary restrictions on related searches.

The Oversight Board highlighted two specific cases: one involving an AI-generated nude image resembling an Indian public figure shared on Instagram and another depicting a nude woman resembling an American public figure in a Facebook group for AI creations. Meta initially removed the latter image for violating its bullying and harassment policy but left the former image up until the board selected it for review.

In response to the board’s scrutiny, Meta acknowledged the cases and committed to implementing the board’s decisions. The prevalence of AI-generated explicit content underscores the need for clearer policies and stricter enforcement measures by tech companies to address the growing issue of ‘deepfakes’ online.

Meta temporarily suspends Threads in Türkiye

Meta Platforms Inc. announced that it will temporarily suspend its social networking app Threads in Türkiye starting 29 April to comply with an interim order from the Turkish Competition Authority. The decision, detailed in a blog post on Monday, aims to address concerns related to data sharing between Instagram and Threads as the competition watchdog investigates potential market dominance abuses by Meta. Despite this move, Meta reassured users that the shutdown of Threads in Türkiye will not affect other Meta services like Facebook, Instagram, or WhatsApp within the country or Threads in other global locations.

The Turkish Competition Authority initiated an investigation into Meta in December over possible competition law violations stemming from the integration of Instagram with Threads. The interim order, which restricts data merging between the two platforms, will remain effective until the authority reaches a final decision. Meta expressed disagreement with this decision, asserting its compliance with Turkish legal requirements and indicating plans to appeal the ruling.

Threads, Meta’s microblogging venture launched in July 2023, aimed to expand beyond Instagram’s media-centric format by offering a predominantly text-based social platform where users could share photos, links, and short videos. While Threads quickly gained traction in the US and over 100 other countries, its European debut was delayed until December 2023 due to stringent privacy regulations in the region. Despite this setback, Meta remains committed to navigating regulatory challenges while advancing its diverse social networking offerings.

New OpenAI and Meta AI models close to human-like reasoning

Meta and OpenAI are close to unveiling advanced AI models that can reason and plan, according to a Financial Times report. OpenAI’s COO, Brad Lightcap, hinted at the upcoming release of GPT-5, which he said will make significant progress in solving ‘hard problems’ that require reasoning.

Yann LeCun, Meta’s chief AI scientist, and Joelle Pineau, VP of AI Research, envision AI agents capable of complex, multi-stage operations. The enhanced reasoning should enable the AI models to ‘search over possible answers,’ ‘plan sequences of actions,’ and model out the outcomes and consequences before execution.
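As a rough illustration of what such planning means in practice (a toy sketch of the concept, not how either company’s models actually work), an agent can search over candidate action sequences, using a simple world model to simulate each outcome before committing to one:

```python
from collections import deque

# Toy sketch of planning as search: enumerate action sequences,
# simulate each outcome with a world model, and only commit to the
# first (shortest) plan that reaches the goal.
ACTIONS = {"+1": lambda s: s + 1, "*2": lambda s: s * 2}

def plan(start: int, goal: int, max_depth: int = 12):
    """Breadth-first search over action sequences."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path  # shortest action sequence reaching the goal
        if len(path) == max_depth:
            continue
        for name, effect in ACTIONS.items():
            nxt = effect(state)  # model the consequence before acting
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

print(plan(3, 14))  # ['*2', '+1', '*2']: 3 -> 6 -> 7 -> 14
```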

Meta is getting ready to launch Llama 3 in various model sizes optimised for different apps and devices, including WhatsApp and Ray-Ban smart glasses. OpenAI is less open about its plans for GPT-5, but Lightcap expressed optimism about the model’s potential to reason.

Why does it matter?

Getting AI models to reason and plan is a critical step towards artificial general intelligence (AGI). Multiple definitions of AGI exist, but it can be described simply as AI capable of performing at or beyond human level across a broad range of activities.

Some scientists and experts have expressed concerns about building technology that would outperform human abilities. AI ‘godfathers’ Yoshua Bengio and Geoffrey Hinton have even warned of the threats such systems could pose to humanity. Both Meta and OpenAI claim to be aiming for AGI, which could be worth trillions to the company that achieves it.

Meta tests features to protect teens on Instagram

Meta, Instagram’s parent company, has announced plans to trial new features aimed at protecting teens by blurring messages containing nudity. This initiative is part of Meta’s broader effort to address concerns surrounding harmful content on its platforms. The tech giant faces increasing scrutiny in the US and Europe amid allegations that its apps are addictive and contribute to mental health issues among young people.

The proposed protection feature for Instagram’s direct messages will use on-device machine learning to analyse images for nudity. It will be enabled by default for users under 18, and Meta is urging adults to activate it as well. Because the analysis runs on the device itself rather than on Meta’s servers, the nudity protection will operate even in end-to-end encrypted chats, preserving privacy while maintaining safety measures.
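A hypothetical sketch of that flow (illustrative only, not Meta’s actual implementation): a classifier runs locally on the recipient’s device, so the check still works inside end-to-end encrypted chats, where servers never see the decrypted image.

```python
from dataclasses import dataclass

@dataclass
class Image:
    pixels: bytes

def nudity_score(image: Image) -> float:
    """Stand-in for an on-device ML classifier returning P(nudity)."""
    return 0.0  # a real model would score the decoded pixels

def render_incoming(image: Image, user_age: int, opted_in: bool) -> str:
    # Protection is on by default for under-18s; adults can opt in.
    protection_on = user_age < 18 or opted_in
    if protection_on and nudity_score(image) > 0.8:
        return "blurred, with a tap-to-view warning"
    return "shown normally"

print(render_incoming(Image(pixels=b""), user_age=16, opted_in=False))
```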

Meta is also developing technology to identify accounts potentially involved in sextortion scams and is testing new pop-up messages to warn users who may have interacted with such accounts. These efforts come after Meta’s previous announcements regarding increased content restrictions for teens on Facebook and Instagram, particularly concerning sensitive topics like suicide, self-harm, and eating disorders.

Why does it matter?

The company’s actions follow legal challenges, including a lawsuit filed by 33 US states alleging that Meta misled the public about the dangers of its platforms. The European Commission has also requested information on Meta’s measures to protect children from illegal and harmful content in Europe. As Meta continues to navigate regulatory and public scrutiny, its focus on enhancing safety features underscores the ongoing debate surrounding social media’s impact on mental health and well-being, especially among younger users.

Meta boosts AI chip power for enhanced performance

Meta is gearing up for the next leap in AI chip technology, promising more power and faster training for its ranking models. The Meta Training and Inference Accelerator (MTIA) aims to improve training efficiency and speed up inference, particularly for ranking and recommendation algorithms. In a recent announcement, Meta emphasised MTIA’s pivotal role in its long-term strategy to build out AI infrastructure for current and future workloads, in step with its existing technology stack and forthcoming GPU developments.

The company’s commitment to custom silicon extends beyond raw compute to memory bandwidth, networking, and capacity. Initially unveiled in May 2023 with a focus on data centres, MTIA v1 was slated for a 2025 release. However, Meta surprised observers by revealing that both MTIA iterations are already in production, indicating accelerated progress on its chip development roadmap.

While MTIA currently specialises in training ranking and recommendation algorithms, Meta envisions expanding its capabilities to include generative AI training, such as for its Llama language models. The forthcoming MTIA chip boasts significant upgrades, featuring 256MB of on-chip memory and a 1.3GHz clock speed, compared to its predecessor’s 128MB and 800MHz. Early performance tests indicate a threefold improvement across the evaluated models, reflecting Meta’s strides in chip optimisation.
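A back-of-the-envelope comparison of those figures (reported specs from the announcement, not official benchmarks) shows that the clock-speed bump alone cannot explain the claimed speedup, pointing to gains from the doubled on-chip memory and architectural changes as well:

```python
# Reported MTIA specs: v1 at 800MHz/128MB, v2 at 1.3GHz/256MB.
v1_clock_mhz, v2_clock_mhz = 800, 1300
v1_sram_mb, v2_sram_mb = 128, 256

print(f"clock ratio:  {v2_clock_mhz / v1_clock_mhz:.2f}x")  # 1.62x
print(f"memory ratio: {v2_sram_mb / v1_sram_mb:.0f}x")      # 2x
print("reported speedup: ~3x across evaluated models")
```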

Why does it matter?

Meta’s pursuit mirrors a broader trend among AI companies, with players like Google, Microsoft, and Amazon venturing into custom chip development to meet escalating computing demands. The competitive landscape underscores the need for tailored solutions to power AI models efficiently. As the industry witnesses unprecedented growth in chip demand, market leaders like Nvidia stand poised for substantial valuation gains, highlighting the critical role of custom chips in driving AI innovation.

AI giants OpenAI, Google, Meta and Mistral unveil new LLMs in rapid succession

Three major players in the AI field, OpenAI, Google, and Mistral, unveiled new versions of their cutting-edge AI models within 12 hours of one another, signalling the burst of innovation anticipated for the summer. Meta’s Nick Clegg hinted at the imminent release of Meta’s Llama 3 at an event in London, while Google swiftly followed with the launch of Gemini 1.5 Pro, a sophisticated large language model with a limited free usage tier. Shortly after, OpenAI rolled out the latest version of its GPT-4 Turbo model, which, like Gemini 1.5 Pro, supports multimodal input, including images.

In France, Mistral, a startup founded by former Meta AI team members, debuted Mixtral 8x22B, a frontier AI model released as a 281GB download file, following an open-source philosophy. While this approach is criticised for potential risks due to a lack of oversight, it reflects a trend towards democratising access to AI models beyond the control of tech giants like Meta and Google.

Experts caution that the prevailing approach centred on large language models (LLMs) might be reaching its limitations. Meta’s chief AI scientist, Yann LeCun, challenges the notion of imminent artificial general intelligence (AGI) and emphasises the need for AI systems capable of reasoning and planning beyond language manipulation. LeCun advocates for a shift towards ‘objective-driven’ AI to achieve truly superhuman capabilities, thereby highlighting the ongoing evolution and challenges in the AI landscape.