Meta launches Llama 3 to challenge OpenAI

Meta Platforms has launched its latest large language model, Llama 3, along with a real-time image generator that updates pictures as users type prompts, in a bid to catch up with generative AI market leader OpenAI. The models are being integrated into Meta’s virtual assistant, Meta AI, which the company claims is the most advanced among free-to-use assistants, pointing to performance comparisons on reasoning, coding, and creative writing against competitors like Google and Mistral AI.

Meta is giving prominence to its updated Meta AI assistant within its various platforms, positioning it to compete more directly with OpenAI’s ChatGPT. The assistant will feature prominently in Meta’s Facebook, Instagram, WhatsApp, and Messenger apps, along with a standalone website offering various functionalities, from creating vacation packing lists to providing homework help.

The development of Llama 3 is part of Meta’s efforts to challenge OpenAI’s leading position in generative AI. The company has openly released its Llama models for developers, aiming to disrupt rivals’ revenue plans with powerful free options. However, critics have raised safety concerns about unscrupulous actors’ potential misuse of such models.

While Llama 3 currently outputs only text, future versions will incorporate multimodal capabilities, generating both text and images. Meta CEO Mark Zuckerberg emphasised how the Llama 3 versions perform against other free models, suggesting that the performance gap between free and proprietary models is narrowing. The company aims to address previous issues with understanding context by leveraging higher-quality data and significantly increasing the amount of training data for Llama 3.

Meta oversight board reviews handling of sexually explicit AI-generated images

Meta Platforms’ Oversight Board is currently examining how the company handled two AI-generated sexually explicit images of female celebrities that circulated on Facebook and Instagram. The board, which operates independently but is funded by Meta, aims to evaluate Meta’s policies and enforcement practices surrounding AI-generated pornographic content. To prevent further harm, the board did not disclose the names of the celebrities depicted in the images.

Advancements in AI technology have led to an increase in fabricated content online, particularly explicit images and videos portraying women and girls. This surge in ‘deepfakes’ has posed significant challenges for social media platforms in combating harmful content. Earlier this year, Elon Musk’s social media platform X faced difficulties managing the spread of false explicit images of Taylor Swift, prompting temporary restrictions on related searches.

The Oversight Board highlighted two specific cases: one involving an AI-generated nude image resembling an Indian public figure shared on Instagram and another depicting a nude woman resembling an American public figure in a Facebook group for AI creations. Meta initially removed the latter image for violating its bullying and harassment policy but left the former image up until the board selected it for review.

In response to the board’s scrutiny, Meta acknowledged the cases and committed to implementing the board’s decisions. The prevalence of AI-generated explicit content underscores the need for clearer policies and stricter enforcement measures by tech companies to address the growing issue of ‘deepfakes’ online.

Meta temporarily suspends Threads in Türkiye

Meta Platforms Inc. announced that it will temporarily suspend its social networking app Threads in Türkiye starting 29 April to comply with an interim order from the Turkish Competition Authority. The decision, detailed in a blog post on Monday, aims to address concerns related to data sharing between Instagram and Threads as the competition watchdog investigates potential market dominance abuses by Meta. Despite this move, Meta reassured users that the shutdown of Threads in Türkiye will not affect other Meta services like Facebook, Instagram, or WhatsApp within the country or Threads in other global locations.

The Turkish Competition Authority initiated an investigation into Meta in December over possible competition law violations stemming from the integration of Instagram with Threads. The interim order, which restricts data merging between the two platforms, will remain effective until the authority reaches a final decision. Meta expressed disagreement with this decision, asserting its compliance with Turkish legal requirements and indicating plans to appeal the ruling.

Threads, Meta’s microblogging venture launched in July 2023, aimed to expand beyond Instagram’s media-centric format by offering a predominantly text-based social platform where users could share photos, links, and short videos. While Threads quickly gained traction in the US and over 100 other countries, its European debut was delayed until December 2023 due to stringent privacy regulations in the region. Despite this setback, Meta remains committed to navigating regulatory challenges while advancing its diverse social networking offerings.

New OpenAI and Meta AI models close to human-like reasoning

Meta and OpenAI are close to unveiling advanced AI models capable of reasoning and planning, according to a Financial Times report. OpenAI’s COO, Brad Lightcap, hinted at the upcoming release of GPT-5, which he said would make significant progress in solving the ‘hard problems’ of reasoning.

Yann LeCun, Meta’s chief AI scientist, and Joelle Pineau, VP of AI Research, envision AI agents capable of complex, multi-stage operations. The enhanced reasoning should enable the AI models to ‘search over possible answers,’ ‘plan sequences of actions,’ and model out the outcomes and consequences before execution.
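
To make the idea concrete, here is a toy sketch of what ‘planning sequences of actions’ and modelling outcomes before execution can look like in code. The simulate world model, the action set, and the score function are hypothetical placeholders for illustration, not anything Meta or OpenAI has described:

```python
from typing import Callable, Iterable

def plan(state: str,
         actions: Iterable[str],
         simulate: Callable[[str, str], str],
         score: Callable[[str], float],
         depth: int = 2) -> list[str]:
    """Exhaustive lookahead: simulate each candidate action sequence
    with a world model and return the one with the best predicted outcome."""
    if depth == 0:
        return []
    best_seq, best_score = [], float("-inf")
    for action in actions:
        outcome = simulate(state, action)  # model the consequence first
        tail = plan(outcome, actions, simulate, score, depth - 1)
        final = outcome
        for a in tail:
            final = simulate(final, a)
        s = score(final)
        if s > best_score:
            best_seq, best_score = [action] + tail, s
    return best_seq

# Toy usage: the 'world model' appends actions to the state string,
# and the score prefers outcomes containing both required steps.
if __name__ == "__main__":
    sim = lambda state, a: state + a
    sc = lambda state: ("book" in state) + ("pay" in state)
    print(plan("start:", ["book", "pay", "idle"], sim, sc))  # ['book', 'pay']
```

Real systems would search far larger action spaces with learned world models, but the structure is the same: simulate candidate action sequences, score the predicted outcomes, and only then act.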

Why does it matter?

Meta is getting ready to launch Llama 3 in various model sizes optimised for different apps and devices, including WhatsApp and Ray-Ban smart glasses. OpenAI is less open about its plans for GPT-5, but Lightcap expressed optimism about the model’s potential to reason.

Getting AI models to reason and plan is a critical step towards artificial general intelligence (AGI). Multiple definitions of AGI exist, but it can be described simply as AI capable of performing at or beyond human level across a broad range of activities.

Some scientists and experts have expressed concerns about building technology that will outperform human abilities. AI godfathers Yoshua Bengio and Geoffrey Hinton have even warned of the threats AI poses to humanity. Both Meta and OpenAI claim to be aiming for AGI, which could be worth trillions for the company that achieves it.

Meta tests features to protect teens on Instagram

Meta, Instagram’s parent company, has announced plans to trial new features aimed at protecting teens by blurring messages containing nudity. This initiative is part of Meta’s broader effort to address concerns surrounding harmful content on its platforms. The tech giant faces increasing scrutiny in the US and Europe amid allegations that its apps are addictive and contribute to mental health issues among young people.

The proposed protection feature for Instagram’s direct messages will utilise on-device machine learning to analyse images for nudity. It will be enabled by default for users under 18, with Meta urging adults to activate it as well. Notably, the nudity protection feature will operate even in end-to-end encrypted chats, ensuring privacy while maintaining safety measures.
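
Meta has not published implementation details, but the privacy argument rests on where the classifier runs. Below is a minimal sketch of how such a client-side check might fit into the message flow; the classifier stub, threshold, and warning options are illustrative assumptions, not Meta’s actual implementation:

```python
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.8  # illustrative cutoff, not a published Meta value

@dataclass
class IncomingImage:
    sender_id: str
    pixels: bytes  # already decrypted locally by the messaging layer

def nudity_score(pixels: bytes) -> float:
    # Stand-in for the on-device ML model; a real client would run a
    # small image classifier here. Because inference happens locally,
    # the plaintext image never leaves the device, which is how the
    # feature can coexist with end-to-end encryption.
    return 0.0

def render_message(img: IncomingImage, user_is_minor: bool, opted_in: bool) -> dict:
    # Per the announcement, the feature is on by default for under-18s,
    # and Meta urges adults to switch it on as well.
    protection_enabled = user_is_minor or opted_in
    if protection_enabled and nudity_score(img.pixels) >= NUDITY_THRESHOLD:
        return {
            "blurred": True,
            "warning": "This image may contain nudity.",
            "actions": ["view anyway", "block sender", "report"],
        }
    return {"blurred": False}
```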

Meta is also developing technology to identify accounts potentially involved in sextortion scams and is testing new pop-up messages to warn users who may have interacted with such accounts. These efforts come after Meta’s previous announcements regarding increased content restrictions for teens on Facebook and Instagram, particularly concerning sensitive topics like suicide, self-harm, and eating disorders.

Why does it matter?

The company’s actions follow legal challenges, including a lawsuit filed by 33 US states alleging that Meta misled the public about the dangers of its platforms. The European Commission has also requested information on Meta’s measures to protect children from illegal and harmful content in Europe. As Meta continues to navigate regulatory and public scrutiny, its focus on enhancing safety features underscores the ongoing debate surrounding social media’s impact on mental health and well-being, especially among younger users.

Meta boosts AI chip power for enhanced performance

Meta is gearing up for the next leap in AI chip technology, promising enhanced power and faster training for its ranking models. The Meta Training and Inference Accelerator (MTIA) aims to optimise training efficiency and streamline inference tasks, particularly for ranking and recommendation algorithms. In a recent announcement, Meta emphasised MTIA’s pivotal role in its long-term strategy to fortify AI infrastructure for current and future technological advancements, aligning with existing technology setups and forthcoming GPU developments.

The company’s commitment to custom silicon extends beyond computational power, encompassing memory bandwidth, networking, and capacity enhancements. Initially unveiled in May 2023 with a focus on data centres, MTIA v1 was slated for a 2025 release. However, Meta surprised observers by revealing that both MTIA iterations are already in production, indicating accelerated progress in its chip development roadmap.

While MTIA currently specialises in training ranking and recommendation algorithms, Meta envisions expanding its capabilities to include generative AI training, such as for its Llama language models. The forthcoming MTIA chip boasts significant upgrades, featuring 256MB of on-chip memory and operating at 1.3GHz, compared to its predecessor’s 128MB and 800MHz configuration. Early performance tests indicate a threefold improvement across evaluated models, more than the roughly 1.6x clock-speed increase and doubled on-chip memory alone would suggest, reflecting Meta’s strides in chip optimisation.

Why does it matter?

Meta’s pursuit mirrors a broader trend among AI companies, with players like Google, Microsoft, and Amazon venturing into custom chip development to meet escalating computing demands. The competitive landscape underscores the need for tailored solutions to power AI models efficiently. As the industry witnesses unprecedented growth in chip demand, market leaders like Nvidia stand poised for substantial valuation gains, highlighting the critical role of custom chips in driving AI innovation.

AI giants OpenAI, Google, Meta and Mistral unveil new LLMs in rapid succession

Three major players in the AI field, OpenAI, Google, and Mistral, unveiled new versions of their cutting-edge AI models within 12 hours of one another, signalling a burst of innovation anticipated for the summer. Meta’s Nick Clegg hinted at the imminent release of Llama 3 at an event in London, while Google swiftly followed with the launch of Gemini Pro 1.5, a sophisticated large language model with a limited free usage tier. Shortly after, OpenAI introduced its milestone model, GPT-4 Turbo, which, like Gemini Pro 1.5, supports multimodal input, including images.

In France, Mistral, a startup founded by former Meta AI team members, debuted Mixtral 8x22B, a frontier AI model released as a 281GB download file, following an open-source philosophy. While this approach is criticised for potential risks due to a lack of oversight, it reflects a trend towards democratising access to AI models beyond the control of tech giants like Meta and Google.

Experts caution that the prevailing approach centred on large language models (LLMs) might be reaching its limitations. Meta’s chief AI scientist, Yann LeCun, challenges the notion of imminent artificial general intelligence (AGI) and emphasises the need for AI systems capable of reasoning and planning beyond language manipulation. LeCun advocates for a shift towards ‘objective-driven’ AI to achieve truly superhuman capabilities, thereby highlighting the ongoing evolution and challenges in the AI landscape.

Meta confirms the launch of Llama 3

Meta has confirmed its imminent release of Llama 3, the next iteration of its large language model set to power generative AI assistants. The announcement at an event in London aligns with reports speculating on Meta’s impending launch, indicating a strategic move to enhance its AI offerings.

According to Nick Clegg, Meta’s president of global affairs, the rollout of Llama 3 is slated to begin within the next month. Meta’s Chief Product Officer, Chris Cox, stressed the need to integrate Llama 3 across multiple Meta products, marking a significant step in expanding its AI capabilities.

Meta’s endeavours in AI have been influenced by the success of OpenAI’s ChatGPT, prompting the company to intensify efforts to catch up with competitors. Llama 3, described as broader in scope than its predecessors, aims to address criticisms of previous versions’ limitations in functionality. The new model is expected to offer improved accuracy in answering questions and to handle a wider range of queries, including potentially controversial ones, to engage users more effectively.

Why does it matter?

While Meta embraces an open-source approach with its Llama models, aligning with developer preferences, it remains cautious in other aspects of generative AI. The company has refrained from releasing Emu, its image generation tool, citing concerns about latency, safety, and usability. Despite the company’s advancements in AI technology, notable figures within Meta express scepticism about the future of generative AI, favouring alternative approaches like the joint embedding predictive architecture (JEPA) championed by Yann LeCun, Meta’s chief AI scientist.

Malaysia urges Meta and TikTok to monitor harmful content

Malaysia has called upon social media giants Meta, the operator of Facebook, and short-video platform TikTok to intensify monitoring efforts on their platforms due to a surge in harmful content, the government reported. In the first quarter of 2024 alone, authorities referred 51,638 cases to these platforms for further action, a significant increase from the 42,904 cases recorded last year. While specific details of the reported content were not disclosed, the move aims to combat the dissemination of harmful material online, particularly concerning sensitive topics like race, religion, and royalty.

According to statements from Malaysian regulatory bodies and police, the plea to Meta and TikTok also encompassed the need to address content indicative of coordinated inauthentic behaviour, financial scams, and illegal online gambling. Sensitivity surrounding race and religion in Malaysia, a predominantly Muslim nation with significant ethnic Chinese and Indian populations, underpins the urgency of the government’s call. Additionally, Malaysia’s legal framework includes statutes prohibiting seditious remarks or insults directed at its monarchy, adding further weight to the push for online content regulation.

Why does it matter?

In recent months, Malaysia has been ramping up its scrutiny of online content amid accusations of a wavering commitment to safeguarding free speech under Prime Minister Anwar Ibrahim’s administration. The government refutes allegations that it is stifling diverse viewpoints and emphasises the necessity of protecting users from online harm. Meta and TikTok implemented record restrictions on social media posts and accounts in Malaysia during the first half of 2023, coinciding with an uptick in government requests for content removal, as revealed by data both companies published last year.

Meta to label AI-generated content instead of removing it

Meta Platforms Inc., the parent company of Facebook and Instagram, has announced changes to its content policies regarding AI-generated content. Under the new policy, Meta will no longer remove misleading AI-generated content but will instead label it to provide transparency. This shift in approach aims to address concerns about misleading content without outright removal.

Previously, Meta’s policy targeted ‘manipulated media’ that could mislead viewers into thinking someone in a video said something they did not. The content policy now extends to digitally altered images, videos, and audio, with the company employing fact-checking and labelling to inform users about the nature of the content they encounter on its platforms.

The policy was revised in February after Meta’s Oversight Board criticised the previous approach as ‘incoherent’. The board recommended using labels instead of removal for AI-generated content, and Meta has agreed with this perspective, emphasising the importance of transparency and additional context in handling such content.
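
In effect, the revision swaps a removal rule for a labelling rule. A hypothetical sketch of that decision logic, with the field names and categories as illustrative assumptions rather than Meta’s actual systems:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()
    LABEL = auto()
    NO_ACTION = auto()

@dataclass
class Post:
    ai_generated: bool = False             # e.g. detected or self-disclosed
    digitally_altered: bool = False
    violates_other_policies: bool = False  # bullying, harassment, etc.
    labels: list[str] = field(default_factory=list)

def moderate(post: Post) -> Action:
    # Content that breaks other community standards is still removed,
    # regardless of how it was produced.
    if post.violates_other_policies:
        return Action.REMOVE
    # Under the revised policy, AI-generated or digitally altered media
    # is labelled for transparency rather than taken down.
    if post.ai_generated or post.digitally_altered:
        post.labels.append("Made with AI")
        return Action.LABEL
    return Action.NO_ACTION
```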

Why does it matter?

Starting in May, AI-generated content on Meta’s platforms will be labelled ‘Made with AI’ to indicate its origin. This policy change is particularly significant given the upcoming US elections, with Meta acknowledging the need for clear labelling of AI-generated posts, including those created using competitors’ technology.

Meta’s shift in content moderation policy reflects a broader trend toward transparency in dealing with AI-generated content across social media platforms. By implementing labelling instead of removal, Meta aims to provide users with more information about the nature of the online content.