Meta tests features to protect teens on Instagram

Meta, Instagram’s parent company, has announced plans to trial new features aimed at protecting teens by blurring messages containing nudity. This initiative is part of Meta’s broader effort to address concerns surrounding harmful content on its platforms. The tech giant faces increasing scrutiny in the US and Europe amid allegations that its apps are addictive and contribute to mental health issues among young people.

The proposed protection feature for Instagram’s direct messages will utilise on-device machine learning to analyse images for nudity. It will be enabled by default for users under 18, with Meta urging adults to activate it as well. Notably, the nudity protection feature will operate even in end-to-end encrypted chats, ensuring privacy while maintaining safety measures.
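
For illustration, below is a minimal sketch of how such a client-side check might work. The classifier, threshold, and field names are hypothetical assumptions rather than Meta’s actual implementation, but the sketch shows why an on-device model can operate even inside end-to-end encrypted chats: the image never leaves the recipient’s device for analysis.

```python
# Minimal sketch of an on-device nudity filter; the classifier, threshold,
# and field names are hypothetical, not Meta's actual API.
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.8  # assumed confidence cut-off for blurring


@dataclass
class User:
    age: int
    nudity_protection_enabled: bool | None = None  # None = user never changed the setting


def protection_active(user: User) -> bool:
    """Feature is on by default for under-18s; adults must opt in."""
    if user.nudity_protection_enabled is not None:
        return user.nudity_protection_enabled
    return user.age < 18


def should_blur(image_bytes: bytes, user: User, classify) -> bool:
    """Run the (hypothetical) local model and decide whether to blur.

    Because classification happens on the recipient's device, the decision
    can be made even for end-to-end encrypted messages: the plaintext image
    is analysed locally and never sent to a server.
    """
    if not protection_active(user):
        return False
    nudity_score = classify(image_bytes)  # local ML inference, score in [0, 1]
    return nudity_score >= NUDITY_THRESHOLD
```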

Meta is also developing technology to identify accounts potentially involved in sextortion scams and is testing new pop-up messages to warn users who may have interacted with such accounts. These efforts come after Meta’s previous announcements regarding increased content restrictions for teens on Facebook and Instagram, particularly concerning sensitive topics like suicide, self-harm, and eating disorders.

Why does it matter?

The company’s actions follow legal challenges, including a lawsuit filed by 33 US states alleging that Meta misled the public about the dangers of its platforms. The European Commission has also requested information on Meta’s measures to protect children from illegal and harmful content in Europe. As Meta continues to navigate regulatory and public scrutiny, its focus on enhancing safety features underscores the ongoing debate surrounding social media’s impact on mental health and well-being, especially among younger users.

Meta boosts AI chip power for enhanced performance

Meta is gearing up for the next leap in AI chip technology, promising enhanced power and faster training for its ranking models. The Meta Training and Inference Accelerator (MTIA) aims to optimise training efficiency and streamline inference, particularly for ranking and recommendation algorithms. In a recent announcement, Meta emphasised MTIA’s pivotal role in its long-term strategy to fortify AI infrastructure for current and future technological advancements, aligning with existing technology setups and forthcoming GPU developments.

The company’s commitment to custom silicon extends beyond computational power, encompassing memory bandwidth, networking, and capacity enhancements. Initially unveiled in May 2023 with a focus on data centres, MTIA v1 was slated for a 2025 release. However, Meta surprised observers by revealing that both MTIA iterations are already in production, indicating accelerated progress on its chip development roadmap.

While MTIA currently specialises in training ranking and recommendation algorithms, Meta envisions expanding its capabilities to include generative AI training, such as with its Llama language models. The forthcoming MTIA chip boasts significant upgrades, featuring 256MB memory on-chip and operating at 1.3GHz, compared to its predecessor’s 128MB and 800MHz configuration. Early performance tests indicate a threefold improvement across evaluated models, reflecting Meta’s strides in chip optimisation.
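
For a rough sense of the generational step, the reported figures can be compared directly. The short sketch below only restates the numbers above and derives simple ratios; it is an illustration, not an official benchmark.

```python
# Back-of-the-envelope comparison of the reported MTIA specs; the dict keys
# and derived ratios are illustrative, not an official benchmark.
mtia_v1 = {"on_chip_memory_mb": 128, "clock_ghz": 0.8}
mtia_next = {"on_chip_memory_mb": 256, "clock_ghz": 1.3}

memory_ratio = mtia_next["on_chip_memory_mb"] / mtia_v1["on_chip_memory_mb"]  # 2.0x
clock_ratio = mtia_next["clock_ghz"] / mtia_v1["clock_ghz"]                   # ~1.6x

print(f"On-chip memory: {memory_ratio:.1f}x, clock speed: {clock_ratio:.2f}x")
# Meta's reported ~3x model-level speed-up exceeds the raw clock gain,
# suggesting architectural and software improvements contribute as well.
```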

Why does it matter?

Meta’s pursuit mirrors a broader trend among AI companies, with players like Google, Microsoft, and Amazon venturing into custom chip development to meet escalating computing demands. The competitive landscape underscores the need for tailored solutions to efficiently power AI models. As the industry witnesses unprecedented growth in chip demand, market leaders like Nvidia stand poised for substantial valuation gains, highlighting the critical role of custom chips in driving AI innovation.

AI giants OpenAI, Google, Meta and Mistral unveil new LLMs in rapid succession

Three major players in the AI field, OpenAI, Google, and Mistral, have unveiled new versions of their cutting-edge AI models within 12 hours, signalling a burst of innovation anticipated for the summer. Meta’s Nick Clegg hinted at the imminent release of Meta’s Llama 3 at an event in London, while Google swiftly followed with the launch of its Gemini Pro 1.5, a sophisticated large language model with a limited free usage tier. Shortly after, OpenAI introduced its milestone model, GPT-4 Turbo, which, like Gemini Pro 1.5, supports multimodal input, including images.

In France, Mistral, a startup founded by former Meta AI team members, debuted Mixtral 8x22B, a frontier AI model released as a 281GB download file, following an open-source philosophy. While this approach is criticised for potential risks due to a lack of oversight, it reflects a trend towards democratising access to AI models beyond the control of tech giants like Meta and Google.

Experts caution that the prevailing approach centred on large language models (LLMs) might be reaching its limitations. Meta’s chief AI scientist, Yann LeCun, challenges the notion of imminent artificial general intelligence (AGI) and emphasises the need for AI systems capable of reasoning and planning beyond language manipulation. LeCun advocates for a shift towards ‘objective-driven’ AI to achieve truly superhuman capabilities, thereby highlighting the ongoing evolution and challenges in the AI landscape.

Meta confirms the launch of Llama 3

Meta has confirmed its imminent release of Llama 3, the next iteration of its large language model set to power generative AI assistants. The announcement at an event in London aligns with reports speculating on Meta’s impending launch, indicating a strategic move to enhance its AI offerings.

According to Nick Clegg, Meta’s president of global affairs, the rollout of Llama 3 is slated to begin within the next month. Meta’s Chief Product Officer, Chris Cox, stressed the need to integrate Llama 3 across multiple Meta products, marking a significant step in expanding its AI capabilities.

Meta’s endeavours in AI have been influenced by the success of OpenAI’s ChatGPT, prompting the company to intensify efforts to catch up with competitors. Llama 3, described as broader in scope compared to its predecessors, aims to address criticisms of previous versions regarding limitations in functionality. The new model is expected to offer improved accuracy in answering questions and to handle a wider range of queries, including potentially controversial ones, to engage users more effectively.

Why does it matter?

While Meta embraces an open-source approach with its Llama models, in line with developer preferences, it remains cautious in other aspects of generative AI. The company refrains from releasing Emu, its image generation tool, citing concerns about latency, safety, and usability. Despite the company’s advancements in AI technology, notable figures within Meta express scepticism about the future of generative AI, favouring alternative approaches like the joint embedding predictive architecture (JEPA) championed by Yann LeCun, Meta’s chief AI scientist.

Malaysia urges Meta and TikTok to monitor harmful content

Malaysia has called upon social media giants Facebook operator Meta and short video platform TikTok to intensify monitoring efforts on their platforms due to a surge in harmful content, as reported by the government. In the first quarter of 2024 alone, authorities referred 51,638 cases to these platforms for further action, a significant increase from the 42,904 cases recorded last year. While specific details on the reported content were not disclosed, the move aims to combat the dissemination of harmful material online, particularly concerning sensitive topics like race, religion, and royalty.

According to statements from Malaysian regulatory bodies and police, the plea to Meta and TikTok also encompassed the need to address content indicative of coordinated inauthentic behaviour, financial scams, and illegal online gambling. Sensitivity surrounding race and religion in Malaysia, a predominantly Muslim nation with significant ethnic Chinese and Indian populations, underpins the urgency of the government’s call. Additionally, Malaysia’s legal framework includes statutes prohibiting seditious remarks or insults directed at its monarchy, adding further weight to the push for online content regulation.

Why does it matter?

In recent months, Malaysia has been ramping up its scrutiny of online content amid accusations that Prime Minister Anwar Ibrahim’s administration is wavering in its commitment to safeguarding free speech. The government denies stifling diverse viewpoints and emphasises the necessity of protecting users from online harm. Meta and TikTok had previously imposed record restrictions on social media posts and accounts in Malaysia during the first half of 2023, coinciding with an uptick in government requests for content removal, as revealed by data both companies published last year.

Meta to label AI-generated content instead of removing it

Meta Platforms Inc., the parent company of Facebook and Instagram, has announced changes to its content policies regarding AI-generated content. Under the new policy, Meta will no longer remove misleading AI-generated content but will instead label it to provide transparency. This shift in approach aims to address concerns about misleading content without outright removal.

Previously, Meta’s policy targeted ‘manipulated media’ that could mislead viewers into thinking someone in a video said something they did not. The policy now extends to digitally altered images, videos, and audio, with the company employing fact-checking and labelling to inform users about the nature of the content they encounter on its platforms.

The policy was revised in February after Meta’s Oversight Board criticised the previous approach as ‘incoherent’. The board recommended using labels instead of removal for AI-generated content, and Meta has agreed with this perspective, emphasising the importance of transparency and additional context in handling such content.

Why does it matter?

Starting in May, AI-generated content on Meta’s platforms will be labelled ‘Made with AI’ to indicate its origin. This policy change is particularly significant given the upcoming US elections, with Meta acknowledging the need for clear labelling of AI-generated posts, including those created using competitors’ technology.

Meta’s shift in content moderation policy reflects a broader trend toward transparency in dealing with AI-generated content across social media platforms. By implementing labelling instead of removal, Meta aims to provide users with more information about the nature of the online content.

Meta removes millions of pieces of harmful content in India

Meta released its monthly report, revealing that it removed over 13.8 million pieces of harmful content from Facebook and over 4.8 million pieces from Instagram in India during February. The actions were taken across multiple policies to maintain community standards and safeguard user experience.

The report highlighted Meta’s response to user reports, with Facebook receiving 18,512 reports through the Indian grievance mechanism in February. Meta provided users with tools to resolve issues in approximately half of the cases, demonstrating its commitment to addressing user concerns promptly.

The company emphasised its compliance with India’s IT Rules 2021, which require digital platforms with over 5 million users to publish monthly compliance reports. These reports detail the number of content pieces acted upon, including removals or warnings, in line with platform standards.

Why does it matter?

In January, Meta’s content moderation efforts resulted in the removal of over 17.8 million pieces of content from Facebook and over 4.8 million pieces from Instagram, underscoring the ongoing challenge of maintaining a safe and healthy online environment amidst evolving user behaviours and content trends.

Turkey imposes provisional restriction on Meta amid market abuse probe

Turkey’s competition authority has enacted a provisional restriction on Meta, limiting data exchange between Instagram and Threads during an ongoing market abuse investigation. The interim measure will remain in place until a definitive ruling is made.

The regulator had initiated the probe into Meta back in December over potential competition law breaches and significant market harm stemming from the merging of Instagram and Threads data. The regulator stated that the company’s communication about data sharing across Facebook, Instagram, and WhatsApp lacked clarity and sufficient information, and that the user prompts for approving data sharing were inadequate for addressing competition concerns.

Previously, on a separate matter, the Turkish authority had also imposed a daily fine of $148,000 on Meta for its data sharing notification practices.

Meta passes in-app ‘Apple tax’ to advertisers

Meta plans to capitalise on advertiser discontent in its own conflict with Apple over in-app purchase fees by announcing that it will pass Apple’s 30% service charge on to its own customers. Starting later this month, advertisers who wish to boost a post in the Facebook or Instagram iOS app will be billed through Apple, with the additional charge applied.

Meta offers advertisers a way to avoid the additional charge: paying to boost posts from the web on Facebook or Instagram, accessible through both desktop and mobile browsers. However, it recognises that many customers find in-app purchases the easiest way to transact on Apple’s devices, so those who continue to boost in-app will simply incur higher costs.
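
As a rough illustration of the cost difference, the sketch below assumes the 30% service charge is added on top of an advertiser’s boost budget when the purchase goes through Apple’s in-app billing; the exact billing mechanics may differ from this simplified model.

```python
# Illustrative comparison of boosting a post via the iOS app versus the web;
# assumes Apple's 30% service charge is added on top of the boost budget.
APPLE_SERVICE_CHARGE = 0.30


def boost_cost(budget: float, via_ios_app: bool) -> float:
    """Total amount billed to the advertiser for a given boost budget."""
    if via_ios_app:
        return budget * (1 + APPLE_SERVICE_CHARGE)  # billed through Apple
    return budget  # boosting from the web avoids the charge


print(boost_cost(100.0, via_ios_app=True))   # 130.0
print(boost_cost(100.0, via_ios_app=False))  # 100.0
```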

By passing on the burden of Apple’s commission to advertisers, Meta hopes to garner public support and, ultimately, influence lawmakers and regulators to bring about a change in Apple’s business practices. The current commission rates and the introduction of the ‘core technology fee’ have also faced criticism from companies such as Epic and Spotify.

UK calls for mandatory identity verification on Meta’s marketplace to combat shoplifting

The UK’s National Police Chiefs’ Council (NPCC) has called on Meta to implement mandatory identity and location verification on its Marketplace platform. NPCC Chief Superintendent Alex Goss believes that online platforms such as Meta need to take a more proactive approach to combating shoplifting by considering criminality when designing their platforms.

Shoplifting of high-value items, such as alcohol, steak, and cosmetics, continues to be a significant problem. Thieves target these items due to their value and demand on the market. Goss’s call for Meta to enforce identity and location verification aims to deter potential shoplifters and make it harder for them to anonymously sell stolen goods on the platform.

Under the plan, police forces are now prioritising shoplifting incidents and attending the location where a suspect is being held by store staff. This indicates that shoplifting has become a growing concern, requiring immediate attention and stronger preventive measures.