Meta prepares to launch ads on Threads app in early 2024

Meta Platforms is gearing up to introduce advertising to its Threads app early next year, aiming to tap into a new revenue stream while competing with X (formerly Twitter). The Information reported that a limited number of advertisers will be allowed to publish ads on Threads starting in January, with the initiative spearheaded by Instagram’s advertising team. Threads, which launched in July 2023 amidst the upheaval at X under Elon Musk’s ownership, has rapidly grown to 275 million monthly active users, as announced by CEO Mark Zuckerberg in October.

Despite the app’s quick expansion, Meta remains cautious about its immediate profitability. CFO Susan Li, during a recent post-earnings call, indicated that Threads is not expected to be a significant revenue driver by 2025. She emphasised that the company is prioritising consumer value, and monetisation features are not yet a primary focus. A Meta spokesperson echoed this sentiment, confirming that Threads currently has no ads or monetisation strategies.

The timing for the introduction of ads on Threads could be opportune, given the instability at X. Since Elon Musk’s acquisition of X, the platform has experienced disruptions and a decline in ad revenue, as some advertisers feared their brands could appear alongside controversial or harmful content. Musk’s management style and significant policy changes prompted many brands to reconsider ad spending on the site. Notably, X has taken legal action against a global advertising alliance and some major companies, accusing them of conspiring to boycott the platform and contributing to revenue losses.

Meta’s plans to monetise Threads come as it seeks to entice disillusioned advertisers from X. However, the company is carefully balancing the need to develop Threads as a welcoming and user-friendly environment while exploring advertising opportunities. The rollout of ads and additional features is set to shape how Threads evolves as a major social media contender in the years to come.

EU hits Meta with $800M antitrust fine

Meta, the parent company of Facebook, has been fined nearly $800 million by the European Union for anti-competitive practices related to its Marketplace feature. The European Commission accused the tech giant of abusing its dominant position by tying Marketplace to Facebook’s social network, automatically exposing Facebook users to the service and disadvantaging competitors.

This marks the first time the EU has penalised Meta for breaching competition laws, though the company has faced previous fines for privacy violations. The investigation found that Meta unfairly used data from competitors advertising on Facebook and Instagram to benefit its own Marketplace, giving it an edge that rivals couldn’t match.

Meta rejected the claims, arguing that the decision lacks evidence of harm to competition or consumers. While the company pledged to comply with the EU’s order to cease the conduct, it plans to appeal the ruling. The case highlights ongoing EU scrutiny of Big Tech, with Meta facing additional investigations on issues like privacy, child safety, and election integrity.

Ireland intensifies regulation on digital platforms to curb terrorist content

The Irish media regulator, Coimisiún na Meán, has mandated that online platforms TikTok, X, and Meta must take decisive steps to prevent the spread of terrorist content on their services, giving them three months to report on their progress.

This action follows notifications from EU authorities under the Terrorist Content Online Regulation. If the platforms fail to comply, the regulator can impose fines of up to four percent of their global revenue.

This decision aligns with Ireland’s broader enforcement of digital laws, including the Digital Services Act (DSA) and a new online safety code. The DSA has already prompted investigations, such as the European Commission’s probe into X last December, and Ireland’s new safety code will impose binding content moderation rules for video-sharing platforms with European headquarters in Ireland. These initiatives aim to curb the spread of harmful and illegal content on major social media platforms.

Meta to give European users more control over personalised ads

Meta Platforms announced it will soon give Instagram and Facebook users in Europe the option to receive less personalised ads. The decision comes in response to pressure from EU regulators and aims to address concerns about data privacy and targeted advertising. Instead of highly tailored ads, users will be shown adverts based on general factors like age, gender, and location, as well as the content they view in a given session.

The move aligns with the European Union’s push to regulate major tech companies, supported by legislation like the Digital Markets Act (DMA), which was introduced earlier this year to promote fair competition and enhance user privacy. Additionally, Meta will offer a 40% price reduction on ad-free subscriptions for European customers.

The changes follow a recent ruling by Europe’s highest court, which supported privacy activist Max Schrems and ruled that Meta must limit the use of personal data from Facebook for advertising purposes. Meanwhile, the European Union is set to fine Apple under these new antitrust rules, marking a significant step in the enforcement of stricter regulations for Big Tech.

AI-generated profile pics spotted on Instagram

Instagram may soon let users create AI-generated profile pictures directly within the app, according to new findings by developer Alessandro Paluzzi. A screenshot Paluzzi shared on Threads suggests users will see an option to ‘Create an AI profile picture’ while updating their profile image. This addition hints at Instagram’s push toward integrating AI more closely with user experiences.

Meta appears to be exploring similar AI-powered features across its platforms, including WhatsApp and Facebook. The company has made strides with its Llama AI models, designed to generate creative images from text prompts. Meta AI’s capabilities are already visible on WhatsApp, where a test feature has allowed some users to create images from scratch, though its rollout has been slow.

For now, Instagram users are limited to using avatars generated from actual images. An AI-generated option would offer a more creative and flexible way to personalise their profiles, adding a fresh layer of expression through custom images generated by prompts.

Meta has not confirmed any launch date for this feature on Instagram or other apps. While the latest Instagram beta does not yet include it, more updates are expected, and users could soon find themselves with a new tool for designing unique profile pictures.

EU and UK universities begin metaverse classes

Universities across the EU and UK are set to introduce metaverse-based courses, where students can attend classes in digital replicas of their campuses. Meta, the company behind Facebook and Instagram, announced the launch of Europe’s first ‘metaversities,’ immersive digital twins of real university campuses. With the help of Meta’s VR partner VictoryXR, students can explore campus grounds, work on projects, and participate in simulations from their VR headsets or PCs, offering a more interactive experience than traditional video calls.

Several institutions are embracing the metaverse: the UK’s University of Leeds started metaverse courses in theatre this fall, while Spain’s University of the Basque Country will introduce virtual physiotherapy and anatomy classes by February 2025. In Germany, schools in Hannover will launch immersive classes by the start of the 2025 school year. VictoryXR, which has collaborated with over 130 campuses worldwide, sees these ‘digital twin’ campuses as ideal for field trips, group experiments, and real-time assignments.

Meta has provided VR headsets to educators at numerous universities in the US and UK, including Imperial College London, to encourage innovative teaching in fields such as science and language arts. According to Meta, these metaversities mark a ‘significant leap forward’ in education, creating interactive and engaging learning environments.

Australia plans to ban social media for children under 16

The Australian government has announced plans to introduce a ban on social media access for children under 16, with legislation expected to pass by late next year. Prime Minister Anthony Albanese described the move as part of a world-leading initiative to combat the harms social media inflicts on children, particularly the negative impact on their mental and physical health. He highlighted concerns over the influence of harmful body image content for girls and misogynistic material directed at boys.

Australia is also testing age-verification systems, such as biometrics and government ID, to ensure that children cannot access social media platforms. The new legislation will not allow exemptions, including for children with parental consent or those with pre-existing accounts. Social media platforms will be held responsible for preventing minors from accessing their services, rather than placing the burden on parents or children.

The proposed ban includes major platforms such as Meta’s Instagram and Facebook, TikTok, YouTube, and X (formerly Twitter). While some digital industry representatives, like the Digital Industry Group, have criticised the plan, arguing it could push young people toward unregulated parts of the internet, Australian officials stand by the measure, emphasising the need for strong protections against online harm.

This move positions Australia as a leader in regulating children’s access to social media, with no other country implementing such stringent age-verification methods. The new rules will be introduced into parliament this year and are set to take effect 12 months after the legislation is passed.

Ex-Meta exec to oversee robotics and hardware at OpenAI

Caitlin Kalinowski, previously Meta’s head of augmented reality (AR) glasses, has announced she will join OpenAI to lead its robotics and consumer hardware initiatives. Kalinowski, who managed Meta’s AR glasses and VR goggles divisions, is expected to leverage her expertise in hardware to advance OpenAI’s efforts in robotics and develop consumer-focused AI products. She will focus on bringing AI into the physical world through collaborative projects and new technology partnerships.

This move is part of OpenAI’s growing commitment to hardware. Recently, OpenAI teamed up with Jony Ive’s LoveFrom to design a consumer AI device aimed at creating a computing experience that minimises social disruption. OpenAI has also resumed hiring robotics engineers after a previous shift away from hardware, reflecting a renewed interest in integrating its AI models into physical applications.

Kalinowski joins at a time when several companies, including Apple, are beginning to integrate OpenAI’s AI models into consumer technology. With the addition of Kalinowski, OpenAI aims to bring advanced AI functionality into robotics and consumer devices, promising transformative new products.

US Supreme Court set to review Facebook and Nvidia securities fraud cases

The United States Supreme Court will soon consider whether Meta’s Facebook and Nvidia can avoid federal securities fraud lawsuits in two separate cases that may limit investors’ ability to sue corporations. The tech giants are challenging lawsuits following decisions from the Ninth Circuit Court of Appeals, which allowed class actions accusing them of misleading investors to move forward. The cases will examine the power of private plaintiffs to enforce securities laws amid recent rulings that have weakened federal regulatory authority.

The Facebook case involves allegations from a group of investors, led by Amalgamated Bank, who claim the social media giant misled shareholders about a 2015 data breach linked to Cambridge Analytica, which impacted over 30 million users. Facebook argues that its disclosures on potential risks were adequate and forward-looking. Nvidia’s case, brought by Swedish investment firm E. Öhman J:or Fonder AB, alleges that the company understated the role of crypto-related sales in its revenue growth in 2017 and 2018, misinforming investors about the volatility in its business.

Observers say these cases could further empower businesses by limiting legal risks from private litigation, especially as the US Securities and Exchange Commission (SEC) faces resource limitations. With recent Supreme Court rulings constraining regulatory bodies, private securities lawsuits may become an increasingly critical tool for investors. David Shargel, a legal expert, notes that as agencies’ enforcement powers weaken, the role of private litigation to hold companies accountable may expand.

Meta supports national security with Llama AI for US agencies

Meta is expanding the reach of its AI models, making its Llama AI series available to US government agencies and private sector partners involved in national security projects. Partnering with firms like Lockheed Martin, Oracle, and Scale AI, Meta aims to assist government teams and contractors with applications such as intelligence gathering and computer code generation for defence needs.

Although Meta’s policies generally restrict using Llama for military purposes, the company is making an exception for these government partners. This decision follows concerns over foreign misuse of the technology, particularly after reports revealed that researchers affiliated with China’s military had used an earlier Llama model without authorisation for intelligence-related applications.

The choice to integrate open AI like Llama into defence remains controversial. Critics argue that AI’s data security risks and its tendency to generate incorrect outputs make it unreliable in military contexts. Recent findings from the AI Now Institute caution that AI tools could be misused by adversaries due to data vulnerabilities, potentially putting sensitive information at risk.

Meta maintains that open AI can accelerate research and enhance security, though US military adoption remains limited. While some big tech employees oppose military-linked projects, Meta emphasises its commitment to strengthening national security while safeguarding its technology from unauthorised foreign use.