Ireland intensifies regulation on digital platforms to curb terrorist content

The Irish media regulator, Coimisiún na Meán, has ordered TikTok, X, and Meta to take decisive steps to prevent the spread of terrorist content on their services, giving the platforms three months to report on their progress.

This action follows notifications from EU authorities under the Terrorist Content Online Regulation. If the platforms fail to comply, the regulator can impose fines of up to four percent of their global revenue.

This decision aligns with Ireland’s broader enforcement of digital laws, including the Digital Services Act (DSA) and a new online safety code. The DSA has already prompted investigations, such as the European Commission’s probe into X last December, and Ireland’s new safety code will impose binding content moderation rules for video-sharing platforms with European headquarters in Ireland. These initiatives aim to curb the spread of harmful and illegal content on major social media platforms.

Meta to give European users more control over personalised ads

Meta Platforms announced it will soon give Instagram and Facebook users in Europe the option to receive less personalised ads. The decision comes in response to pressure from EU regulators and aims to address concerns about data privacy and targeted advertising. Instead of highly tailored ads, users will be shown adverts based on general factors like age, gender, and location, as well as the content they view in a given session.

The move aligns with the European Union’s push to regulate major tech companies, supported by legislation like the Digital Markets Act (DMA), which was introduced earlier this year to promote fair competition and enhance user privacy. Additionally, Meta will offer a 40% price reduction on ad-free subscriptions for European customers.

The changes follow a recent ruling by Europe’s highest court, which supported privacy activist Max Schrems and ruled that Meta must limit the use of personal data from Facebook for advertising purposes. Meanwhile, the European Union is set to fine Apple under these new antitrust rules, marking a significant step in the enforcement of stricter regulations for Big Tech.

AI-generated profile pics spotted on Instagram

Instagram may soon let users create AI-generated profile pictures directly within the app, according to new findings by developer Alessandro Paluzzi. A screenshot Paluzzi shared on Threads suggests users will see an option to ‘Create an AI profile picture’ while updating their profile image. This addition hints at Instagram’s push toward integrating AI more closely with user experiences.

Meta appears to be exploring similar AI-powered features across its platforms, including WhatsApp and Facebook. The company has made strides with Meta AI, its assistant built on the Llama models, which can generate images from text prompts. Meta AI’s capabilities are already visible on WhatsApp, where a test feature has allowed some users to create images from scratch, though its rollout has been slow.

For now, Instagram users are limited to avatars generated from actual images. An AI-generated option would offer a more creative and flexible way to personalise profiles, adding a fresh layer of expression through prompt-based custom images.

Meta has not confirmed any launch date for this feature on Instagram or other apps. While the latest Instagram beta does not yet include it, more updates are expected, and users could soon find themselves with a new tool for designing unique profile pictures.

EU and UK universities begin metaverse classes

Universities across the EU and UK are set to introduce metaverse-based courses, where students can attend classes in digital replicas of their campuses. Meta, the company behind Facebook and Instagram, announced the launch of Europe’s first ‘metaversities,’ immersive digital twins of real university campuses. With the help of Meta’s VR partner VictoryXR, students can explore campus grounds, work on projects, and participate in simulations from their VR headsets or PCs, offering a more interactive experience than traditional video calls.

Several institutions are embracing the metaverse: the UK’s University of Leeds started metaverse courses in theatre this autumn, while Spain’s University of the Basque Country will introduce virtual physiotherapy and anatomy classes by February 2025. In Germany, schools in Hannover will launch immersive classes by the start of the 2025 school year. VictoryXR, which has collaborated with over 130 campuses worldwide, sees these ‘digital twin’ campuses as ideal for field trips, group experiments, and real-time assignments.

Meta has provided VR headsets to educators at numerous universities in the US and UK, including Imperial College London, to encourage innovative teaching in fields such as science and language arts. According to Meta, these metaversities mark a ‘significant leap forward’ in education, creating interactive and engaging learning environments.

Australia plans to ban social media for children under 16

The Australian government has announced plans to introduce a ban on social media access for children under 16, with the ban expected to take effect by late next year. Prime Minister Anthony Albanese described the move as part of a world-leading initiative to combat the harms social media inflicts on children, particularly the negative impact on their mental and physical health. He highlighted concerns over the influence of harmful body image content for girls and misogynistic material directed at boys.

Australia is also testing age-verification systems, such as biometrics and government ID, to ensure that children cannot access social media platforms. The new legislation will not allow exemptions, including for children with parental consent or those with pre-existing accounts. Social media platforms, rather than parents or children, will be held responsible for preventing access by minors.

The proposed ban includes major platforms such as Meta’s Instagram and Facebook, TikTok, YouTube, and X (formerly Twitter). While some digital industry representatives, like the Digital Industry Group, have criticised the plan, arguing it could push young people toward unregulated parts of the internet, Australian officials stand by the measure, emphasising the need for strong protections against online harm.

This move positions Australia as a leader in regulating children’s access to social media, with no other country implementing such stringent age-verification methods. The new rules will be introduced into parliament this year and are set to take effect 12 months after the legislation is passed.

Ex-Meta exec to oversee robotics and hardware at OpenAI

Caitlin Kalinowski, previously Meta’s head of augmented reality (AR) glasses, has announced she will join OpenAI to lead its robotics and consumer hardware initiatives. Kalinowski, who managed Meta’s AR glasses and VR goggles divisions, is expected to leverage her expertise in hardware to advance OpenAI’s efforts in robotics and develop consumer-focused AI products. She will focus on bringing AI into the physical world through collaborative projects and new technology partnerships.

This move is part of OpenAI’s growing commitment to hardware. Recently, OpenAI teamed up with Jony Ive’s LoveFrom to design a consumer AI device aimed at creating a computing experience that minimises social disruption. OpenAI has also resumed hiring robotics engineers after a previous shift away from hardware, reflecting a renewed interest in integrating its AI models into physical applications.

Kalinowski joins at a time when several companies, including Apple, are beginning to integrate OpenAI’s AI models into consumer technology. With the addition of Kalinowski, OpenAI aims to bring advanced AI functionality into robotics and consumer devices, promising transformative new products.

US Supreme Court set to review Facebook and Nvidia securities fraud cases

The United States Supreme Court will soon consider whether Meta’s Facebook and Nvidia can avoid federal securities fraud lawsuits in two separate cases that may limit investors’ ability to sue corporations. The tech giants are appealing decisions by the Ninth Circuit Court of Appeals that allowed class actions accusing them of misleading investors to move forward. The cases will examine the power of private plaintiffs to enforce securities laws amid recent rulings that have weakened federal regulatory authority.

The Facebook case involves allegations from a group of investors, led by Amalgamated Bank, who claim the social media giant misled shareholders about a 2015 data breach linked to Cambridge Analytica, which impacted over 30 million users. Facebook argues that its disclosures on potential risks were adequate and forward-looking. Nvidia’s case, brought by Swedish investment firm E. Ohman J:or Fonder AB, alleges that the company understated the role of crypto-related sales in its revenue growth in 2017 and 2018, misinforming investors about the volatility in its business.

Observers say these cases could further empower businesses by limiting legal risks from private litigation, especially as the US Securities and Exchange Commission (SEC) faces resource limitations. With recent Supreme Court rulings constraining regulatory bodies, private securities lawsuits may become an increasingly critical tool for investors. David Shargel, a legal expert, notes that as agencies’ enforcement powers weaken, the role of private litigation to hold companies accountable may expand.

Meta supports national security with Llama AI for US agencies

Meta is expanding the reach of its AI models, making its Llama AI series available to US government agencies and private sector partners involved in national security projects. Partnering with firms like Lockheed Martin, Oracle, and Scale AI, Meta aims to assist government teams and contractors with applications such as intelligence gathering and computer code generation for defence needs.

Although Meta’s policies generally restrict using Llama for military purposes, the company is making an exception for these government partners. This decision follows concerns over foreign misuse of the technology, particularly after reports revealed that researchers affiliated with China’s military had used an earlier Llama model without authorisation for intelligence-related applications.

The choice to integrate openly available models like Llama into defence work remains controversial. Critics argue that AI’s data security risks and its tendency to generate incorrect outputs make it unreliable in military contexts. Recent findings from the AI Now Institute caution that AI tools could be misused by adversaries due to data vulnerabilities, potentially putting sensitive information at risk.

Meta maintains that openly available models can accelerate research and enhance security, though US military adoption remains limited. While some big tech employees oppose military-linked projects, Meta emphasises its commitment to strengthening national security while safeguarding its technology from unauthorised foreign use.

Facebook parent Meta continues post-election ban on new political ads

Meta has announced an extended ban on new political ads following the United States election, aiming to counter misinformation in the tense post-election period. In a blog post on Monday, the Facebook parent company explained that the suspension will remain in place until later in the week, preventing any new political ads from being introduced immediately after the election. Ads that were served at least once before the restriction will still be displayed, but editing options will be limited.

Meta’s decision to extend its ad restriction is part of its ongoing policy to help prevent last-minute claims that could be difficult to verify. The social media giant implemented a similar measure in the last election cycle, underscoring the need for extra caution as elections unfold.

Last year, Meta also barred political advertisers and regulated industries from using its generative AI-based ad products, reflecting a continued focus on reducing potential misinformation through stricter ad controls and ad content regulations.

South Korea fines Meta $15.7 million for privacy violations

South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, 21.62 billion won ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers. The Personal Information Protection Commission found that Meta gathered details on nearly one million South Korean users, including their religion, political views, and sexual orientation, without obtaining the necessary consent. This information was reportedly used by around 4,000 advertisers.

The commission revealed that Meta analysed user interactions, such as pages liked and ads clicked, to create targeted ad themes based on sensitive personal data. Some users were even categorised by highly private attributes, including identifying as North Korean defectors or LGBTQ+. Additionally, Meta allegedly denied users’ requests to access their information and failed to secure data for at least ten users, leading to a data breach.

Meta has not yet issued a statement regarding the fine. This penalty underscores South Korea’s commitment to strict data privacy enforcement as concerns over digital privacy intensify worldwide.