Australia plans to ban social media for children under 16

The Australian government has announced plans to introduce a ban on social media access for children under 16, with legislation expected to pass by late next year. Prime Minister Anthony Albanese described the move as part of a world-leading initiative to combat the harms social media inflicts on children, particularly the negative impact on their mental and physical health. He highlighted concerns over the influence of harmful body image content for girls and misogynistic material directed at boys.

Australia is also testing age-verification systems, such as biometrics and government ID, to ensure that children cannot access social media platforms. The new legislation will not allow exemptions, including for children with parental consent or those with pre-existing accounts. Social media platforms will be held responsible for preventing access by minors, rather than placing the burden on parents or children.

The proposed ban includes major platforms such as Meta’s Instagram and Facebook, TikTok, YouTube, and X (formerly Twitter). While some digital industry representatives, like the Digital Industry Group, have criticised the plan, arguing it could push young people toward unregulated parts of the internet, Australian officials stand by the measure, emphasising the need for strong protections against online harm.

This move positions Australia as a leader in regulating children’s access to social media, with no other country having implemented such stringent age-verification methods. The new rules will be introduced into parliament this year and are set to take effect 12 months after the legislation passes.

Ex-Meta exec to oversee robotics and hardware at OpenAI

Caitlin Kalinowski, previously Meta’s head of augmented reality (AR) glasses, has announced she will join OpenAI to lead its robotics and consumer hardware initiatives. Kalinowski, who managed Meta’s AR glasses and VR goggles divisions, is expected to leverage her expertise in hardware to advance OpenAI’s efforts in robotics and develop consumer-focused AI products. She will focus on bringing AI into the physical world through collaborative projects and new technology partnerships.

This move is part of OpenAI’s growing commitment to hardware. Recently, OpenAI teamed up with Jony Ive’s LoveFrom to design a consumer AI device aimed at creating a computing experience that minimises social disruption. OpenAI has also resumed hiring robotics engineers after a previous shift away from hardware, reflecting a renewed interest in integrating its AI models into physical applications.

Kalinowski joins at a time when several companies, including Apple, are beginning to integrate OpenAI’s AI models into consumer technology. With the addition of Kalinowski, OpenAI aims to bring advanced AI functionality into robotics and consumer devices, promising transformative new products.

US Supreme Court set to review Facebook and Nvidia securities fraud cases

The United States Supreme Court will soon consider whether Meta’s Facebook and Nvidia can avoid federal securities fraud lawsuits in two separate cases that may limit investors’ ability to sue corporations. The tech giants are challenging lawsuits following decisions from the Ninth Circuit Court of Appeals, which allowed class actions accusing them of misleading investors to move forward. The cases will examine the power of private plaintiffs to enforce securities laws amid recent rulings that have weakened federal regulatory authority.

The Facebook case involves allegations from a group of investors, led by Amalgamated Bank, who claim the social media giant misled shareholders about a 2015 data breach linked to Cambridge Analytica, which impacted over 30 million users. Facebook argues that its disclosures on potential risks were adequate and forward-looking. Nvidia’s case, brought by Swedish investment firm E. Ohman J:or Fonder AB, alleges that the company understated the role of crypto-related sales in its revenue growth in 2017 and 2018, misinforming investors about the volatility in its business.

Observers say these cases could further empower businesses by limiting legal risks from private litigation, especially as the US Securities and Exchange Commission (SEC) faces resource limitations. With recent Supreme Court rulings constraining regulatory bodies, private securities lawsuits may become an increasingly critical tool for investors. David Shargel, a legal expert, notes that as agencies’ enforcement powers weaken, the role of private litigation to hold companies accountable may expand.

Meta supports national security with Llama AI for US agencies

Meta is expanding the reach of its AI models, making its Llama AI series available to US government agencies and private sector partners involved in national security projects. Partnering with firms like Lockheed Martin, Oracle, and Scale AI, Meta aims to assist government teams and contractors with applications such as intelligence gathering and computer code generation for defence needs.

Although Meta’s policies generally restrict using Llama for military purposes, the company is making an exception for these government partners. This decision follows concerns over foreign misuse of the technology, particularly after reports revealed that researchers affiliated with China’s military had used an earlier Llama model without authorisation for intelligence-related applications.

The choice to integrate open AI like Llama into defence remains controversial. Critics argue that AI’s data security risks and its tendency to generate incorrect outputs make it unreliable in military contexts. Recent findings from the AI Now Institute caution that AI tools could be misused by adversaries due to data vulnerabilities, potentially putting sensitive information at risk.

Meta maintains that open AI can accelerate research and enhance security, though US military adoption remains limited. While some big tech employees oppose military-linked projects, Meta emphasises its commitment to strengthening national security while safeguarding its technology from unauthorised foreign use.

Facebook parent Meta continues post-election ban on new political ads

Meta has announced an extended ban on new political ads following the United States election, aiming to counter misinformation in the tense post-election period. In a blog post on Monday, the Facebook parent company explained that the suspension will remain in place until later in the week, preventing any new political ads from being introduced immediately after the election. Ads that were served at least once before the restriction will still be displayed, but editing options will be limited.

Meta’s decision to extend its ad restriction is part of its ongoing policy to help prevent last-minute claims that could be difficult to verify. The social media giant implemented a similar measure in the last election cycle, underscoring the need for extra caution as elections unfold.

Last year, Meta also barred political advertisers and regulated industries from using its generative AI-based ad products, reflecting a continued focus on reducing potential misinformation through stricter ad controls and ad content regulations.

South Korea fines Meta $15.7 million for privacy violations

South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, 21.62 billion won ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers. The Personal Information Protection Commission found that Meta gathered details on nearly one million South Korean users, including their religion, political views, and sexual orientation, without obtaining the necessary consent. This information was reportedly used by around 4,000 advertisers.

The commission revealed that Meta analysed user interactions, such as pages liked and ads clicked, to create targeted ad themes based on sensitive personal data. Some users were even categorised by highly private attributes, including identifying as North Korean defectors or LGBTQ+. Additionally, Meta allegedly denied users’ requests to access their information and failed to secure data for at least ten users, leading to a data breach.

Meta has not yet issued a statement regarding the fine. This penalty underscores South Korea’s commitment to strict data privacy enforcement as concerns over digital privacy intensify worldwide.

Meta boosts green energy with 260 MW solar deal from Engie

Meta Platforms has signed an agreement to purchase the full output of a new solar power plant from French utility giant Engie. The Sypert solar plant, expected to generate 260 megawatts of clean energy, is scheduled to go live in late 2025. This partnership aligns with Meta’s ongoing commitment to meet the energy demands of its expanding data centre operations with sustainable power sources.

The Sypert plant will add to Engie’s growing renewable energy portfolio, which currently includes about 8 gigawatts of solar, wind, and battery storage projects across North America. Earlier this month, Engie also secured a solar power agreement with Google for its largest US solar project, reinforcing the company’s role as a major clean energy supplier for tech firms.

Driven by technologies like AI, the demand for data centre power in the US is predicted to triple by 2030, according to Goldman Sachs. The Biden administration has called on tech companies to invest in green energy to support this growth, and partnerships like Meta and Engie’s reflect this broader push toward a more sustainable digital economy.

US federal agency investigates how Meta uses consumer financial data for targeted advertising

The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations. This federal investigation was revealed in a recent filing that Meta submitted to the Securities and Exchange Commission (SEC).

The filing indicates that the CFPB notified Meta on 18 September that it was evaluating whether the company’s actions violated the Consumer Financial Protection Act, which is designed to protect consumers from unfair and deceptive financial practices. The status of the investigation remains uncertain, with the filing noting that the CFPB could initiate a lawsuit soon, seeking financial penalties and equitable relief.

Meta, the parent company of Instagram and Facebook, is facing increased scrutiny from regulators and state attorneys general regarding various concerns, including its privacy practices.

In the SEC filing, Meta disclosed that the CFPB has formally notified the company about an investigation focusing on the alleged receipt and use of consumers’ financial information from third parties, obtained through specific advertising tools, for advertising purposes. The inquiry explicitly targets advertising related to ‘financial products and services,’ although it remains to be seen whether the scrutiny pertains to Facebook, Instagram, or both platforms.

While a Meta spokesperson refrained from commenting on the matter, the company stated in the filing that it disputes the allegations and believes any enforcement action would be unjustified. The CFPB also opted not to provide additional comments.

Amid this scrutiny, Meta recently reported $41 billion in revenue for the third quarter, a 19 percent increase from the previous year. A significant portion of this revenue is generated from its targeted advertising business, which has faced criticism from the Federal Trade Commission (FTC) and European regulators for allegedly mishandling user data and violating privacy rights.

In 2019, Meta settled privacy allegations related to the Cambridge Analytica scandal by paying the FTC $5 billion after it was revealed that the company had improperly shared Facebook user data with the firm for voter profiling. Last year, the European Union fined Meta $1.3 billion for improperly transferring user data from Europe to the United States.

Big Tech AI investments test investor patience

Leading tech giants are racing to expand their AI infrastructure, with companies like Microsoft, Meta, and Amazon dedicating billions to meet rising demand. However, the heavy spending on data centres and computing power is sparking concern among investors who are eager for quicker returns. Big Tech’s significant capital investments come with mounting costs, threatening profitability and raising questions about how quickly these ventures will yield results.

Despite exceeding recent earnings forecasts, Big Tech stocks dropped on Thursday, underlining the pressure they face to balance AI expansion with shareholder expectations. Microsoft and Meta reported increased spending in their latest quarters, yet their shares fell, with Microsoft dropping 6% and Meta 4%. Amazon’s shares saw a brief dip before recovering on news of a strong third-quarter performance. Analysts point to a challenging road ahead as these firms juggle AI ambitions with market demands for near-term gains.

The challenges extend to capacity issues, with firms like Microsoft struggling to keep up with demand due to data centre constraints. Meanwhile, Meta forecasts that its AI-related expenses will increase significantly next year, and chip manufacturers like Nvidia and AMD are racing to fulfil orders. This supply bottleneck highlights the complex task of scaling up AI services, adding a layer of unpredictability to Big Tech’s efforts.

Despite short-term risks, companies remain committed to AI. Amazon CEO Andy Jassy described AI as a “once-in-a-lifetime” opportunity, while Meta’s Mark Zuckerberg likened today’s investment climate to the early days of cloud computing. As firms continue to ramp up infrastructure spending, they are counting on long-term returns, hoping to transform initial scepticism into eventual success.

Chinese military adapts Meta’s Llama for AI tool

China’s People’s Liberation Army (PLA) has adapted Meta’s open-source AI model, Llama, to create a military-focused tool named ChatBIT. Developed by researchers from PLA-linked institutions, including the Academy of Military Science, ChatBIT leverages an earlier version of Llama, fine-tuned for military decision-making and intelligence processing tasks. The tool reportedly performs better than some alternative AI models, though it falls short of OpenAI’s GPT-4.

Meta, which supports open innovation, has restrictions against military uses of its models. However, the open-source nature of Llama limits Meta’s ability to prevent unauthorised adaptations, such as ChatBIT. In response, Meta affirmed its commitment to ethical AI use and noted the need for US innovation to stay competitive as China intensifies its AI research investments.

China’s approach reflects a broader trend, as its institutions reportedly employ Western AI technologies for areas like airborne warfare and domestic security. With increasing US scrutiny over the national security implications of open-source AI, the Biden administration has moved to regulate AI’s development, balancing its potential benefits with growing risks of misuse.