Facebook and Instagram data to power Meta’s AI models

Meta Platforms will soon start using public posts on Facebook and Instagram to train its AI models in the UK. The company had paused its plans following concerns raised by the Irish privacy regulator and Britain’s Information Commissioner’s Office (ICO). The AI training will involve content such as photos, captions, and comments but will exclude private messages and data from users under 18.

Meta faced privacy-related backlash earlier in the year, leading to its decision to halt the AI model launch in Europe. The company has since engaged with UK regulators, resulting in a clearer framework that allows the AI training plans to proceed. The new strategy simplifies the way users can object to their data being processed.

From next week, Facebook and Instagram users in the UK will receive in-app notifications explaining how their public posts may be used for AI training. Users will also be informed of how to object to the use of their data. Meta has extended the window in which objections can be filed, aiming to address transparency concerns raised by both the ICO and advocacy groups.

Earlier in June, Meta’s AI plans faced opposition from privacy advocacy groups like NOYB, which urged regulators to intervene. These groups argued that Meta’s notifications did not fully meet the EU’s privacy and transparency standards. Meta’s latest updates are seen as an effort to align with these regulatory demands.

Malta launches public consultation to establish legal protections for ethical hackers

The Government of Malta has initiated a public consultation to establish a comprehensive legal framework for ethical hackers, also known as security researchers, who identify and disclose vulnerabilities in ICT systems to bolster cybersecurity. That initiative aims to clearly define the role of ethical hackers, ensuring that their activities are regulated and protected by law, enabling them to operate within a transparent and legitimate framework.

In addition, the Government of Malta has proposed that ICT system owners, especially those managing critical infrastructure, implement Coordinated Vulnerability Disclosure Policies (CVDP) to better handle the detection and resolution of security flaws identified by ethical hackers. Overseen by the Directorate for Critical Infrastructure Protection (CIPD), this policy comes in response to an incident where four computer science students were arrested after discovering a vulnerability in the FreeHour app.

Despite acting in good faith, the students faced legal consequences, highlighting the urgent need for clearer protections and legal guidance for ethical hackers. The proposed framework aims to formalise the process, encouraging cooperation between public and private entities and ensuring that cybersecurity research is conducted safely and responsibly.

Open to public input until 7 October 2024, the consultation is expected to lead to legislative reforms that distinguish ethical hacking from illegal activities, providing much-needed clarity for those working to enhance cybersecurity.

Experts warn of AI dangers in Oprah Winfrey special

Oprah Winfrey aired a special titled ‘AI and the Future of Us,’ featuring guests such as OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI Director Christopher Wray. The discussion focused largely on the potential risks and ethical concerns surrounding AI. Winfrey highlighted the need for humanity to adapt to AI’s rapid development, while Altman emphasised the importance of safety regulations.

Altman defended AI’s learning capabilities but acknowledged the need for government involvement in safety testing. However, his company has opposed California’s AI safety bill, which experts believe would provide essential safeguards. He also discussed the dangers of deepfakes and urged caution as AI technology advances.

Wray pointed out AI’s role in rising cybercrimes like sextortion and disinformation. He warned of its potential to be exploited for election interference, urging the public to remain vigilant in the face of increasing AI-generated content.

For balance, Bill Gates expressed optimism about AI’s positive impact on education and healthcare. He envisioned AI improving medical transcription and classroom learning, though concerns about bias and misuse remain.

China amends law to tackle data fraud

China’s top legislative body has approved changes to the country’s statistics law to combat data fraud. The move addresses growing concerns over the reliability of economic figures in the world’s second-largest economy. The amended regulations aim to prevent statistical manipulation and penalise officials involved in falsifying economic reports.

Authorities have acknowledged persistent problems with statistical fraud, which has led to public mistrust in economic data. The issue has become a major focus for lawmakers, as many believe it harms the accuracy of important economic indicators.

External analysts have long questioned the authenticity of Chinese data, particularly as the country grapples with an economic slowdown. The new law is part of ongoing efforts to restore confidence by cracking down on fraudulent reporting.

The Chinese government has vowed to investigate and penalise officials involved in data manipulation, seeking to improve transparency and the overall quality of economic statistics.

Meta revises AI labels on social media platforms to balance transparency and user experience

Meta’s decision to change how it labels AI-modified content on Instagram, Facebook, and Threads marks another step in the company’s evolving approach to generative AI. For content edited with AI tools, the ‘AI info’ label will move into the post’s menu, reducing the visibility of AI’s involvement and making it easier for users to overlook or miss the AI editing details in such posts.

However, for content fully generated by AI, Meta will continue to prominently display the label beneath the user’s name, ensuring that posts created entirely by AI prompts remain visibly marked. The distinction Meta is making here seems to reflect the varying degrees of AI involvement in content creation.

Meta aims to increase transparency about content labelling by specifying whether an AI designation comes from industry signals or from self-disclosure. This effort follows complaints and confusion over the previous ‘Made with AI’ label, particularly from photographers concerned that their real photos were being misrepresented.

This change may raise concerns about the potential for users to be misled, especially as AI editing tools become more sophisticated and the line between human and AI-created content continues to blur. It highlights the need for continued transparency as AI technology integrates more deeply into content creation across platforms.

Legal showdown could decide TikTok ban in US

TikTok is facing a critical legal battle that could determine the future of the app in the US. On Monday, the US Court of Appeals in Washington, DC, will hear arguments from TikTok and its parent company, ByteDance, as they seek to block a new law that threatens to ban the app by 19 January 2025. With around 170 million US users, TikTok’s fate hangs in the balance just as the presidential election ramps up.

Donald Trump, the Republican candidate, and Vice President Kamala Harris are using TikTok to engage with younger voters, underscoring the app’s significant political and social influence. However, the US government remains concerned about national security risks, particularly the potential for China to access American user data through the app. Citing fears of surveillance, lawmakers passed the measure requiring ByteDance to divest from TikTok.

ByteDance argues that the law violates free speech and insists that divesting from TikTok is not feasible. With a looming January deadline for a sale or a potential ban, TikTok’s legal team is seeking a ruling by early December. This would allow the US Supreme Court time to consider the case before any decision takes effect. President Joe Biden, who signed the law in April, holds the power to extend the deadline if ByteDance shows progress toward selling TikTok.

While the White House maintains that the move is about national security, not eliminating TikTok, the upcoming court ruling will be pivotal in shaping the app’s future in the US and possibly beyond.

NITDA and NBS join forces to transform Nigeria’s digital landscape

The National Information Technology Development Agency (NITDA) and the National Bureau of Statistics (NBS) have formed a strategic partnership to leverage data and technology to transform Nigeria’s digital landscape, aligning closely with President Bola Tinubu’s ‘Renewed Hope Agenda.’ By combining NITDA’s expertise in digital transformation with NBS’s data-driven insights, the collaboration is expected to significantly improve public service delivery, drive sustainable economic growth, and enhance policy-making.

In particular, the partnership focuses on data exchange and integration, facilitating more informed decisions across sectors such as infrastructure development, resource allocation, and urban planning, ensuring that initiatives are grounded in accurate and timely data. Moreover, the partnership emphasises fostering innovation and economic growth.

NITDA and NBS aim to create a digital ecosystem that supports tech startups and entrepreneurship, positioning Nigeria as a leader in the global digital economy. That collaboration is designed to attract foreign investment and create job opportunities, contributing to long-term economic prosperity.

Additionally, the partnership is committed to bridging the digital divide through digital skills development. By promoting digital literacy and modernising data processes with tools like Geographic Information Systems (GIS), NITDA and NBS will enhance decision-making and governance while empowering more Nigerians to participate in the digital economy and fostering inclusive growth.

Mauritius unveils Mobil ID digital identity card

Mauritius has launched the Mobil ID digital identity card as a significant milestone in its digital transformation journey. That initiative allows citizens to manage personal information, such as updating addresses or reporting lost physical IDs, and supports secure electronic document signing. Designed with dual authentication features, the Mobil ID enhances security and protects against identity theft while streamlining administrative processes for businesses and government agencies.

The launch of the Mobil ID is a key component of the broader ‘Digital Mauritius 2030’ strategy. The ambitious initiative aims to transform the country into a digitally driven economy by enhancing digital infrastructure, expanding 5G networks, modernising public services, and developing digital skills. The Mauritian government is committed to maintaining technological advancement while ensuring robust data protection, which positions the nation at the forefront of digital innovation and demonstrates its leadership in advancing technology across Africa.

Mauritius has also become the first African country to adopt a digital identity card that meets international ISO standards. Developed in collaboration with Thales and Harel Mallac Technologies, the Mobil ID sets a new benchmark for digital identity systems in the region, reflecting Mauritius’s commitment to leading digital innovation.

Salesforce launches local cloud platform in Israel for sensitive data

Salesforce has launched its Hyperforce cloud platform in Israel, marking its 17th global cloud location. The new platform will allow sensitive data from government entities and regulated companies to remain within Israel, ensuring compliance with local privacy laws. Initially, Hyperforce will operate on Amazon Web Services (AWS), with plans to potentially expand to Google Cloud in the future.

Before the launch, Israeli companies stored data at Salesforce’s Frankfurt facility, which had been approved for government use. The local cloud platform will now provide a more secure and convenient option for Salesforce’s customers in Israel, with all companies set to migrate soon.

Salesforce, which employs 750 people across three sites in Israel, has been heavily investing in AI. Its Israeli R&D centre plays a key role in developing AI and other advanced technologies, positioning the country as one of the company’s three major development hubs alongside the US and India.

The company’s move to expand its cloud services in Israel aligns with its broader strategy to integrate AI into its product offerings and drive future growth in revenue and profitability.

UK police arrest over 1,200 after riots using facial recognition

In the aftermath of anti-immigration protests and riots in the UK, police have arrested 1,280 individuals, largely through the use of retrospective facial recognition. Authorities matched video footage from various sources, including body-worn cameras, social media, and CCTV, to identify and apprehend suspects. The violence, which erupted after a stabbing in Southport, resulted in the charging of 796 people by the end of August, with more suspects under investigation.

Across 29 demonstrations from late July to early August, the police swiftly moved cases to court. By early September, 570 individuals had faced trial, with one man receiving a nine-year sentence for arson involving a hotel housing asylum seekers. Other offenders were handed sentences ranging from two to over three years.

Why does this matter?

Despite the riots subsiding, live facial recognition remains in use for public safety. North Wales Police deployed the technology at a recent football match, scanning nearly 35,000 faces without making any arrests. Authorities clarified that the system only flags individuals on a wanted list and deletes others’ data immediately. The system has also been used at ferry ports and will soon be trialled in Hampshire, continuing to play a role in police efforts nationwide.