Israel deploys facial recognition program in Gaza

Israel has deployed a sophisticated facial recognition program in the Gaza Strip, according to reports. The program, initiated after the 7 October attacks, is used to identify individuals linked to Hamas without their consent. It employs technology from Google Photos alongside a proprietary tool from Corsight AI, an Israeli firm specialising in facial recognition technology.

The facial recognition system, crafted in parallel with Israel’s military operations in Gaza, operates by collecting data from diverse sources, including social media platforms, surveillance footage, and inputs from Palestinian detainees. Israeli Unit 8200, the primary intelligence unit, played a pivotal role in identifying potential targets through these means.

Corsight’s technology, which the company claims can accurately identify individuals even when less than 50% of the face is visible, was used to build a facial recognition tool. By establishing checkpoints equipped with facial recognition cameras along critical routes used by Palestinians fleeing southwards, the Israeli military aims to expand the database and pinpoint potential targets, compiling a ‘hit list’ of individuals associated with the 7 October attack.

Despite soldiers acknowledging the limitations of Corsight’s technology, particularly with grainy images or obscured faces, concerns persist over misidentifications. One such incident involved the mistaken apprehension of Palestinian poet Mosab Abu Toha, who was interrogated and detained after being flagged by the system.

South Korea launches investigation into Worldcoin’s personal data collection

South Korea’s Personal Information Protection Commission (PIPC) has launched an investigation into cryptocurrency project Worldcoin following numerous complaints about its collection of personal information. Of particular concern is the project’s use of iris scanning in exchange for cryptocurrency. The PIPC announced on Monday that it will examine the company’s collection, processing, and potential overseas transfer of sensitive personal information, and will take action if any violations of local privacy rules are found.

It is worth noting that OpenAI, which co-founded Worldcoin, was fined last year by the privacy watchdog for leaking personal information of South Korean citizens through its ChatGPT application. This connection with OpenAI adds weight to the concerns surrounding the handling of personal data by Worldcoin.

Worldcoin is an identity-focused cryptocurrency project. Participants in the protocol receive WLD tokens in return for signing up. The project’s unconventional sign-up process has also raised concerns in other jurisdictions. As of now, the company has not responded to the investigation or the accusations.

Avast ordered to pay $16.5 million for illegally selling user browsing data

The US Federal Trade Commission (FTC) has ordered software company Avast to pay $16.5 million and to cease selling or licensing web browsing data for advertising purposes. The charges against Avast include allegations that the company collected and sold users’ browsing information without their consent, despite promising to protect their privacy.

Avast, a Czech company based in the UK, collected US consumers’ browsing information using browser extensions and antivirus software, according to the FTC complaint. The collected data included details about users’ web searches, visited webpages, religious beliefs, health concerns, political leanings, location, financial status, and visits to child-directed content. This information was stored indefinitely and sold to third parties without adequate notice or consent.

The FTC also argues that Avast deceived users by falsely claiming that its software would safeguard their privacy and block third-party tracking. The company failed to sufficiently inform consumers that it would sell their detailed, re-identifiable browsing data. The data was sold to over 100 third parties through Avast’s subsidiary, Jumpshot.

In addition to the fine, Avast and its subsidiaries will be prohibited from misrepresenting their data usage practices. Under the proposed order, Avast is required to delete the browsing information transferred to Jumpshot and any products or algorithms derived from that data.

The company must also notify consumers whose browsing information was sold without consent about the FTC’s actions. Furthermore, they will be required to implement a comprehensive privacy program to address the misconduct highlighted by the FTC.

What’s the future of AI services?

The recent disruption of OpenAI’s ChatGPT service, which occurred on 8 November, has sparked waves of concern within the AI and natural language processing communities. With more than 100 million active weekly users, ChatGPT, a stalwart in the AI landscape, faced an unexpected blackout lasting over 90 minutes. The repercussions extended beyond ChatGPT, casting a shadow over OpenAI’s entire ecosystem as its API services also succumbed to the disruption.

The disruption of ChatGPT and associated services has garnered significant attention not only due to its duration and extent, but also because it was the second outage within a 48-hour timeframe. A partial outage had occurred on Tuesday, 7 November, following earlier ones on Thursday, 19 October and Friday, 15 September. These partial outages, together with the latest, major one, raise concerns not only about the service’s stability and reliability, but also about the impact future outages may have on global business development.

OpenAI, a trailblazing company at the forefront of AI innovation, has been actively addressing the issue. It implemented a fix and restored the ChatGPT service, assuring users of its commitment to resolving the problem swiftly and effectively. However, according to the latest updates on OpenAI’s status website, the company is ‘dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack’ and is continuing to work to mitigate similar issues.

Presumed cyberattack

In a recent development, the hacktivist group Anonymous Sudan has claimed responsibility for the cyberattack on OpenAI’s ChatGPT. The group outlined its motives in a post on its Telegram channel. The primary reasons cited for targeting ChatGPT include OpenAI’s collaboration with Israel, particularly the CEO’s expressed intention to increase investment in the country.

Anonymous Sudan also underlined its focus on American companies as a driving force behind the attack. Additionally, the group alleged a bias in ChatGPT, suggesting a preference for Israel over Palestine in the chatbot’s interactions. These stated reasons shed light on the complex motivations behind the cyber assault, intertwining geopolitical concerns and perceived biases within the AI realm.

Wide-scale adoption of AI services

These unexpected outages have led many users to question the sustainability of the underlying infrastructure, since companies are increasingly adopting ChatGPT’s services and relying on them for their daily tasks. The trajectory of AI adoption across a wide range of industry sectors is poised for significant growth from 2022 to 2025. As of 2022, nearly half of the executives surveyed expected their companies to undergo wide-scale adoption of AI technologies. This anticipation underscores the recognition of AI as a transformative force in the world economy, with executives foreseeing the integration of AI across various facets of their operations.


Looking ahead to 2025, the sentiment among these industry leaders is even more optimistic. There is an expectation that the adoption rate of AI will not only continue to grow, but will surpass the earlier projections for wide-scale implementation. Executives envision a future where AI becomes not just a supplementary tool, but a critical component deeply embedded in the operational fabric of their companies.

This shift in expectations from wide-scale to critical implementation reflects a growing understanding of the profound impact that AI can have on the global economy workflow. AI is increasingly seen not merely as a trend or optional enhancement, but as an integral element that can drive efficiency, innovation, and strategic decision-making.


The factors driving this anticipated surge in AI adoption within the global work environment are manifold. The promise of enhanced data analysis, streamlined processes, and the ability to derive actionable insights from vast datasets positions AI as a catalyst for operational excellence. Additionally, as AI technologies continue to mature and demonstrate tangible benefits, companies are more inclined to invest in and fully embrace these advancements.


The increasing adoption of AI tools across personal and professional life raises the question of whether it will be possible to work and live without them in the near future: once we have grown used to automated processes, they will not be easy to replace, and longer periods of inaccessibility or malfunction will be hard to overcome. Analysing this scenario requires consideration of the current role of AI, its impact on different sectors, and potential future developments, as the recent outage of OpenAI’s services highlighted the need for stronger infrastructure and consistent service reliability.

Knowledge slaveries: Is Bottom-up AI a possible solution?

The constant use of AI services reveals our thoughts and emotions through interactions with AI platforms, resulting in a vast amount of data that can be used to extract patterns in our thinking. This trend has given rise to a new AI economy where these patterns are collected, codified, and monetised, raising concerns about privacy and cognition intrusions beyond what social media and tech platforms currently pose.


This development risks creating a state of ‘knowledge slavery’, where corporate or government AI monopolies control access to our knowledge. To counter this, it is essential to retain ownership over our thinking patterns, including those derived automatically through AI.

One possible solution lies in the development of bottom-up AI, claims Diplo’s Executive Director, Dr Jovan Kurbalija. Bottom-up AI is both technically feasible and ethically desirable, and has the potential to address governance concerns raised by generative AI tools like ChatGPT. It gives control back to individuals and communities, ensuring privacy and data protection. It also fosters inclusivity, innovation, and democracy by mitigating the risks of power centralisation inherent in generative AI.

Contrary to the prevailing belief that powerful AI platforms can only be built using big data, leaked documents from Google suggest that open-source AI could outperform proprietary models like ChatGPT. Open-source platforms such as Vicuna, Alpaca, and LLaMA are already offering similar quality while being more cost-effective, faster, more modular, and greener in terms of energy consumption.

The technology for bottom-up AI is advancing, but the quality of the data needs to be ensured. Currently, data labelling is mostly performed manually in low-cost English-speaking countries, raising labour law and data protection challenges. Diplo integrates data labelling into its daily operations, gradually building bottom-up AI by digitally annotating text during research and other tasks.

While the full adoption of bottom-up AI remains uncertain, it may coexist with top-down approaches. Some individuals and communities may be more inclined to experiment with and embrace bottom-up AI, while others stick to top-down AI out of inertia. However, questioning the prevailing AI paradigm and exploring alternatives is crucial for making informed decisions that benefit society as a whole and for reducing the disruption caused by future outages of the major AI service providers people rely on.


UN Secretary-General issues policy brief for Global Digital Compact

As part of the process towards developing a Global Digital Compact (GDC), the UN Secretary-General has issued a policy brief outlining areas in which ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing sustainable development goals (SDGs), making the online space open and safe for everyone, and governing artificial intelligence (AI) for humanity. 

The policy brief also suggests objectives and actions to advance such cooperation and ‘safeguard and advance our digital future’. These are structured around the following topics:

  • Digital connectivity and capacity building. The overarching objectives here are to close the digital divide and empower people to participate fully in the digital economy. Proposed actions range from common targets for universal and meaningful connectivity to putting in place or strengthening public education for digital literacy. 
  • Digital cooperation to accelerate progress on the SDGs. Objectives include making targeted investments in digital public infrastructure and services, making data representative, interoperable, and accessible, and developing globally harmonised digital sustainability standards. Among the proposed actions are the development of definitions of safe, inclusive, and sustainable digital public infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint on digital transformation (something the UN would do). 
  • Upholding human rights. Putting human rights at the centre of the digital future, ending the gender digital divide, and protecting workers are the outlined objectives in this area. One key proposed action is the establishment of a digital human rights advisory mechanism, facilitated by the Office of the UN High Commissioner for Human Rights, to provide guidance on human rights and technology issues. 
  • An inclusive, open, secure, and shared internet. There are two objectives: safeguarding the free and shared nature of the internet, and reinforcing accountable multistakeholder governance. Some of the proposed actions include commitments from governments to avoid blanket internet shutdowns and refrain from actions disrupting critical infrastructures.
  • Digital trust and security. Objectives range from strengthening multistakeholder cooperation to elaborate norms, guidelines, and principles on the responsible use of digital technologies, to building capacity and expanding the global cybersecurity workforce. The proposed overarching action is for stakeholders to commit to developing common standards and industry codes of conduct to address harmful content on digital platforms. 
  • Data protection and empowerment. Ensuring that data are governed for the benefit of all, empowering people to control their personal data, and developing interoperable standards for data quality are envisioned as key objectives. Among the proposed actions are an invitation for countries to consider adopting a declaration on data rights and seeking convergence on principles for data governance through a potential Global Data Compact. 
  • Agile governance of AI and other emerging technologies. The proposed objectives relate to ensuring transparency, reliability, safety, and human control in the design and use of AI; putting transparency, fairness, and accountability at the core of AI governance; and combining existing norms, regulations, and standards into a framework for agile governance of AI. Actions envisioned range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector. 
  • Global digital commons. Objectives include ensuring inclusive digital cooperation, enabling regular and sustained exchanges across states, regions, and industry sectors, and developing and governing technologies in ways that enable sustainable development, empower people, and address harms. 

The document further notes that ‘the success of a GDC will rest on its implementation’. This implementation would be done by different stakeholders at the national, regional, and sectoral level, and be supported by spaces such as the Internet Governance Forum and the World Summit on the Information Society Forum. One suggested way to support multistakeholder participation is through a trust fund that could sponsor a Digital Cooperation Fellowship Programme. 

As a mechanism to follow up on the implementation of the GDC, the policy brief suggests that the Secretary-General could be tasked to convene an annual Digital Cooperation Forum (DCF). The mandate of the forum would also include, among other things, facilitating collaboration across digital multistakeholder frameworks and reducing duplication; promoting cross-border learning in digital governance; and identifying and promoting policy solutions to emerging digital challenges and governance gaps.

Employees at Fortune 1000 telecom companies are some of the most exposed on the dark web, researchers report

A recent report by threat intelligence firm SpyCloud has shed light on the alarming vulnerability of employees at Fortune 1000 telecommunications companies on dark web sites. The report reveals that researchers have uncovered approximately 6.34 million pairs of credentials, including corporate email addresses and passwords, which are likely associated with employees in the telecommunications sector.

The report highlights this as an ‘extreme’ rate of exposure compared to other sectors. In comparison, SpyCloud’s findings uncovered 7.52 million pairs of credentials belonging to employees in the tech sector, but this encompassed a significantly larger pool of 167 Fortune 1000 companies.
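To make the comparison concrete, the per-company average can be computed from the figures above. This is a rough illustration only: the report does not state how many telecom companies are in the Fortune 1000, so only the tech-sector average is calculable here.

```python
# Figures from the SpyCloud report cited above.
tech_credentials = 7_520_000      # credential pairs, tech sector
tech_companies = 167              # Fortune 1000 tech companies covered
telecom_credentials = 6_340_000   # credential pairs, telecom sector

# Average exposure per tech-sector company.
tech_avg = tech_credentials / tech_companies
print(f"Tech sector: ~{tech_avg:,.0f} exposed credential pairs per company")

# The telecom total is close to the tech total, but it is spread across
# far fewer companies than the 167 tech firms, so per-company exposure
# in telecom is necessarily much higher -- the 'extreme' rate noted above.
print(f"Telecom sector: {telecom_credentials:,} pairs across a much smaller set of companies")
```

With the tech sector averaging roughly 45,000 exposed credential pairs per company, a near-equal telecom total concentrated in a handful of firms explains the report’s characterisation.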

Media reports note that these findings underscore the heightened risk faced by employees in the telecommunications industry, as their credentials are more readily available on dark web platforms. The compromised credentials pose a significant threat to the affected individuals and their companies, as cybercriminals can exploit them for malicious activities such as unauthorised access, data breaches, and targeted attacks.

White House OSTP issues request for information on automated tools used by employers

In the USA, the White House Office of Science and Technology Policy (OSTP) has released a public request for information (RFI) to learn more about the automated tools used by employers to surveil, monitor, evaluate, and manage workers. The RFI’s main arguments are that such tools should be better understood, that the federal government should respond to any relevant risks and opportunities associated with them, and that best practices should be shared with employers, worker organisations, technology vendors, developers, and others in civil society.

The RFI is intended to advance the government’s understanding of the design, deployment, prevalence, and impacts of automated tools; to inform new policy responses; to share relevant research, data, and findings with the public; and to amplify best practices. To that end, it proposes to gather workers’ firsthand experiences with surveillance technologies; details from employers, technology developers, and vendors on how they develop, sell, and use these technologies; best practices for mitigating risks to workers; relevant data and research; and ideas for how the federal government should respond.

Telegram to appeal Brazilian judge’s order to block the platform

Telegram’s CEO, Pavel Durov, has announced that the company would appeal a Brazilian court’s order to suspend its services temporarily. The court order follows the platform’s non-compliance with a prior court order to provide data on two neo-Nazi groups accused of inciting violence in schools. Durov claimed that compliance with such a request was ‘technologically impossible’.

The judge had also set a daily fine of nearly US$200,000 for noncompliance. Telegram’s CEO did not state whether the company intends to pay the fine.

US state of Utah introduces laws that prohibit social media platforms from allowing access to minors without explicit parental consent

In the USA, the Governor of Utah, Spencer Cox, has signed two laws introducing new measures intended to protect children online. The first law prohibits social media companies from using ‘a practice, design, or feature that […] the social media company knows, or which by the exercise of reasonable care should know, causes a Utah minor account holder to have an addiction to the social media platform’. The second law introduces age requirements for the use of social media platforms: social media companies are required to introduce age verification for users in Utah and to allow minors to create user accounts only with the express consent of a parent or guardian. The laws also prohibit social media companies from advertising to minors, collecting information about them, or targeting content to them. In addition, companies are required to enable parents or guardians to access minors’ accounts, and minors should not be allowed to access their social media accounts between 10:30 pm and 6:30 am.

The laws – set to enter into force in March 2024 – have been criticised by civil liberties groups and tech lobby groups who argue that they are overly broad and could infringe on free speech and privacy rights. Social media companies will likely challenge the new rules.

IEEE European Symposium on Security and Privacy 2023 (EuroS&P)

The 8th IEEE European Symposium on Security and Privacy will be held on 3–7 July 2023 in Delft, the Netherlands, and is organised by the TU Delft Cybersecurity group.

Since its establishment in 1980, the IEEE Symposium on Security and Privacy has served as the foremost forum for presenting innovations in computer security and electronic privacy and for fostering connections between researchers and practitioners in the field. Expanding upon this achievement, IEEE launched the European Symposium on Security and Privacy (EuroS&P), which takes place annually in different European cities.

For more information, please visit the dedicated web page.