As part of the process towards developing a Global Digital Compact (GDC), the UN Secretary-General has issued a policy brief outlining areas in which ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing sustainable development goals (SDGs), making the online space open and safe for everyone, and governing artificial intelligence (AI) for humanity.
The policy brief also suggests objectives and actions to advance such cooperation and ‘safeguard and advance our digital future’. These are structured around the following topics:
Digital connectivity and capacity building. The overarching objectives here are to close the digital divide and empower people to participate fully in the digital economy. Proposed actions range from common targets for universal and meaningful connectivity to putting in place or strengthening public education for digital literacy.
Digital cooperation to accelerate progress on the SDGs. Objectives include making targeted investments in digital public infrastructure and services, making data representative, interoperable, and accessible, and developing globally harmonised digital sustainability standards. Among the proposed actions are the development of definitions of safe, inclusive, and sustainable digital public infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint on digital transformation (a task envisioned for the UN itself).
Upholding human rights. Putting human rights at the centre of the digital future, ending the gender digital divide, and protecting workers are the outlined objectives in this area. One key proposed action is the establishment of a digital human rights advisory mechanism, facilitated by the Office of the UN High Commissioner for Human Rights, to provide guidance on human rights and technology issues.
An inclusive, open, secure, and shared internet. There are two objectives: safeguarding the free and shared nature of the internet, and reinforcing accountable multistakeholder governance. Some of the proposed actions include commitments from governments to avoid blanket internet shutdowns and refrain from actions disrupting critical infrastructures.
Digital trust and security. Objectives range from strengthening multistakeholder cooperation to elaborate norms, guidelines, and principles on the responsible use of digital technologies, to building capacity and expanding the global cybersecurity workforce. The proposed overarching action is for stakeholders to commit to developing common standards and industry codes of conduct to address harmful content on digital platforms.
Data protection and empowerment. Ensuring that data are governed for the benefit of all, empowering people to control their personal data, and developing interoperable standards for data quality are envisioned as key objectives. Among the proposed actions are an invitation for countries to consider adopting a declaration on data rights and a call to seek convergence on principles for data governance through a potential Global Data Compact.
Agile governance of AI and other emerging technologies. The proposed objectives relate to ensuring transparency, reliability, safety, and human control in the design and use of AI; putting transparency, fairness, and accountability at the core of AI governance; and combining existing norms, regulations, and standards into a framework for agile governance of AI. Actions envisioned range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector.
Global digital commons. Objectives include ensuring inclusive digital cooperation, enabling regular and sustained exchanges across states, regions, and industry sectors, and developing and governing technologies in ways that enable sustainable development, empower people, and address harms.
The document further notes that ‘the success of a GDC will rest on its implementation’. This implementation would be done by different stakeholders at the national, regional, and sectoral level, and be supported by spaces such as the Internet Governance Forum and the World Summit on the Information Society Forum. One suggested way to support multistakeholder participation is through a trust fund that could sponsor a Digital Cooperation Fellowship Programme.
As a mechanism to follow up on the implementation of the GDC, the policy brief suggests that the Secretary-General could be tasked to convene an annual Digital Cooperation Forum (DCF). The mandate of the forum would also include, among other things, facilitating collaboration across digital multistakeholder frameworks and reducing duplication; promoting cross-border learning in digital governance; and identifying and promoting policy solutions to emerging digital challenges and governance gaps.
A recent report by threat intelligence firm SpyCloud has shed light on the alarming exposure of Fortune 1000 telecommunications employees' credentials on dark web sites. The report reveals that researchers have uncovered approximately 6.34 million pairs of credentials – corporate email addresses and passwords – likely belonging to employees in the telecommunications sector.
The report highlights this as an ‘extreme’ rate of exposure compared to other sectors. In comparison, SpyCloud’s findings uncovered 7.52 million pairs of credentials belonging to employees in the tech sector, but this encompassed a significantly larger pool of 167 Fortune 1000 companies.
Media reports note that these findings underscore the heightened risk faced by employees within the telecommunications industry, as their credentials are more readily available on dark web platforms. The compromised credentials pose a significant threat to the affected individuals and their companies, as cybercriminals can exploit them for malicious activities such as unauthorised access, data breaches, and targeted attacks.
In the USA, the White House Office of Science and Technology Policy (OSTP) is releasing a public request for information (RFI) to learn more about the automated tools employers use to surveil, monitor, evaluate, and manage workers. The RFI rests on three main arguments: that these tools should be better understood, that the federal government should respond to the risks and opportunities they present, and that best practices should be shared with employers, worker organisations, technology vendors, developers, and others in civil society. The RFI is intended to advance the government's understanding of the design, deployment, prevalence, and impacts of automated tools; to inform new policy responses; to share relevant research, data, and findings with the public; and to amplify best practices. To that end, it proposes to gather workers' firsthand experiences with surveillance technologies; details from employers, technology developers, and vendors on how they develop, sell, and use these technologies; best practices for mitigating risks to workers; relevant data and research; and ideas for how the federal government should respond to the risks and opportunities identified.
Telegram’s CEO, Pavel Durov, has announced that the company would appeal a Brazilian court’s order to suspend its services temporarily. The court order follows the platform’s non-compliance with a prior court order to provide data on two neo-Nazi groups accused of inciting violence in schools. Durov claimed that compliance with such a request was ‘technologically impossible’.
The judge had also set a daily fine of nearly US$200,000 for noncompliance. Telegram’s CEO did not state whether the company intends to pay the fine.
In the USA, the Governor of Utah, Spencer Cox, has signed two laws introducing new measures intended to protect children online. The first law prohibits social media companies from using ‘a practice, design, or feature that […] the social media company knows, or which by the exercise of reasonable care should know, causes a Utah minor account holder to have an addiction to the social media platform’. The second law introduces age requirements for the use of social media platforms: social media companies are required to introduce age verification for users in Utah and to allow minors to create user accounts only with the express consent of a parent or guardian. The laws also prohibit social media companies from advertising to minors, collecting information about them, or targeting content to them. In addition, companies are required to enable parents or guardians to access minors’ accounts, and minors are not to be allowed to access their social media accounts between 10:30 pm and 06:30 am.
The laws – set to enter into force in March 2024 – have been criticised by civil liberties groups and tech lobby groups who argue that they are overly broad and could infringe on free speech and privacy rights. Social media companies will likely challenge the new rules.
The 8th IEEE European Symposium on Security and Privacy will be held on July 3–7, 2023, in Delft, the Netherlands, and is organised by the TU Delft Cybersecurity group.
Since its establishment in 1980, the IEEE Symposium on Security and Privacy has served as the foremost forum for presenting innovations in computer security and electronic privacy and for fostering connections between researchers and practitioners in the field. Expanding upon this achievement, IEEE launched the European Symposium on Security and Privacy (EuroS&P), which takes place annually in different European cities.
For more information, please visit the dedicated web page.
According to Meta, the change does not have any impact on users’ privacy settings and their ‘information will still be protected by UK data protection and privacy laws’.
The World Economic Forum and the Council on the Connected World published the State of the Connected World 2023 report exploring governance gaps related to the internet of things (IoT). The report outlines the findings of a survey conducted with 271 experts worldwide to understand the state of IoT affairs. The COVID-19 pandemic has increased IoT demand in health, manufacturing, and consumer IoT. However, there is a lack of confidence when it comes to matters such as privacy and security.
Two main governance gaps are identified: (1) a lack of governmental regulation and of implemented industry standards, and (2) IoT users’ heightened susceptibility to cyber threats and cyberattacks.
One recommendation is for businesses and governments to develop and implement practices to improve privacy and security and create a more inclusive and accessible IoT ecosystem. The need to improve equal access to technology and its benefits is also underscored.
The Bundeskartellamt’s preliminary conclusions of its administrative proceeding against Google state that users of Google services ‘are not given sufficient choice as to whether and to what extent they agree to [a] far-reaching processing of data. The choices offered so far, if any, are, in particular, not sufficiently transparent and too general.’ The office argues that users should be able to limit the processing of data to the specific service used and to differentiate between the purposes for which the data are processed. In addition, the choices offered must not be devised in a way that makes it easier for users to consent to the processing of data across services than not to consent to this.
Following the issuance of the statement of objections, Google has the opportunity to comment on the office’s preliminary assessment and present either reasons to justify its practices or suggestions to dispel the concerns. A final decision on the administrative proceeding is awaited in 2023.
In another change to be introduced in March 2023, new controls will allow teenage users to choose to ‘see less’ of certain types of adverts on both Facebook and Instagram.
Meta had previously put in place restrictions to stop advertising for teenagers based on their interests and activities.