Hello from the CyberVerse: Maximizing the Benefits of Future Technologies

1 Nov 2023 13:30h - 14:05h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Ahmed Al-Isawi

The analysis of the provided data highlights several key points regarding the concerns and potential of emerging technologies. One notable concern is the potential impact of hacking in a highly digital world. Ahmed Al-Isawi, a renowned expert, expresses his worries about the security risks associated with hacking. Having started as a hacker himself, he understands how devastating attackers can be. He currently holds responsibility for security in a digital city being developed in Neom.

Another concern is the need to foster innovation. It is emphasized that fostering a team that can innovate is crucial. A NASA study is mentioned, which indicates that children as young as five years old possess a 95% capability to innovate. Additionally, inculcating the right skills, knowledge, and values in teams is highlighted as being vital in cultivating a culture of innovation.

The role of Artificial Intelligence (AI) in enhancing cybersecurity is also discussed. It is stated that AI can be used to monitor a large supply chain, detect anomalies, and respond to them within a limited timespan. However, traditional methods and solutions may not suffice for modern cybersecurity problems. Ahmed Al-Isawi argues that if organisations continue to rely solely on traditional methods, they will fail to fully exploit the potential of AI in securing their systems.
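The kind of automated supply-chain anomaly detection described above can be illustrated with a minimal sketch. The function, data, and threshold below are illustrative assumptions rather than anything presented in the session: a simple z-score test flags events that deviate sharply from a baseline, the sort of signal an AI-driven monitoring pipeline would need to surface within the short reaction windows the panel mentions.

```python
from statistics import mean, stdev

def flag_anomalies(values, z_threshold=3.0):
    """Return indices of values that deviate from the sample mean by
    more than z_threshold standard deviations (a toy anomaly test)."""
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]

# Hypothetical supply-chain metric: daily request volumes from one
# supplier, with a sudden spike on the last day.
volumes = [10.0] * 20 + [100.0]
print(flag_anomalies(volumes))  # the spike at index 20 is flagged: [20]
```

In practice such statistical baselining would be only one layer; production systems combine it with learned models and threat intelligence, but the principle of scoring deviations against an expected baseline is the same.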

The application of AI in breaking traditional boundaries is presented as a positive aspect. By employing AI innovatively, it is suggested that AI has the potential to overcome traditional limitations. Moreover, the shrinking turnaround time for detecting and reacting to cybersecurity incidents is highlighted, indicating that humans alone cannot cope with the short timeframe and that AI can play a significant role in addressing this challenge.

The metaverse, a virtual space, is explored in terms of its cybersecurity challenges and potential benefits. One notable challenge is the issue of user protection, as observed in the case of Second Life, an early example of a metaverse that faced problems with bullying and harassment. However, there is also optimism regarding the potential use of decentralised digital identities to improve behaviour in the metaverse. It is proposed that having people identified in the digital world may lead to better behaviour.
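The decentralised-identity idea mentioned above can be sketched in miniature. Real decentralised identifier (DID) systems rely on public-key signatures and verifiable credentials; the keyed-MAC stand-in below is a deliberate simplification, and all names and claims in it are made up for illustration. The point it shows is the mechanism the summary alludes to: a platform can verify that a claim was genuinely issued for a given identity before granting access.

```python
import hmac
import hashlib

def sign_claim(secret: bytes, did: str, claim: str) -> str:
    """Bind a claim to a decentralised identifier (DID) with a keyed MAC.
    (Real DID systems use asymmetric signatures; this is a simplified stand-in.)"""
    message = f"{did}|{claim}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_claim(secret: bytes, did: str, claim: str, signature: str) -> bool:
    """Check that a presented claim was really issued for this DID."""
    expected = sign_claim(secret, did, claim)
    return hmac.compare_digest(expected, signature)

# A platform could require such a verifiable claim before letting an
# avatar interact with other users.
sig = sign_claim(b"issuer-key", "did:example:alice", "verified-adult")
print(verify_claim(b"issuer-key", "did:example:alice", "verified-adult", sig))   # True
print(verify_claim(b"issuer-key", "did:example:mallory", "verified-adult", sig)) # False
```

The verification step is what links identity to accountability: a claim presented under a different identifier fails, which is the behavioural lever the panel suggests could discourage harassment.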

The importance of interdisciplinary cooperation and involving more than just cybersecurity experts in protecting the metaverse is emphasised. Authorities such as the police are suggested to contribute to maintaining order in the digital space.

Advancements in education through the use of the metaverse are highlighted. It is suggested that the metaverse enables school experiments to be conducted in a safe virtual environment and may lead to cost reduction for schools.

Regarding regulatory frameworks, it is argued that current regulations may not be sufficient to protect emerging technologies such as AI. The asymmetric nature of emerging technologies, where AI is expected to be used by approximately 60% of employees, raises concerns about the lack of policies to regulate its use.

Another concern raised is the potential for AI to produce faked or hallucinated information, especially with the development of generative AI. As a result, the need for AI to provide transparency and explain its processes is stressed.

It is noted that while regulations are important, they alone will not solve everything in the context of preserving values in an uncontrolled metaverse. Other factors such as education, parenting, and cultural and religious values are deemed necessary for value preservation.

The human element within the digital ecosystem is identified as crucial in preserving values. Humans are often considered the weakest link in a digital ecosystem, and education and parenting are seen as vital in addressing this issue.

Lastly, the significance of open-source development and public accessibility in advancing technology is highlighted. It is suggested that open-source contributions and public exploration of technology can help accelerate advancements, as closed-door development has been slowing down progress.

In conclusion, the analysis sheds light on various concerns and potentials related to emerging technologies. It underlines the need for heightened cybersecurity measures, fostering innovation, and acknowledging the role of AI in enhancing security. Moreover, it highlights the challenges and benefits of the metaverse, the need for updated regulatory frameworks, and the importance of the human element and open-source development in the digital ecosystem. Overall, this analysis provides valuable insights into the complex landscape of emerging technologies.

Adam Russell

During the cybersecurity discussion, the speakers addressed several key topics. They first highlighted the increasing complexity of transactions and data storage worldwide. With more transactions occurring daily and a growing volume of data being stored, the need for robust security platforms and tools is increasing.

The participants also expressed concern over the persistent threat of attackers finding ways to penetrate networks, even with advanced security measures in place. They specifically mentioned the introduction of ransomware as a method employed by attackers. Despite advancements in cybersecurity, attackers are still able to exploit vulnerabilities and gain unauthorized access to systems.

To combat these threats, organisations are increasingly turning to artificial intelligence (AI). AI is being used to quickly gain context on adversaries and reduce the time it takes to detect potential cyber attacks. By leveraging AI technologies, organisations can enhance their ability to identify and prevent these threats.

The emergence of quantum computing was another significant topic of discussion. Although quantum computing brings various benefits, it also introduces cybersecurity risks. However, the speakers stressed that at present, quantum computing does not pose a threat to encryption systems. Nevertheless, they highlighted the importance of exploring post-quantum cryptography as an opportunity to address these future risks.

The importance of collaboration and teamwork in strengthening cybersecurity was also emphasized. Participants acknowledged that different facets need to work together, as everyone brings their unique expertise to the table. By collaborating, stakeholders can bolster the technology and its security, ensuring a more robust defense against cyber threats.

In virtual spaces, regulation and safety measures were discussed. Speakers underscored the need for flexible, ecosystem-specific policies to ensure safety while promoting innovation. They cited the example of Second Life, which successfully implemented user-friendly regulations to safeguard users and encourage innovation. The notion of a “metaverse of metaverses” was also introduced, highlighting the existence of diverse ecosystems with their specific safety measures.

Regulation was seen as crucial for the security of critical systems and the safety of users. However, the speakers cautioned against rushing into extensive regulation on top of artificial intelligence (AI) and large language models. They expressed concern that excessive regulation could impede technology adoption and hinder a country’s ability to harness its potential.

The importance of partnerships and international cooperation in combating global cyber threats was emphasized. The participants cited ongoing efforts to combat child safety issues, tackle ransomware attacks, and establish public-private partnerships with companies that host substantial amounts of data. Collaboration was viewed as key to addressing the evolving landscape of cyber threats effectively.

In conclusion, the discussion on cybersecurity highlighted the challenges and opportunities brought forth by the increasing complexity of transactions, data storage, and emerging technologies. The participants emphasized the need for robust security measures, including the use of AI and exploration of post-quantum cryptography. Collaboration, regulation, and partnerships were viewed as vital tools in fortifying cybersecurity and safeguarding critical systems and user safety.

Moderator – Lucy Hedges

During the discussions, the speakers delved into the complexities of emerging technologies, focusing on AI, cybersecurity, and the virtual world. They acknowledged that we have barely scratched the surface of AI's potential benefits or harms. While it is a long-standing technology, it is only now gaining mainstream attention.

One of the main points raised was the need to find a balance in how AI is used, given its potential for both benefit and harm. The speakers noted that the full extent of AI's benefits and dangers is still not known. This highlights the importance of carefully considering and managing the deployment of AI technologies to harness their potential advantages whilst mitigating the risks.

The discussions also highlighted the significance of teamwork in innovation and effective cybersecurity. A diverse team with different skills and perspectives fosters innovation and strengthens technology security. By collaborating and working together, different facets of a team contribute to building a more robust environment for enhancing technology security.

While AI can be effectively leveraged to enhance cybersecurity, it was also acknowledged that emerging technologies, specifically AI, present significant cybersecurity challenges. The rapid advancement and complexity of AI technology create new vulnerabilities that must be addressed to ensure the security of digital systems and infrastructure.

The negative aspects of the virtual world were also discussed, particularly experiences of harassment and bullying in platforms like Second Life. It was argued that there is a lack of preventative measures and punitive actions in place to address such behaviors. Thus, there is a need for regulation to prevent and punish bad behavior in the virtual world, ensuring a safer online environment.

Additionally, the discussions highlighted the intertwining of digital and physical lives, emphasizing the need to regulate these experiences. As digital lives become increasingly connected with the physical world, effective regulations must be put in place to protect individuals and maintain peace, justice, and strong institutions in both realms.

The importance of developing emerging technologies in the public domain was another noteworthy point raised. By allowing everyone to “play with” and experience these technologies through open source support, there can be faster knowledge generation and advancement than with traditional research approaches. This aligns with the goal of accelerating progress and knowledge sharing in the field of emerging technologies.

Overall, the discussions were neutral to positive in sentiment, with recognition of the potential benefits and challenges associated with emerging technologies. The speakers encouraged finding a balance, fostering teamwork, addressing cybersecurity challenges, regulating the virtual world, and promoting the development of emerging technologies in the public domain. These discussions shed light on the intricacies and complexities surrounding these topics, urging stakeholders to approach these technologies with caution and responsibility.

Chante Maurio

The analysis provides a comprehensive overview of various perspectives on the benefits and challenges of AI and emerging technologies. One key finding is that AI technology advancement has both advantages and drawbacks. On the positive side, it provides the opportunity to process large sets of data and use them in predictive ways. However, there are concerns that this advancement also allows bad actors to be trained at a faster rate. Furthermore, it enables less skilled individuals to build capabilities that they would not have otherwise been able to acquire.

In terms of emerging technologies, the analysis highlights the challenges they pose not only in terms of technological advancements but also in talent acquisition. To overcome these challenges, some argue for the utilization of AI to substitute for certain analysts and upskill existing ones. This approach is seen as a way to address the talent gap in this rapidly evolving field.

Education and proper educational programs emerge as crucial factors for the success of the global economy in mitigating the risks associated with emerging technologies. It is believed that these programs can help individuals and organizations navigate the complexities of this evolving landscape and ensure the development of necessary skills. Additionally, global harmonization of regulations is seen as vital for preventing issues of equity and competition that can arise from uneven adoption and control of emerging technologies.

The timing of introducing frameworks, standards, and regulations is also deemed critical. If introduced too soon, regulations may hinder technology’s potential. Thus, it is recommended to carefully consider the best time for implementing regulations to strike a balance between innovation and regulation.

Ethical considerations are viewed as an important aspect of tech regulation, and it is suggested that they should be managed alongside the implementation of technology. Technicians must not overlook the ethical dimensions while focusing solely on technical requirements. This recognition highlights the need for an inclusive and comprehensive approach to tech regulation.

In terms of cybersecurity, the analysis emphasizes the importance of education and training. Numerous resources can facilitate education and training in this field, such as technical documents and standards offered by the National Institute of Standards and Technology (NIST), free cybersecurity training provided by organizations like the Global Cyber Alliance and the Cyber Readiness Institute, and training and certificate programs offered by testing and certification organizations.

The analysis also recognizes the significance of communities, forums, and the exchange of ideas. These aspects are seen as essential for collective learning and the development of innovative solutions in response to emerging technologies.

The importance of introducing frameworks and standards at the right time into an ecosystem is underscored. While baseline standards are required, the adoption of these standards remains somewhat fragmented. It is acknowledged that certain additions and deviations from the standards may have purpose and necessity, but they should be mapped back to the baseline to ensure coherence and interoperability.

Finally, the analysis highlights the importance of international collaboration in aligning standards. Organizations such as IEC, ISO, and ISA are commended for providing forums that facilitate collaboration and cooperation in developing and aligning standards.

Overall, the analysis reveals that while AI and emerging technologies bring about numerous opportunities, they also pose challenges that require careful consideration. Ensuring proper education, timely regulations, ethical considerations, cybersecurity training, community building, and international collaboration are identified as critical factors in navigating the evolving landscape of these technologies.

Session transcript

Moderator – Lucy Hedges:
Hello from the cyberverse, maximizing the benefits of future technologies. Adam Russell, Vice President, Cloud Security, Oracle. Ahmed Al-Isawi, Director of Cybersecurity Governance, Risk and Compliance (GRC), NEOM. Chante Maurio, Vice President and General Manager, Identity Management and Security, UL Solutions. Lucy Hedges, Moderator, Technology Journalist and TV Presenter. All right, well that’s the introductions out of the way. Hello everybody, hope we’re all good and the energy levels are still high even though we are getting towards the end of the day. So we are about to dive into why it’s so critical to understand and act upon the implications of a digital future in order to prepare for it from a cybersecurity perspective, and ultimately lay the foundations of a stable and secure cyberspace for future generations. Now I have a brilliant bunch of esteemed panelists to my left who are very well versed to talk in this area and divulge their expertise while navigating through the current progress of emerging technologies like quantum computing, AI and the metaverse, for example, and why we need to develop mechanisms and policies to maximize the benefits and opportunities presented by these future technologies. So first things first, hi guys, how are you? Adam, Chante and Ahmed, all the way over there, hi. So we’ve got 35 minutes to talk about quite a complex topic, so I’m just going to dive in with my first question to get things going. I think a great place to start would be to paint a bigger picture, to really contextualize the conversation. So how has the cybersecurity landscape evolved with the advent of emerging technologies like quantum computing, AI and the metaverse? Adam, I’m going to throw that to you first.

Adam Russell:
You’ve heard a lot of these topics today, from the advent of AI, to how we can use AI for security purposes, and some of the risks with AI overall around safety. Overall I think the world’s growing more and more complex. You’re seeing more transactions on a daily basis, more data being stored throughout the world. And so we’re growing our security platforms and tools globally. We’re introducing new techniques to prevent hackers from attacking our data and introducing ransomware across the landscape. But the data shows that, unfortunately, attackers are continually getting into our networks. So with the advent of AI, what my organization is ultimately doing from a security operations perspective is utilizing AI for gaining context quickly on adversaries and bringing down that mean time to detection. We’re introducing those capabilities into our tool sets and then leveraging them to protect customers globally.

Moderator – Lucy Hedges:
Yeah, it really is quite fascinating just how fast everything’s evolved. Companies like Oracle really have to be at the top of their game in order to not only help yourselves but help customers and businesses as well. Do you guys have anything to add to that question?

Chante Maurio:
Lucy, maybe, well first of all thanks to NCA and GCF for having me here today. And maybe just to add, with any technology advancement there’s positives. And as you spoke to, there’s the opportunity to go through large sets of data and use it in predictive ways. There’s also the challenges that come with that on the other hand. And with AI, it enables the bad actors to be trained up faster. It takes maybe a less skilled individual and allows them to build capabilities that they would otherwise not have been able to. So it increases the threat in many ways.

Moderator – Lucy Hedges:
Go on, Ahmed.

Ahmed Al-Isawi:
First of all, assalamu alaikum wa rahmatullah wa barakatuh, and I am really thankful for being here. Thanks to the NCA and the Global Cybersecurity Forum for making this happen. And actually, to be honest, I’m terrified. Yeah? I’m really terrified. Why? Because I started as a hacker. And I know exactly how hackers are thinking and how devastating they can be. Imagine we live in a city where everything around us is digital, where we have sensors around us collecting data about us and about our personal lives. For example, I’m thinking about The Line being developed in NEOM. I’m responsible over there, with the rest of the team led by Mr. Al-Masferhizi and the rest of my colleagues. We have the responsibility of protecting this future and protecting the livability of the residents who will come there, the companies that will come, the business that will come there. So actually, I’m terrified, because I’m standing on the front end of all the advances that human science and innovation have reached. How can we really take it further and protect the future? How can I protect the future of my children when AI helps them in solving, for example, their homework? Are they getting the right education? Are they really developing their skills while the AI itself is helping them? How much originality are we keeping in the future generations? Things like that. So actually, I’m a little bit terrified.

Moderator – Lucy Hedges:
Yeah, I think you have a right to be terrified. AI is something that’s been around for an incredibly long time, but it’s only really being pushed into the mainstream, I’d say now. I think it might be fair to say that. And it’s incredibly complex. We’re still really only scratching the surface of how we’re going to benefit from this technology. Is it going to be detrimental to us? Is it going to be incredibly beneficial? And what is the best way to really balance that? So with that, what would you say are the most important cybersecurity challenges of these emerging technologies? And there’s a lot of emerging technologies, but is there anything that stands out to you guys?

Adam Russell:
I think ultimately, right now, we have a true opportunity to look forward on post-quantum crypto. That is an emerging topic that’s been discussed for the last 10 years. But we’re finally coming to a reality where a large percentage of tech companies globally are introducing quantum computing, leveraging qubits. And there’s a lot of energy in the cybersecurity spectrum, through NIST in the U.S. as well as organizations globally in Germany, as well as in the Kingdom here, looking at mechanisms to safeguard their data and their encrypted data against threats. NIST just recently announced its selection of PQC algorithms for signing and digital signatures, as well as the encryption mechanisms under the hood, such as Kyber. And although quantum computing isn’t breaking our encryption today, it’s nice to see that this isn’t a fear-mongering effect in the cybersecurity spectrum. We’re actually taking this as an opportunity rather than as a challenge.

Moderator – Lucy Hedges:
Yeah, so you’re seeing things moving in the right direction right now. I can see you nodding away, Chante. Have you got anything to add to that?

Chante Maurio:
Maybe to supplement it, you’re talking about the technology aspect of the challenges. And when we think about these emerging technologies, in addition to the technology challenges, there’s the people challenge. There’s the talent challenges that we all have in the marketplace. So finding ways to overcome the talent challenge, whether it’s being using AI to substitute for some of the analysts and upskilling the analysts. We talked about that in an earlier session today. Or whether it’s putting together proper educational programs. It’s going to be very important for the success of the global economy in coming back against some of the risks that are created.

Ahmed Al-Isawi:
I can expand to what my colleague panelists shared here. Maybe the challenge for us as we are in the leadership right now is how can we lead others to innovation and beyond innovation. We need to look at the problem not only from technology but also from other dimensions. Like how us as leadership can we lead others. There’s a very interesting study. I think it was conducted by NASA itself. They tried to understand how much innovation, how much percentage of innovation in different age categories. For example, they found in the children of age 5, they have 95% capability to innovate new things. For example, in age 31, I think, it became much less, like 5%. I’m at age 44. I’m wondering how much innovation I can bring to the table. Maybe my responsibility and the challenge on me right now is how can I foster a team that innovates. How can I multiply this within my team? How can I drive them through this journey? Of course, I believe whatever challenges will come in front of us, if we have the right skills, knowledge and values in our team, in the thinking of our team, in the design thinking, for example, for the future, we can embed cyber security from the initial stages of the ideation itself and then drive it through. One of the things we are discussing in the big projects being developed in NEOM, what’s the right ontology of things? How can we bring security in the ontology of things themselves? There’s many challenges, but maybe if I can conclude with this, the challenge on us as the leaders of today to build the future, to lead to the future.

Moderator – Lucy Hedges:
I think a key word just to pull out of what you just said there is team. This is a collaborative effort. You feel that compared to maybe the younger generation, you don’t feel as innovative, you’re not as innovating as much, but you are really strong when it comes to leadership. You’ve got so many different facets working together. That’s what helps build a stronger environment in order to bolster this technology and in order to make everything and all these emerging technologies more secure. We can’t do things by ourselves. We all bring different things to the table. I just want to take it back and go back to talking about AI for a sec. How can AI be effectively leveraged to enhance cyber security and what are the considerations for AI-driven threat detection and response?

Adam Russell:
I think AI has been discussed quite a bit, Lucy, throughout today’s conference. Yes, it’s an underlying theme. But I don’t think it’s something we should all fear. It’s a tool that we can leverage to better protect our networks as well as our people and data. You’re seeing AI being applied to security operations as well as, as an example, the supply chain security aspect. At least in the context of many startups globally, what we’re seeing is they’re looking at mechanisms to perform semantic analysis and detect adversaries that are pushing back doors into the most popular third-party libraries. They’re leveraging that social network graph on vector databases as well as the database backends and your normalized relational databases. You can use large language models to detect when there’s a single developer developing on log4j, as an example, or a developer sitting in country X that your particular country doesn’t trust anymore. It gives us the ability to better understand our supply chain, which today is more complex than ever, and make decisions on that on a real-time basis. Then you can apply that in your build system, so you can evict actors or say, I no longer want to take a dependency on that particular third-party library. AI is a powerful tool in that context, but it’s also a threat intelligence component on the supply chain.
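The heuristics described here, such as a library maintained by a single developer or by someone in a jurisdiction an organisation no longer trusts, can be sketched as a simple filter over dependency metadata. The function, data shapes, and names below are assumptions for illustration only; a real pipeline would pull this metadata from package registries and feed the flags into the build system, as the speaker describes.

```python
def flag_risky_dependencies(deps, untrusted_locations=frozenset()):
    """deps maps a package name to a list of (maintainer, location) pairs.
    Flags packages with a bus factor of one, or with a maintainer based
    in a location the organisation has chosen not to trust."""
    flagged = {}
    for name, maintainers in deps.items():
        reasons = []
        if len({m for m, _ in maintainers}) <= 1:
            reasons.append("single maintainer")
        if any(loc in untrusted_locations for _, loc in maintainers):
            reasons.append("untrusted location")
        if reasons:
            flagged[name] = reasons
    return flagged

# Hypothetical dependency metadata (all names and locations are made up).
deps = {
    "logthing": [("alice", "SE")],
    "libfoo": [("bob", "XX"), ("carol", "DE")],
    "libbar": [("dan", "DE"), ("eve", "FR")],
}
print(flag_risky_dependencies(deps, untrusted_locations={"XX"}))
```

Where an LLM comes in, per the discussion, is upstream of a filter like this: extracting and normalising the maintainer and provenance signals from messy registry and commit data that a rule alone could not parse.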

Moderator – Lucy Hedges:
It’s like AI versus AI, isn’t it? Like I said, it’s this collaborative effort. You use it to enhance current infrastructure. Anyone got anything to add before I move on?

Ahmed Al-Isawi:
Definitely, AI has a big role in enhancing cybersecurity, but I think the biggest limit is our imagination. Traditionally, we need a big space for a SOC, a security operations centre. I think through the metaverse, for example, and AI, it can be something different, something more innovative, breaking the traditional boundaries. One of the things we always say is that if we keep trying to solve modern problems using traditional methods and solutions, whether it’s governance, whether it’s policy, whether it’s even the idea of the solution itself, we will never be able to secure the AI nor use the AI itself. Just to name a few examples: if we have this huge supply chain, we need not only to monitor ourselves internally and our digital cyberspace, we need to keep an eye even on the supply chain that is supplying us the technology goods or whatever kind of product or service. If we can use AI to pick up anything that’s happening there and react to it directly, we have only a matter of minutes before something bad happens. Even the turnaround time for detecting and reacting to an incident is becoming much, much shorter. We have a much shorter window. I think from recent studies, it’s around eight minutes as a window to react. As humans, we cannot do that. We have to use AI for that.

Moderator – Lucy Hedges:
That’s such a great example of just the power of artificial intelligence and how it can really enhance businesses. You mentioned the metaverse, which segues me in nicely to my next question. Of course, the metaverse is this virtual interconnected digital universe and it presents a host of quite unique cybersecurity challenges with the convergence of all these various technologies that are living within this digital world. What unique cybersecurity challenges does the metaverse present? What security measures are necessary to protect users and their data in these virtual environments? Who wants to go?

Ahmed Al-Isawi:
One of the earliest metaverses is called Second Life. It’s a game. Sadly, I think you will be very lucky if you pass the first five seconds without being bullied or harassed in Second Life. This is one of the challenges. As cybersecurity professionals, we can protect the infrastructure, but who will protect the individuals inside that digital world? Human-to-human interaction is something very important, in my opinion. We can use the emerging technology to protect the emerging technology itself, like the idea of using decentralized digital identities inside the metaverse itself. Once a person is identified, I think they will of course behave much better than without an identity. But still, not only cybersecurity experts should work to protect this new technology. Other domains should also contribute to protecting it, like the police, for example, making sure that the morals of people interacting inside this digital space are at a good level.

Moderator – Lucy Hedges:
Well, that’s just it. In the real world, obviously, we know when we do things wrong. We’ve got police, we’ve got law enforcement in place, and punishments for people that do bad. But in the virtual world, there’s nothing there at the moment. You can be bullied, you can be beaten up, all types of things can happen. And sometimes we brush it under the carpet because it’s digital, but no. Eventually, our digital lives are merging with the physical, and these experiences really need to be regulated, and there needs to be some kind of enforcement in place that’s going to punish or reprimand people for the negative behavior they exhibit in these environments.

Adam Russell:
I think, at least me personally, I grew up in the advent of the internet, where it allowed innovation, a lot of freedom, a lot of independence to hack, like my fellow panelists. I grew up as a hacker in the underground. I know when things can go awry, but I think there’s further discussion today around cyberpaths. Ultimately, we all need a little bit of exploration and independence to innovate and look for opportunities to build new tools on top of protecting our citizens within the metaverse. So to take an example, Second Life became extremely important and ultimately popular because it enabled hacking, it enabled that conversation without a domination of regulation within that space. It was almost like a tit-for-tat game within the metaverse of Second Life, where they built in regulation and enabled users to feel safe in different ecosystems. And if users broke that trust, they were cordoned off into an area where they could still have some freedom that wouldn’t disrupt other users. And so within the metaverse, I think there’s not going to be a one-size-fits-all safety domain. You’re going to have to look at a metaverse of metaverses, so to speak. And it’s going to give users choice and opportunities for innovation. You’re seeing that, at least in the context of social media, with an explosion of social media networks, and I think we’re going to see this within the metaverse as well.

Moderator – Lucy Hedges:
Yeah, yeah, absolutely. Anything to add?

Ahmed Al-Isawi:
One? No, go please. The metaverse has its negative sides, but it’s also contributing, I think, very greatly to advancements. Like, for example, having or doing school experiments in a metaverse. Imagine that. This will not only cut costs for schools and education, but it will also give everybody, or students, the chance to have this experience in a very safe environment. Like how chemicals will react with each other. Imagine this in schools. I think this is a very fantastic idea.

Moderator – Lucy Hedges:
Would’ve made my education a lot more exciting, that’s for sure. Yeah. So Shante, my next question’s for you. What are the potential challenges that might arise from the uneven adoption and control of emerging technologies, particularly when it comes to cross-border contexts?

Chante Maurio:
So, an interesting question, with a really interesting word in it. When I hear “control” in the context of cross-border scenarios, I think regulations. And as we all know, regulations lag adoption in many cases. And so when there’s uneven adoption and uneven control of emerging technologies, the potential exists to create market confusion for companies as they try to navigate various market requirements around the globe. And as they navigate those requirements, it can create an issue of equity of access for particular citizens and geographies, while also creating competitive imbalances for companies and countries. So the implications of uneven adoption really run the gamut, stretching from citizens, to companies from a commercial perspective, to countries. In the absence of regulations, I’m always a proponent of frameworks and standards. We have a wide variety of frameworks and standards today in the cybersecurity market. And there’s really been a waterfall, I think we were speaking about it before in the green room, really a waterfall of regulations just in the last 12 to 18 months in this space. Because of this, UL Solutions, the company that I work for, has long been a supporter of harmonization. And global harmonization is really what’s going to allow companies to navigate the adoption of standards around the world, and allow for access by citizens, so that there aren’t equity concerns or competition issues in the future.

Moderator – Lucy Hedges:
Yeah, absolutely. Well, while we’re in the realm of regulation, what are the key considerations for governments in developing adaptable regulatory frameworks that really foster innovation while addressing concerns related to data privacy, cybersecurity, job displacement? Ahmed, I’m gonna throw that at you. Given your insights in cybersecurity, GRC and regulatory frameworks, it’d be great to hear from you on this.

Ahmed Al-Isawi:
So I think there’s a fundamental challenge, because regulations usually work well in a symmetric world, like establishing the baseline that every infrastructure or organization should operate on. But the problem is, fundamentally, cyberspace and emerging technology are so disruptive. They are pushing the world toward an asymmetric nature. How can you protect something that is a flying object, like a drone? There’s no cable from here to the drone, it’s all waves in the air. How can you protect that, for example? And having more than 19 years of experience just in crafting policy and frameworks, and trying to use automation as much as possible, I think there’s no framework as of now that can protect this emerging technology. We are learning as we go, we’re trying to protect as we go. But one of the issues, for example: one study expects that next year, around 60% of employees will start using AI in their work, if not already. Are we ready, from a policy standpoint, for this? Should we allow our employees to use AI? What are the implications of that? Everything is moving in that direction, so how can we protect this data? How can we enable our employees to benefit from this powerhouse of AI tools? And I think we should start exploring this area now. We used to say, bring your own device, but next year we’ll say, bring your own AI. Think about that. Everybody’s using AI on their smartphones, or even in their work. How can we regulate that? How can we protect the data? How much data can we allow our employees to put into AI? Of course, for me, I always feel much safer if I have more control and enforcement around that. But still, there’s also an issue: do you know that AI itself, or generative AI itself, can hallucinate, can fake information for you? And this is really proven. If you take a fake website, and you give it to ChatGPT, and you ask a question, you will receive an answer.
Even if the website does not exist, the AI hallucinates. And this is very, very dangerous, because if we become dependent on AI, how can we judge whether the AI itself is reliable? And this brings us to the need for AI to explain to us the steps of solving the problem or generating the answer itself. This is what I think.

Moderator – Lucy Hedges:
Yeah, and do you think it’s down to the convoluted and multi-layered nature of all these emerging technologies that we’re not really nailing this regulatory framework yet? To your point, it’s not one-size-fits-all, but regulation really needs to happen, because the technology’s moving at such a fast pace. We need to make sure that we’re protected in all senses of the word.

Adam Russell:
I’ll keep it short, and I’ll pass it to you. I think regulation should occur for the security of our critical systems, as well as the safety of our users. But we should be wary about how fast we wanna push regulation on top of AI or large language models. I think right now we’re just in the midst of innovation. And if we start pushing regulation too quickly, you’re going to end up in a world where there’s going to be a lack of adoption, or a lack of ability for a country to ultimately take in that technology. And there’s gonna be that varied approach ultimately. So at least what we’re doing at Oracle is giving enough platform tools for our enterprise users globally to make the decisions on how they wanna regulate the data that they have in their ecosystem, giving them flexibility. Because flexibility’s ultimately the key that we wanna provide.

Moderator – Lucy Hedges:
Yeah, absolutely.

Chante Maurio:
So I mean, I think maybe two comments. One, on what you just said. There are many reasons, right? There are many reasons that regulations lag innovation and adoption. And one of those reasons is that if you put regulations in place too soon, then you’ll stop innovation and you’ll stop the technology from reaching its potential and its capability. And so what the overall ecosystem around technology creation and adoption has to think about is: what is the right time? What is the right time to introduce frameworks, standards, and regulations into an environment? And, to Matt’s comment earlier, we talk a lot about the technology, the technical requirements that are going to exist in regulation. But there’s also this ethical piece that maybe us as technicians don’t think about all the time. I know you do, because I’ve heard you mention it many times in today’s conversation, which is wonderful. But we do need to think about how we are going to manage through, and I know that there are many big brains thinking around this already, all of the ethical considerations as well.

Moderator – Lucy Hedges:
Yeah, go on, Ahmed.

Ahmed Al-Isawi:
I think, you know, regulations will not solve everything, because there’s a value dimension that we need to preserve. You know, especially for our kids when they go into an uncontrolled metaverse. It all comes back to us as human beings having values, cultural values, or religious values, et cetera. Now, values cannot be regulated by frameworks. It’s something that goes back to education, to parenting, for example. So we have to solve this problem from multiple dimensions, not only from regulations. We can secure servers, no problem about that, but how can we secure the human? You know, we always say that humans are the weakest link. I don’t like that, but literally, we have to work on the human element in this ecosystem.

Moderator – Lucy Hedges:
Yeah, so to your point that education needs to happen, what steps, or what better steps, can be taken to educate and train individuals and organizations on cybersecurity best practices, and all the unique threats that come along with all these emerging technologies? What can be done in this department, do you think?

Chante Maurio:
I think one really fortunate thing about new and emerging areas is the people that are attracted to them: individuals that are very curious, lifelong learners, constantly seeking knowledge, problem solvers that like to solve puzzles, which leads to the innovation that you talked about earlier. And so that’s wonderful. Those are critical aspects of education, because at the end of the day, we are all responsible for the ownership of our own learning and development journey. And so I think that first and foremost is important to note. Once individuals like that, knowledge seekers, lifelong learners, problem solvers, are attracted to a particular domain, there are a number of resources available to support them and their organizations in better understanding cybersecurity, for example. Using that as an example, in the US, the National Institute of Standards and Technology offers a number of technical documents and standards, such as the NIST Cybersecurity Framework and the Cyber Essentials Toolkit. If we zoom out and think from a global perspective, you have the Global Cyber Alliance and the Cyber Readiness Institute, which offer free cybersecurity training for small businesses, addressing some of that equity conversation that was alluded to earlier. And then testing, inspection, and certification organizations like UL Solutions and the International Society of Automation offer trainings, personnel qualifications, and certificate programs. And then honestly, forums like the one we’re in today, these are fantastic, right? They build communities, they allow us to exchange ideas, and they allow us to learn and grow together. And so these forums are also incredibly important.

Moderator – Lucy Hedges:
Yeah, absolutely, I agree. Do you guys want to add anything? I think she nailed it. No, we’re nearly out of time, guys. So to wrap up our conversation, my final question to you is: how can international cooperation and collaboration, back to this key theme of collaboration that we’ve been talking about, help establish common cybersecurity standards and norms in the face of the global challenges posed by all these emerging technologies? So, final thoughts.

Adam Russell:
I think ultimately, we need to strengthen more and more of our partnerships. I think we’re doing that today through protecting child safety on the internet, as an example. I think we’re doing a lot already. We could do even more on stopping ransomware attacks. So we’re setting these standards, but I think it extends beyond just the standards body. We need to start looking at partnerships. A large percentage of private enterprises are hosting a large percentage of our data, such as the Googles of the world, even Oracle as an example. That creates the opportunity for public-private partnership, and I just want to thank GCF and NCA for allowing us to participate, and I’m looking forward to future collaboration.

Moderator – Lucy Hedges:
Yeah, absolutely. This is a brilliant example of that international collaboration that we’re just talking about. Shante, would you like to add anything?

Chante Maurio:
I spoke earlier about the importance of frameworks and standards introduced at the right time into an ecosystem, and once they’re introduced, what’s important is that we begin to see baseline standards bubble to the top. We’re beginning to see that, for example, in industrial IoT with IEC 62443, and in consumer IoT with EN 303 645. Adoption of those baseline standards is still somewhat fragmented, though, and at the same time, when they are adopted, there are additions and deviations created along with them. Those additions and deviations are absolutely necessary; there’s a reason and a purpose for them. What we need to be able to do is then map those back to the baseline, so it’s readily visible how these anomalous requirements actually map back. And there are organizations out there that can support that: IEC, ISO, ISA. Those standards bodies create great forums for the collaboration and the cooperation to really align around those baselines, and I think that’s going to be critical moving forward.

Moderator – Lucy Hedges:
Yeah, absolutely, and last but not least, Ahmed, let’s hear from you.

Ahmed Al-Isawi:
Yeah, maybe I can add to what my esteemed panelists have said. You know, this is a new domain, but it’s really behind closed doors. It’s not public-domain knowledge. It’s not well supported by open source, so we have very serious limitations in this area. So maybe if it can be supported by open source and also provided to the public, everybody can play with it, can experience it. We reached this point only after a long journey of trial and error. I understand the potential and the intellectual property concerns behind developing these advanced, emerging technologies behind closed doors, but the real advancement comes when everybody can play with them, can practice with them, and this will generate knowledge much faster than traditional research.

Moderator – Lucy Hedges:
Yeah, absolutely, and on that note, I’d like to thank my knowledgeable panelists. Thank you so much for taking the time to share your insights and expertise. Please, a round of applause for these guys. They did an amazing job. Thank you. So many incredible things are happening in this space, and there’s still a lot of work to be done, but the bonus, the positive takeaway, is that we are moving in the right direction, and that can only be a good thing. So, thank you once again, and thank you.

Adam Russell

Speech speed

150 words per minute

Speech length

1196 words

Speech time

477 secs

Ahmed Al-Isawi

Speech speed

145 words per minute

Speech length

1888 words

Speech time

781 secs

Chante Maurio

Speech speed

158 words per minute

Speech length

1310 words

Speech time

496 secs

Moderator – Lucy Hedges

Speech speed

210 words per minute

Speech length

1487 words

Speech time

425 secs