Cognitive Vulnerabilities: Why Humans Fall for Cyber Attacks

2 Nov 2023 09:05h - 09:45h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Gareth Maclachlan

Trellix, which was formed around a year ago, is the result of a merger between FireEye and McAfee. It is a global organization serving approximately 45,000 enterprises. Human exploitation in cyber threats revolves around three main tactics: familiarity, urgency, and personal or corporate cost. Cyber attackers use familiar elements to manipulate users into making decisions that benefit the attackers. They create a sense of urgency, forcing users to act quickly without thinking critically. Additionally, they exploit the personal or corporate cost associated with certain actions, making users more likely to react as desired by the attackers.

One common type of cyber attack is VIP impersonation, where attackers use a text message from a CEO or executive, requesting the recipient to perform unusual activities. However, this tactic is often ineffective as such activities are typically not part of regular business practices.

Credential phishing, on the other hand, is a common and highly effective cyber attack method. Attackers run campaigns focused on obtaining users’ credentials, often using pop-ups or fake login pages that mimic reputable companies. The stolen credentials can be valuable to the attackers for further malicious activities.
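
To make the mechanism concrete (this illustration is not from the session), a simple first-line defense against credential phishing is checking whether a login page's host actually belongs to the brand it imitates. The following Python sketch is a minimal heuristic; the brand-to-domain allow-list and URLs are invented for the example, and real products combine many stronger signals.

```python
# Minimal sketch: flag a login page that claims a brand but is hosted on an
# unrelated domain. The allow-list below is illustrative, not exhaustive.
from urllib.parse import urlparse

KNOWN_BRAND_DOMAINS = {
    "microsoft": {"microsoft.com", "live.com", "office.com"},
    "cisco": {"cisco.com"},
}

def looks_like_credential_phish(page_url: str, claimed_brand: str) -> bool:
    """Return True if the page claims a brand but is hosted elsewhere."""
    host = urlparse(page_url).hostname or ""
    legit = KNOWN_BRAND_DOMAINS.get(claimed_brand.lower(), set())
    # A host is legitimate if it is a listed domain or a subdomain of one.
    return not any(host == d or host.endswith("." + d) for d in legit)

print(looks_like_credential_phish("https://login.micros0ft-support.xyz/", "microsoft"))  # True
print(looks_like_credential_phish("https://login.microsoft.com/", "microsoft"))          # False
```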

Another approach used by cyber attackers is exploiting usual business activities. For example, they may send invoices or resumes through email, taking advantage of the fact that users are more likely to trust such communication as everyday business practices. By doing so, the attackers bypass users’ natural suspicion towards email and successfully launch their attacks.

Security firms should focus on assisting customers in safeguarding their organizations from cyber threats. It is crucial to avoid blaming users for system failures, as this approach creates a culture of fear and discourages individuals from reporting potential threats. Gareth Maclachlan argues for a different perspective on cybersecurity, emphasizing the need to investigate how an attack bypassed the system, rather than blaming individuals who may have clicked on malicious links or fallen victim to other tactics.

Traditional phishing training methods may inadvertently desensitize employees to actual threats. Research suggests that employees feel they understand the risks and may miss genuine threats as a result. It is important to consider alternative approaches to phishing training, such as personalizing the training using AI and LLMs, to increase its effectiveness.
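
As a hedged sketch of what such personalization might look like (the session did not show an implementation, and the `llm` callable below is a placeholder rather than any real vendor's API), a training tool could assemble a clearly simulated lure from facts an employee has already made public:

```python
# Hypothetical sketch of LLM-personalized phishing *training*. The text
# generator is injected as a plain callable, so no specific API is assumed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EmployeeProfile:
    name: str
    role: str
    recent_public_post: str  # e.g. a public job announcement

def build_training_lure(profile: EmployeeProfile, llm: Callable[[str], str]) -> str:
    """Ask the generator for a clearly simulated, personalized phishing email."""
    prompt = (
        "Write a short SIMULATED phishing email for security awareness training. "
        f"Target role: {profile.role}. Reference this public post: "
        f"'{profile.recent_public_post}'. Use the placeholder [TRAINING-LINK] "
        "instead of any real URL."
    )
    return llm(prompt)

# Stub backend so the sketch runs end to end:
stub_llm = lambda prompt: f"[simulated lure generated from a {len(prompt)}-char prompt]"
profile = EmployeeProfile("A. Employee", "accounts payable", "Excited to start my new job!")
print(build_training_lure(profile, stub_llm))
```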

Recognizing and praising individuals who successfully identify and report genuine cyber attacks can encourage a behavioral norm of recognizing that security is everyone’s responsibility. This proactive approach to positive reinforcement could decrease the likelihood of mistakes in the future.

Psychologists can also play a role in understanding and dealing with cognitive biases that impact data security. Gareth Maclachlan contemplates the role of psychology in this context and acknowledges his own biases in his perspective.

When considering digital transformation in regions like the Kingdom of Saudi Arabia, it is essential to view security from a broader perspective beyond just enterprise security. Gareth Maclachlan highlights the large scale of digital transformation in the Kingdom and suggests opening our minds to consider security in relation to systems and spaces beyond individual enterprises.

During incidents, it is important to focus on learning from system failures rather than blaming users. This approach promotes growth and improvement in security practices.

Publicly celebrating and recognizing employees when they correctly report potential threats can contribute to a culture of security awareness and employee engagement.

Performing regular checks on all applications, particularly hosted software-as-a-service applications, is crucial to avoid compromise. Organizations can be compromised if a customer or individual uploads a hostile file.
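
A minimal illustration of that point (not from the session; the extension list and hash set are invented) is to gate every upload with the same checks applied to email attachments before the file ever reaches the business workflow:

```python
# Toy upload gate for a hosted business application: reject disallowed file
# types and known-bad content hashes before any deeper scanning or sandboxing.
import hashlib

ALLOWED_EXTENSIONS = {".pdf", ".docx", ".png"}   # illustrative policy
KNOWN_BAD_SHA256 = {                             # illustrative threat-intel set
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def accept_upload(filename: str, data: bytes) -> bool:
    """Return True only if the file passes the cheap checks; a real system
    would then hand off to antivirus scanning or sandbox detonation."""
    if not any(filename.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
        return False
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        return False
    return True

print(accept_upload("resume.docx", b"ordinary document bytes"))  # True
print(accept_upload("invoice.exe", b"executable payload"))       # False
```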

In conclusion, the summary highlights the importance of understanding how cyber attackers exploit human vulnerabilities and the need for security firms to prioritize assisting customers in protecting their organizations. It emphasizes the significance of taking a system-focused approach to cybersecurity rather than blaming users for system failures. Additionally, the summary explores alternative approaches to phishing training, the role of psychologists in addressing cognitive biases, and the need for a broader perspective on security in the context of digital transformation.

Moderator – Lucy Hedges

The threat of cyber attacks in today’s interconnected and digital world is larger than ever before. Cyber criminals are taking advantage of human cognitive vulnerabilities, exploiting weaknesses in human nature within cyber systems. They employ various tactics to exploit human fallibility and compromise cybersecurity.

To address these vulnerabilities, industry-to-industry collaboration is crucial. By working together, industries can examine the elements of human error and gain insights into the psychological factors that make humans susceptible to attacks. This collaborative approach can lead to the development of effective strategies and measures to reduce cyber vulnerabilities.

One area where human vulnerability is evident is in the realm of social networks. Many people are unaware of the extent to which they reveal personal information on these platforms. This lack of understanding puts individuals at risk, as attackers can exploit this information for malicious purposes. Attackers are becoming increasingly sophisticated and can use personal data shared on social media platforms to impersonate friends and family members, effectively deceiving individuals. This highlights the importance of being selective and cautious with the information shared online.

Lucy Hedges, a technology journalist and TV presenter, emphasises the significance of understanding and managing the information shared online. She shares anecdotes of individuals who have fallen victim to cyber attacks as a result of their personal information being exploited. While living in the online world can be beneficial, it is crucial to exercise caution and be mindful of the information we share.

Furthermore, there is a need for workplaces to promote caution and awareness towards potential cybersecurity threats, particularly those that come through emails. Hedges recalls an incident at her former workplace where a cyber attack occurred due to an employee interacting with a malicious link. It is essential for organisations to create a culture that encourages vigilance and provides training on identifying suspicious emails and other potential threats.

In conclusion, the threat of cyber attacks is ever-present in today’s digital world. Human cognitive vulnerabilities are exploited by cyber criminals, and it is vital to address this issue through industry collaboration. Individuals must be cautious about the information they share on social networks, as attackers can use personal data for malicious purposes. Additionally, workplaces should promote awareness and caution towards cybersecurity threats, especially those via email. Being alert and proactive is essential in combating cyber vulnerabilities and protecting personal and organisational data.

Prof. William H. Dutton

The discussions focused on important themes such as cybersecurity and cognitive biases, highlighting several key points and arguments.

One significant issue that was discussed is the confirmatory bias, which is the tendency for individuals to believe information that confirms their existing beliefs. It was emphasized that this bias can be exploited, as people are more likely to accept and share information that aligns with their preconceived notions. This poses a challenge in combatting misinformation and propaganda, as individuals tend to seek out information that reaffirms their own opinions.

The emergence of cognitive politics was identified as a consequence of cognitive warfare. It was revealed that in the past, attitude shaping was common, but now the focus has shifted towards shaping beliefs about a particular subject matter. This manipulation of beliefs through cognitive tactics raises concerns about the trustworthiness of information on the internet and its impact on society.

Blaming users for succumbing to cyber threats was strongly argued against. It was emphasized that blaming individuals solely for falling victim to cyber attacks absolves others who are involved in cybercriminal activities. Instead, open communication and collaboration were suggested as necessary approaches to rectify and avoid future issues. By discussing suspicions or experiences with phishing or scams, people can collectively learn from each other’s mistakes and work towards a safer online environment.

The adoption of a cybersecurity mindset was identified as an increasing trend among internet users. There is a growing awareness of the cybersecurity implications of every action taken online, as people are becoming more conscious of the threats and seeking to protect themselves. This shift in mindset is encouraging and demonstrates a proactive approach towards personal cybersecurity.

Addressing cybersecurity threats was viewed as an ongoing process that requires an ecosystem-wide approach. It was recognized that everyone, from the top to the bottom of an organization, has responsibilities towards cybersecurity. This highlights the need for collective efforts to ensure a secure online environment.

Psychologists were seen as playing a significant role in cybersecurity by educating users about their psychological tendencies. It was noted that human bias and the tendency to confirm existing biases play a significant role in the propagation of misinformation. Therefore, educating individuals about these biases can help them recognize and mitigate the impact of these tendencies on their online behavior.

While acknowledging the positive aspects of social media, such as networking and information exchange, it was suggested that more support should be given to smaller organizations and individuals outside the corporate sector. Data showed that smaller organizations and individuals in non-corporate sectors did not receive as much support as larger organizations and SMEs. Addressing this disparity in support is crucial to ensure that all entities have the necessary resources and knowledge to protect themselves online.

In conclusion, the discussions highlighted the need for individuals to take an active role in ensuring cybersecurity. The confirmatory bias, cognitive politics, and the importance of a cybersecurity mindset were all significant points of focus. Open communication, collaboration, and the involvement of psychologists were recognized as important measures in combating cyber threats. Notably, addressing cybersecurity challenges was seen as requiring a collective effort that involves individuals, organizations, and society as a whole.

David Chow

David Chow, an experienced IT expert, provides valuable insights into the complexities of cybersecurity, with a particular emphasis on the human aspect. He highlights the challenge posed by the human factor, stating that while technical aspects such as patching and network assessments can be effectively managed, the human element presents a bigger challenge. Exploiting cognitive vulnerabilities, such as appealing to emotions or curiosity, can be a significant avenue for cyberattacks.

Chow gives an example of potential scams that exploit human nature, such as seeking donations or manipulating curiosity. This underscores the need for individuals to be vigilant and aware of these cognitive vulnerabilities to prevent falling victim to such attacks.

Furthermore, Chow discusses the importance of background checks and personal security measures in mitigating cognitive vulnerabilities. Drawing from his experience at the White House, he explains that extensive background checks, FBI reviews, and financial assessments are crucial in making informed decisions and minimizing risks associated with those who may exploit cognitive vulnerabilities.

Regarding news consumption, Chow observes a clear pattern in which political appointees tend to prefer news channels aligned with their administration's ideology, demonstrating confirmation bias. Under Republican administrations, Fox News, a conservative channel, is the preferred choice, while CNN is commonly watched under Democratic administrations. This highlights how political biases can shape news consumption and potentially influence public opinion.

Addressing user responsibility, Chow argues against solely blaming IT professionals for cybersecurity breaches. He conducted a phishing exercise that revealed the need for users to be more vigilant and take responsibility in ensuring cybersecurity. He emphasizes that everyone plays a role in cybersecurity and that it is a collective effort.

Chow also warns against excessive sharing of personal information on social media, as it can make individuals vulnerable to frauds and scams. He shares a personal experience of receiving a fraudulent text asking for an Apple gift card, which targeted him based on the information he had shared about his new job on social media. This highlights the importance of exercising discretion and being mindful of the information shared online.

In conclusion, Chow’s analysis underscores the multifaceted nature of cybersecurity, highlighting the need to address the human aspect and cognitive vulnerabilities. Measures such as background checks and personal security are essential in mitigating risks. Awareness of confirmation bias in news consumption and the importance of user responsibility contribute to establishing a strong cybersecurity culture. Lastly, his experience with social media scams serves as a reminder to exercise caution and respect individuals’ privacy when sharing personal information online.

Philippe Vallée

The analysis highlights several key points regarding cybersecurity and social engineering. One important aspect is the prevalence and impact of attacks based on human vulnerability, commonly known as social engineering. Attackers exploit the information available on social networks to gain the trust of their victims. This underscores the need for awareness and education to combat social engineering attacks. The analysis suggests that training sessions within companies could play a crucial role in educating individuals about social engineering techniques and how to identify and avoid falling victim to them.

However, it is also mentioned that blaming the user for cybersecurity breaches is counterproductive. Human error is an inevitable factor in any system, and it is unrealistic to expect individuals to be perfect in preventing all cyber threats. Instead, it is argued that a system-based approach should be adopted to address the root causes of cyber attacks. This observation underscores the importance of having robust cybersecurity measures in place, such as implementing multi-factor authentication and regularly updating access management policies.

The analysis further suggests that companies should establish quick incident reporting systems to effectively respond to cyber incidents. Time is of the essence in handling incidents, and prompt reporting can enable response teams to address the issues in a timely manner. This recommendation aligns with the notion that incident management should prioritize quick reporting and response rather than focusing on blaming individuals.

When it comes to application design, the analysis emphasizes the need for a balanced approach that considers both security and user-friendliness. Applications that are too difficult to access or operate may be bypassed, while those perceived as easily accessible may be seen as weak in terms of security. Therefore, application designers should aim to strike a balance between ensuring the security of transactions and providing a user-friendly experience.

Regarding data and application access, the analysis highlights the importance of clear and strong access management policies that focus on segmentation or zero trust. Defining who has access to what in terms of applications and data is crucial in controlling security, and monitoring access levels is considered good practice. Additionally, the implementation of multi-factor authentication is seen as crucial for organizations to enhance security and prevent unauthorized access. These measures can significantly contribute to safeguarding sensitive information.

An additional noteworthy observation is the need for regular updates to access management policies when people change roles within a company. As responsibilities change, so should access rights, ensuring that individuals only have access to the data and applications necessary for their current position.
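
As a rough illustration of these points (a sketch under invented role and resource names, not any production authorization model), segmentation plus an MFA gate reduces to two checks, and a role change automatically drops stale grants:

```python
# Toy zero-trust-style check: role-scoped access plus a mandatory MFA gate
# for sensitive resources. Updating a user's role drops stale grants with it.
ROLE_PERMISSIONS = {
    "finance": {"invoices", "payments"},
    "hr": {"resumes", "payroll"},
}
SENSITIVE = {"payments", "payroll"}  # always require multi-factor authentication

def authorize(role: str, resource: str, mfa_passed: bool) -> bool:
    if resource not in ROLE_PERMISSIONS.get(role, set()):
        return False  # segmentation: nothing outside your role
    if resource in SENSITIVE and not mfa_passed:
        return False  # multi-factor gate on sensitive data
    return True

user_roles = {"dana": "finance"}
print(authorize(user_roles["dana"], "payments", mfa_passed=True))   # True
user_roles["dana"] = "hr"  # role change: access rights follow the new role
print(authorize(user_roles["dana"], "payments", mfa_passed=True))   # False
```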

In conclusion, the analysis highlights the significance of addressing social engineering attacks, the importance of implementing robust cybersecurity measures, the need for quick incident reporting systems, the balance between security and user-friendliness in application design, and the crucial role of access management policies and multi-factor authentication in maintaining data security.

Session transcript

Moderator – Lucy Hedges:
Philippe Vallée, Executive Vice President, Digital Identity and Security, Thales
Lucy Hedges, Moderator, Technology Journalist and TV Presenter
Professor William Dutton, Martin Fellow, Oxford University's Global Cybersecurity Capacity Centre; Emeritus Professor, University of Southern California
David Chow, Global Chief Technology Strategy Officer, Trend Micro

Getting that selfie in there, David, I like that. Hi everybody, it's great to be back on stage here at the Global Cybersecurity Forum on Day 2. I hope you're all having a fantastic day so far, and after Day 1, I don't doubt for a second that today is going to be another brilliant day of informative and insightful discussions like the one we're about to have on stage right now. So in today's interconnected and digital world, the threat of cyber attacks is larger than it's ever been before; I don't need to tell you that. And what makes this subject particularly intriguing is that it's not just about technology, it's about human nature as well. So we're going to unravel the mystery behind why humans often fall prey to cyber attacks. From phishing emails to social engineering, there are countless tactics that cyber criminals employ to exploit human fallibility and our cognitive vulnerabilities as a clear point of weakness in cyber systems. And my brilliant bunch of esteemed panellists are going to explore the elements of human error and shed light on the psychological factors that make us susceptible to these kinds of attacks, while offering insights into the potential benefits of industry-to-industry collaboration and how we can better protect ourselves and ultimately reduce cyber vulnerabilities to create a more secure cyberspace for everyone. We've got a diverse range of experts with various backgrounds, so I don't doubt for a second that this is set to be a very insightful conversation. So Philippe, Gareth, Bill, David, how are you? Great. Thanks, Lucy. Excellent. Very well, thank you. It's good to have you. So I think a great place to start would be by really setting the scene. Let's kind of paint the bigger picture by asking: what are cognitive vulnerabilities in the context of cyber security, and how do they differ from technical vulnerabilities? And anyone can grab that one first. Don't be polite.

David Chow:
Sure. I guess I can start since everybody's looking at me. So my name is David Chow. I want to share a little bit about my past experience working as an IT practitioner. I worked in the U.S. government for 20 years, including at the White House for President Bush and President Obama. Coming from an IT practitioner standpoint, I can handle all the technical aspects of the technical vulnerabilities: your patching, your exploits, your network assessments, anything related to that. But the hardest part to defend is actually the human aspect. The human aspect in the sense that everybody goes through their daily motions: you have your kids that you have to take care of, you may feel up, you may feel down. And because of those daily changes, you may click on something that you typically wouldn't click on. Or somebody could try to exploit your softer side. Somebody could try to appeal to your nature and say, hey, we're seeking a donation, we're looking for this, would you mind helping us by donating some money? So you click on the link out of curiosity, and then all of a sudden that creates some sort of cyber attack. I want to share very quickly an example. It's not entirely related to cybersecurity, but it's definitely focused on cognitive vulnerability. When I was working at the White House, we had to go through an extensive background investigation. Obviously, you're serving the president, so you have to do that. We also had to go through FBI reviews, personal interviews, neighbor interviews, as well as assessments of your financial background. The whole concept there is to ensure that there is not a level of cognitive vulnerability: that you're making the right decisions, you're not hanging out with the wrong crowd, you don't have large sums of money coming in, and you're not incurring debt. So that's more from the physical personnel security standpoint, but it actually ties into cyber, as one element of enhanced practice and better cyber maturity.

Moderator – Lucy Hedges:
Thanks, David. Anyone want to add anything to that?

Gareth Maclachlan:
Yeah, I'll add a bit. So just to give you a little background, Trellix was the merger of FireEye and McAfee that we brought together about a year or so ago. And we cover about 45,000 enterprises across the globe, a lot of them here in the Kingdom. One of the things that we see is that the attack, and really the attempt to exploit the human part of it, focuses on maybe three things. It focuses on familiarity: does it look like something a user is used to doing? Is there a sense of urgency, something which is forcing you to make a decision faster or behave in a way that you wouldn't normally do? And is there a personal cost? Maybe it's a corporate cost, maybe it's a personal cost. For example, if I look at my own email that comes to me personally, I seem to have an addiction to buying antivirus software: I must sign up for a year's worth of Norton antivirus at least once a week. So you get this idea that you might have lost your own money, so you're more likely to respond to it. And for us, it's trying to understand those bits and seeing how attackers are starting to exploit them to get people to act almost against their better judgment, because putting some of those stresses on them really gets to the heart of the human factor.

Philippe Vallée:
One point: these attacks based on people, or let's say on human vulnerability, are also sometimes called social engineering. With social engineering, you connect to social networks, which means that people often do not realize the amount of information they are releasing to the public by putting their whole life on their social networks. Typically, one of the things that could be done in a training session within the company, for example, is to explain to people how they could retrieve the stock of information that Facebook, I should say Meta, has on them about their personal life. Because those attackers are using that core information to attack and, as was said previously, to pretend that they know the person very well. So let's be very careful about the level of information we leave every day on the different social networks.

Moderator – Lucy Hedges:
It really is quite unbelievable how many people don't realize that the information they put out there, especially on social media, is so susceptible to these kinds of attacks. You know, we're under the impression that the data held by these big companies is private, but these attackers are getting smarter by the day, and their ability to tune in to all these personal details is really quite mind-blowing. I know so many people who have been attacked through the personal information they've put online, and I think it's important for us all to realize that living your life online is fantastic, but also to be very selective about the kind of information you put out there. So what about cognitive biases, guys? What does this mean, and how does it affect our behavior online? Do you want to go for that, Bill?

Prof. William H. Dutton:
I think, you know, this might be a way of broadening the discussion a bit. We usually mean by cognitive biases the psychological predispositions we have that could be played with by bad actors, or with information that they may have, and I think that's the general way we think about cognitive biases. But my own personal view is that, more and more, the biggest issue is confirmatory bias: we all want to confirm what we already believe to be the truth. And this applies to hacking. I mean, if we really want our printer fixed and we're in an emergency, and somebody approaches us and says, hey, I can fix your printer, just log on here, then you want to believe that, because it meets a need. But in another way, cognitive biases have a much broader reach; it's a very broad area that we're talking about, and I would link it right now to the rise of what I would call cognitive politics, which derives from the emergence of cognitive warfare. In earlier days, a propaganda and influence campaign, or advertising, was about shaping your opinion, shaping your opinions about a person or a product or a thing. Increasingly, propaganda and influence campaigns are focused on shaping your beliefs. So instead of shaping attitudes, they're shaping beliefs about what is the truth: what is the border of this country, what is the history of this person, and so forth. What that means is that increasingly we shape how people vote, or how people side with different issues, by shaping their beliefs about the whole subject matter. And so this is really a big issue. I don't know if it's too broad for this panel, but I think we have to think more and more about cognitive politics, because it undermines what we believe, and it may really harm trust in the internet and trust in information, because we don't know whether we're being played by particular individuals trying to shape what we believe, rather than simply whether we're positively or negatively disposed to a person.

Moderator – Lucy Hedges:
Yeah, it's enough to make you super paranoid, isn't it?

David Chow:
Can I give an example? Okay. So, obviously I worked for three different presidents, and when there's a change in administration, you see that the television news channel the political appointees watch is actually different. So when a Republican is in charge, you see Fox News; that's conservative news, right? And when Democrats are in charge, you pervasively see CNN. So that's an example where they want to be confirmed in their viewpoints, these politicians or these political appointees. And that's very interesting: rather than looking from a broader point of view, they just want to confirm their own assumptions and be able to move forward with them.

Moderator – Lucy Hedges:
Yeah, absolutely. So let’s give a few examples now. You know, what are some of the most common types of cyber attacks? You know, we’ve touched on a few examples, but if you’ve got any more to add, I’m sure the audience will appreciate that. So what are the common types of cyber attacks or psychological tricks that attackers use to manipulate victims and, you know, obviously target these human cognitive vulnerabilities and why are they so effective? Go on, Gareth.

Gareth Maclachlan:
I'll take that first of all. So one of the things I think it's also worth thinking about is: what's the call to action that an attacker actually wants? You can spend a lot of time thinking about how you might construct a phishing email, how you might influence someone and get them to respond to something. But you've actually got to get them to do something in order to have an effect. So it might be intelligence operations, as you say. It might be changing the way they think, changing the way they vote. That's too big for me to worry about, right? Working in a cybersecurity firm, I care really about helping our customers keep their organization safe, keeping their citizens safe. And what we've seen is different waves of different attacks. So, for example, there's a lot of talk about VIP impersonation. You get a text message from your CEO saying, I want you to go and do something, you must go and do it now. That's a great way to get people to respond, because it's a position of authority. But what does your CEO normally ask you to do? It's kind of unlikely that he's going to say, I need you to transfer money to this organization you've never heard of and isn't set up in your systems. Our business practices go against that. So you don't get people to act, even though the authority is there. And your CEO sending you a text message saying, I want you to run down the road and go and buy some gift cards? That's not usual either. So it doesn't work. What does work are things like credential phishing. We see a lot of campaigns really focused on trying to get someone's credentials, because that's the most valuable thing you can use as a way to go and launch another attack. So we'll see pop-ups pretending to be a log-on for Microsoft or a log-on for Cisco or a log-on for some other organization. That is quite effective. It's very difficult to know what it is. You're used to it. It's familiar. It's a usual action. And it's incredibly valuable to the attackers. So those sorts of things go through. And then what we see is really people trying to bypass the natural suspicion we've built up around email. We all know email is dangerous. Our antennae are up, we worry about it, we're going to think twice before we click on that link. But if you're working in finance, if you're working in accounts, if you're working in HR, and an invoice comes through, or a resume, a CV, comes through, well, that's usual, so you'll click on it. So we often think about it: it's not just email where something might be suspicious; what are all the other routes, which you may be less aware of or less resistant to, that something might come through?

Moderator – Lucy Hedges:
And it's that familiarity, isn't it? That's what really traps people. You know, when I was working at the Metro newspaper, we got attacked because someone clicked on a malicious link and it was connected to work. I don't think it was a CV, I can't remember what it was, but this email went around and said, you know, this is happening, we've been attacked, so be more aware and just be a bit more cautious when you're clicking on these links. And it's a bit frustrating, isn't it? But, you know, we all have to be cautious, incredibly cautious, especially in a work environment.

Gareth Maclachlan:
We do. But if I may continue, we also, you know, avoid blaming the user. Links are supposed to be clicked on. You know, we've always taken the approach of think how, not who. If someone clicks on a link, well, you can't expect your employees to be perfect every time. You've got to ask how the link actually got there, what failed to put them in that situation.

Moderator – Lucy Hedges:
So do you think this kind of blame-the-user mentality in cybersecurity is counterproductive, you know, in addressing these issues when it comes to cognitive vulnerabilities?

Philippe Vallée:
For me, to be even blunter: every time a CISO, a Chief Information Security Officer, of a company runs an internal phishing campaign as a test, there will always be a percentage of the population that clicks anyway. So you can train people and so on, but for me, being the victim of a phishing attack is not a human error, it's a technical error. You should have a system, a technology (and probably things need to be invented here, to be perfected), but it's a system answer that we need to provide, not blaming somebody for clicking on it. You can be tired, it can be the end of the day, you may have been trained, but you are subject to error. That's human beings.

Moderator – Lucy Hedges:
Yeah, yeah, absolutely. Anyone got anything to add before I move on?

Prof. William H. Dutton:
Well, I mean, I'm totally for this idea, because I think if you blame the user, you let everybody else off the hook. But you're reminding me, think back to telemarketing. Telemarketing had an economic model where they could send out tons of marketing material to tons of people, but they only needed a small fraction of individuals to be interested in it. And so you could never stop it, because the economic model was so successful. And I think it's similar here: you may see an obvious phishing email in your inbox and think you're smart this time. But they send this out to so many people that it may hit another person at the wrong time, when they really want that, and it makes sense to them, because at that moment they are looking for this particular thing. So even really very intelligent people in really great positions can be fooled by this. And that's why I think one of the key issues is always to talk to people. If you think something might be a little funny about this, talk to the person next to you or talk to a friend: what do you think this is, a phishing email or whatever? If you have doubts, it probably is, and you should have… But, gosh, the president of a major corporation in the United States, years ago, decades ago, clicked on the ILOVEYOU virus, you know, the I-love-you. And so, I mean, he's kicking himself, right? But at least he had the audacity to admit that he did this, that it was stupid and whatever; it hit him at the wrong time. He was busy, clicked on this, opened a link, and infected all of the systems in his corporation. So, anyway: don't blame the user, but every time there is a problem, you should let people know about it. If you suspect it, or if it happened, you should let people know about it so that it can be corrected. If you don't tell anybody, it's very hard to correct these problems.

Moderator – Lucy Hedges:
Exactly. And that's a great rule of thumb. David, I can see you're ready for the mic.

David Chow:
I just want to provide a slightly contrary view to not blaming the user. And this is based on personal practice. I was a CISO for a financial regulator in the US, and we sent out this phishing campaign, right? In this phishing exercise, basically, we sent an email saying, you know, see what your colleagues are doing in the lunchroom. So people click on it, right? We sent it to executives, obviously the most high-profile targets, and then we sent it to everybody else. The executive director and deputy executive director for the agency clicked on it. They're the top two career individuals within the agency. So I asked the executive director, why did you click on it? He said, well, you know, I was curious, right? And then I asked the deputy executive director, who's actually very IT savvy, why did you click on it? And he said, well, I clicked on it because I was curious, plus I know that you IT guys will take care of it if something happens. So I agree with Philippe when he talks about there being technical errors, technical issues that we as practitioners need to address. We need to set the expectation. We need to provide the education. We also need to constantly ensure that our tools are catching ransomware attacks or other attacks. But at the same time, it starts with everybody, right? Users need to take on the mindset of being more vigilant. If we continue to say don't blame the user, then if something happens, we blame the CIO or blame the CISO, and that's not fair to the CIO or the CISO or the practitioners either. So I think cybersecurity actually starts with everybody. Perhaps you get one free pass, and there needs to be a level of expectation. But the bottom line is that it has to start with everybody.

Moderator – Lucy Hedges:
Go on, Bill.

Gareth Maclachlan:
I think one of the things that we need to start thinking about as an industry is this: we've spent a lot of time doing phishing training, sending out phishing emails, encouraging people: did you click on it or not? Ooh, tick, yes, good, you got it, you found the right thing. There's a little research now which is almost starting to suggest that that is training people the wrong way. People are starting to feel like they know what the risk is, and they're missing things. We've been doing some experimentation with, yeah, guess what, AI and LLMs, to start looking at whether you can actually generate personalized training. To your point earlier about the social media information: can you go and create a targeted email to train a user based upon information you know about them? And the second bit for me is that whilst we tell people, good job, you caught that phishing email, what we tend not to do as organizations is also call out when people find real attacks that have come through. You're encouraged, you see an attack, you think, I'm not sure about this email, I'll report it to the IT department. The IT department will come back a little bit later and go, yeah, we investigated, yeah, that was bad, well done. That's it. But actually starting to maybe report to the company as a whole: this month these individuals found these things and kept us safe. You start to encourage that almost behavioral norm of getting people to actually recognize that security is owned by everyone. My comment around don't blame the user is that you don't want people to feel that if they do inadvertently fall foul of something, that is necessarily a weakness. You're right that they've got to keep their antennae up, but trying to find that balance and calling out successful activity, successful steps, rather than just punishing the negative, is always good.

Moderator – Lucy Hedges:
Yeah. Philippe, did you have something to add to that?

Philippe Vallée:
No, but it's similar to what you just said, Gareth. I think instead of having this name-and-blame approach, which is counterproductive, the company should create the notion that the quicker I report this incident to the response team (a fake or a true incident, by the way), the better the security response team can act and address the question. So I think it's very important that in any company you have an emergency number to call, so that you can report it as quickly as possible, because time is really of the essence. If we need to cut the server off from the organization and so on, the response team can do it quickly if they know that something is happening.

Moderator – Lucy Hedges:
Yeah. Oh, go on, Bill, if you’ve got something to add.

Prof. William H. Dutton:
Just to comment on that: first of all, all of us are users. Everybody from the top to the bottom of the organization and across society is an internet user, 5.3 billion users in the world. And so I'm not going to let them all off the hook. The point is not to pass the blame onto the user and fail to fix the issues that enable bad actors to get more access. But there is encouraging growth of what I would call, I wrote years ago about the need for, a cybersecurity mindset among everyone, all users at all levels. And there are really a lot of signs that that's happening: if I ask somebody over dinner or on a visit, you know, what do you think of cybersecurity, they'll tell me what they do and what they're thinking about and what kinds of emails they've gotten and how they protect themselves. This is the kind of thing that has to happen. We all have to have more of a cybersecurity mindset, where we're not thinking of doing this once a week or doing that when I'm told to by IT, but every day, so that it's just a normal, habitual part of your life that you think through the cybersecurity implications of everything you do, whether you download new software or answer an email or what have you. And there are signs that that's actually happening. But again, that's the challenge of the whole ecosystem of cybersecurity: that we continue to build a cybersecurity mindset so that malicious actors have a much more difficult time stealing your information or misinforming you.

Moderator – Lucy Hedges:
So staying within the realm of this cybersecurity mindset, we’ve just discussed solutions or measures put in place by businesses to try and help detect and kind of counter these kind of attacks. But how can psychologists be brought into this conversation to help support efforts to detect and counter these kinds of attacks?

Prof. William H. Dutton:
Well, I'm not a psychologist, so I'll answer that. First of all, you know, everyone wants to blame the technology: you're getting disinformation because you're in a filter bubble, or you're in an echo chamber caused by social media, or caused by the search engine that you're looking at. Bull! You are the biggest algorithm, okay? You are the worst algorithm in the lot, because you're the one who decides not to look at that but to look at this, to watch a particular channel and not to watch contrary information. So psychologists need to explain to users that they often have psychological propensities, like confirming their existing biases, and they have to understand that. And if you understand that, gee, yeah, we all try to confirm exactly our political beliefs, or what we want to hear, then you will challenge that more often. You'll try to diversify the information you see. You'll try to find counter-information and look at the arguments of the opponents and so forth. So, anyway, psychologists have to raise public awareness not about computing, but about ourselves, about what our propensities are in misusing computing.

Moderator – Lucy Hedges:
Yeah, go on, David.

David Chow:
Is there a way to use psychology to make people not use social media?

Prof. William H. Dutton:
Social media is fantastic. Eighty percent of the people in Britain use the internet. Eighty percent of internet users use social media. It's fine. But it's demonized. Think about it, even in cybersecurity areas: the internet is fantastic in terms of shopping, in terms of getting information. People have confidence in what they can find online through search, for example, as much as they do in broadcast television news. And I don't think they're wrong. But I think we're in a time frame in which we're demonizing all media. But I get it: you do see examples of bad use of social media and bad actors on social media. And there's good practice: people creating private social media groups on WhatsApp, things like that. People are responding to that, adapting to it. Don't throw away what is really valuable: networking people. Social media allows you to source the people you want to talk to, and not to rely on just the people in your office, just the people in your home, just the people in your school. Extremely valuable.

Moderator – Lucy Hedges:
Yeah. Go on, Philippe.

Philippe Vallée:
My two cents of psychology here is also to really work on the balance between security and user-friendliness. If the application is too easily accessible, whatever the application, it's perceived as weak. If the application is too hard to access or to operate, then people will try to bypass it. That's my two cents; it's a standard psychological behavior. So what is important, I would say, when we design an application as a company, when we put a product, an application, on the market, is that we think about the usability, the user-friendliness, the way it will interact with people. Obviously, a higher level of security will be required if the transaction at stake is important, but let's make sure that we always find the right balance between security and user-friendliness.

Moderator – Lucy Hedges:
Yeah. Yeah.

David Chow:
I do want to demonize social media. Sorry, as a practitioner, this is what I have to say. I'm not faulting people for using what they need to use to put their personal lives out there, to put their professional lives out there. I don't have a social media account except LinkedIn. And I thought that I was safe, right? I don't have anything; nobody's stalking me. Then, two weeks after I started working at Trend Micro, I got a text from the CEO. You mentioned earlier the CEO asking for a gift card; here it was an Apple gift card, and she said that she's at a conference, she just doesn't have time to talk, her minutes are running out. So I was like, okay, I just interviewed with the CEO, she just brought me on board, what do I do? I thought this must be fraud, but there was also an inkling that it might be real, because she travels quite a bit. So I called her assistant. She was actually overseas: I was in the US, she was in Taiwan, so it was three in the morning. The guy was upset. He basically said, yeah, she's here, she's not traveling. So I realized, okay, I made a mistake in coming close to believing that this could possibly be true. And I didn't put anything on social media except that I had started my job at Trend Micro, right? So I think bad actors are looking for ways to get whatever information they can. And when we talk about using AI, when we talk about social engineering: the more information that you put out there about yourself, the more vulnerable you actually become. So yes, I'm not discouraging you from putting your information out there. This is who you are; this is what you want to do. But I'm just saying that you also have to be extra vigilant about the issues you may encounter. And at the same time, from a practitioner standpoint, this is something that's actually frowned upon, because somebody can use AI to create some sort of personalized email or letter sent directly to you, knowing everything about you, making you believe that this person is actually sharing the right information. So you'll probably click on it. All you need to do is click on the wrong link, and it can actually gridlock your entire environment. That's why I'm demonizing the practice, more from the practitioner safeguarding standpoint. But if people want to continue to use it, that's their discretion.

Moderator – Lucy Hedges:
Yeah, yeah. Go on, Gareth.

Gareth Maclachlan:
I'm going to go back to that question around the role of psychologists, and particularly the role of psychologists in helping us understand the impact of, and recognize, our own biases. I'm going to admit to maybe two of my own biases right now. The first one for me is, you know, when thinking about this panel and thinking about the questions, I was thinking about it really from my own fairly myopic view of keeping companies safe. So I was thinking about enterprise security: how do we do that, what do we do for employees? And, you know, I was last in the Kingdom in 2014, 2015; this is my first time back in eight years. The scale of digital transformation, the changes that have happened in the Kingdom, are huge. So the first bit for me was realizing I was thinking only about enterprise security. Suddenly you start thinking about the role of trust and bias and cognitive exploitation in a country like this, which is focused on digital transformation, and what it means for citizens. You start to understand that there's a much broader aspect that we need to think about. So I think it's the combination of... yeah, even us sitting up here as technologists, we think about systems, we think about our own little space; we forget to open our minds each time.

Moderator – Lucy Hedges:
Yeah, it's kind of looking at it from a bigger picture point of view and the multifaceted nature of these kinds of attacks. And so time's running low, so I'm just going to move on to the next one. I think maybe offering up a bit of advice might be quite nice. You know, how can organizations and industries collaborate to share insights and best practices when it comes to addressing human weaknesses? I'm sure the audience would be interested to hear from you guys on this. Anyone can take it.

Philippe Vallée:
Let me start here, if I can. So I would point to this notion of segmentation. We have different names for it; we also call it zero trust, in the sense that you need to define, with a fairly strong policy, who has access to what: in terms of applications, in terms of data, you really segregate the different access levels. You monitor it, you check it. And this is a way, let's say, to control the level of security. Something that we will never say enough: implement, every time you can, multi-factor authentication. This is very, very strong advice. It's simple technology, yet you would not imagine how many companies today don't even have this kind of simple measure in place for all their applications, whether you access them inside or outside. And again, this policy of access management needs to be updated, because when people are changing jobs within the company, they are also changing responsibilities, so access needs to be updated too. It's tedious, it's heavy, but it's usually a good practice to have.

Moderator – Lucy Hedges:
Yeah, yeah, brilliant advice there. Go on, Gareth.

Gareth Maclachlan:
For me, I'd say there are three things I would usually say when I'm talking to a CISO. First of all, there's that concept of how, not who. So when an incident does happen, you don't blame the user; you focus on what failed in the systems and the processes and the controls, and you learn from that. The second bit for me is actually celebrating or publicizing when an employee does report something correctly, because it starts to reinforce that that's the behavior that's expected. You want people to protect the organization; we all have a duty to do that. But third for me: everyone in this room is very familiar with thinking about risk and about controls, and managing and identifying that. The one place where I do see that, as an industry, we've tended to maybe ignore or not think about the risk quite so much is some of the business applications that we're starting to adopt, particularly hosted software-as-a-service applications, whether those are finance systems, HR systems, customer care systems. We've seen so many organizations actually being compromised because a customer has uploaded a file that is supposed to be something they're sending through, or an individual has made a loan application through a banking portal, and what they've uploaded is a hostile file. So being able to scan and do the same checks on all applications as you do on email is one thing I'd always call out.

Moderator – Lucy Hedges:
Yeah. You guys have anything to add? Go on, David.

David Chow:
From my perspective, one thing is that you have to have visibility of your risks. That is, you don't know what you don't know; what's critical is that you need to know what's going on within your environment, so that you can start quantifying the risk level and then prioritize what needs to be addressed. But then also focus on people, process, technology. I know I sound like an IT practitioner, which I'm very proud of, but, you know, set aside the technology aspect; you hear a lot from various vendors, and I'm part of the vendor community, but the bottom line is that people and process can be a strength, and it can also be a weakness that one needs to explore, to make sure there is proper education and proper expectation. At the same time, without the proper processes and procedures laid in, people can't really build a cyber awareness culture, and I think that's what's so critical within the environment. You know, I agree with the panel members here that there is a lot of blaming the users; in a way, I think everybody should be on the hook. Perhaps set the right expectation, but then, after the expectation has been set, focus on everybody being vigilant and protecting the environment.

Moderator – Lucy Hedges:
Yeah. Absolutely. Bill, final words from you.

Prof. William H. Dutton:
One thing that probably should be said is that we did a global survey of people recently about whether they had more cybersecurity problems working from home or working in different locations and so forth. And we found out that actually working at home wasn't a problem. Most organizations are set up to support remote working, and they have a variety of strategies. We asked people whether their corporation or organization or institution supported them: did they have their own laptop from the company, do they use multi-factor authentication, and so forth. We were surprised: most companies and most organizations are providing a lot of support, so that people can work almost anywhere, at any time, within a safe environment, and they have relatively few problems. Even those in small and medium-sized enterprises are fairly well protected; companies are doing a pretty good job, actually, pretty good security in a sense. But for the very small organizations, and for individuals outside the corporate sector, there is more that could be done to support them.

Moderator – Lucy Hedges:
Yeah. So on that note, that brings us to the end of our conversation. Please give a well-deserved round of applause to my excellent, all-knowledgeable panelists: Philippe, Gareth, Bill and David. Thank you for a brilliant and insightful conversation. It's been great to pick your brains. So thank you. You're welcome.

David Chow

Speech speed

197 words per minute

Speech length

1660 words

Speech time

506 secs

Gareth Maclachlan

Speech speed

202 words per minute

Speech length

1827 words

Speech time

542 secs

Moderator – Lucy Hedges

Speech speed

201 words per minute

Speech length

1125 words

Speech time

336 secs

Philippe Vallée

Speech speed

157 words per minute

Speech length

812 words

Speech time

311 secs

Prof. William H. Dutton

Speech speed

151 words per minute

Speech length

1805 words

Speech time

716 secs