The Human Elements of Cybersecurity: Privacy, Ethics, Usability, and Responsibility
A recurring theme I have found in security industry discussions since the start of 2020 is the “human element” of cybersecurity, a topic that I highly value. Information security professionals often interpret the human component of IT as “human fallibility”: the weakest link in a company’s data security apparatus. You can’t blame them. In many cases, cybersecurity incidents are enabled by human error, malicious intent, or ignorance. In fact, according to a study by IBM, human error is a contributing cause in 95% of cybersecurity breaches. It therefore makes sense that the industry is increasingly investing in technologies, strategies, and standards that minimize these human risks. That is one of the primary reasons behavior monitoring, insider threat detection, and data loss prevention tools are designed to reduce threats from both malicious and accidental human actors.
However, this isn’t a diatribe about the obvious predicament facing today’s data security landscape. Instead, I’ll look at the other side of the human equation: the users we are supposed to guard. Humans aren’t just resources that you can force to comply with security best practices. We have feelings, concerns, and needs, and an effective security strategy will need to address these human elements.
For example, if you implement a strong password policy without addressing the human tendency to seek convenience, people will find a way to bypass the rule. They will write passwords down in plain text, save them in their browser, or reuse the same passwords on unsanctioned or personal sites. You will need to provide an efficient alternative, such as single sign-on (SSO) or a password vault, to help them manage their passwords easily.
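To make the policy-versus-convenience trade-off concrete, here is a minimal sketch of a password policy check in Python. All names, thresholds, and the tiny common-password list are illustrative assumptions, not a specific product’s logic; the point is that a check like this only works in practice when paired with tooling (SSO, a vault) that makes compliance convenient.

```python
import hashlib

# Tiny stand-in for a real breached/common-password list.
COMMON_PASSWORDS = {"password", "123456", "letmein", "qwerty"}

def check_password(password: str, previously_used_hashes: set[str]) -> list[str]:
    """Return a list of policy violations (an empty list means the password passes)."""
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("appears in a common-password list")
    # Reject reuse by comparing a hash of the new password with stored hashes
    # of old ones. (A real system would use a slow, salted hash such as bcrypt,
    # not bare SHA-256.)
    digest = hashlib.sha256(password.encode()).hexdigest()
    if digest in previously_used_hashes:
        problems.append("reuses a previous password")
    return problems
```

A policy engine like this tells users “no” three different ways; without an SSO or vault alongside it, each “no” nudges them toward sticky notes and browser autofill.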
Similarly, let’s consider workplace monitoring. Many companies use such tools to improve productivity and to reduce insider threats and data leaks. However, if you ignore employees’ right to privacy, you risk legal ramifications, not to mention cultural rifts, loss of trust, and many other issues that will outweigh any security benefit you achieve. In other words, you need to adopt solutions and policies that deliver not just functional security but also inclusion. Let’s take a look at how this is accomplished.
In recent years, data privacy has become a central topic of conversation among cybersecurity professionals thanks to the introduction of GDPR, CCPA, and similar laws. On the one hand, you need to protect your customers’ data, your intellectual property, and your business secrets from external and insider threats. At the same time, you have an obligation to uphold your employees’ privacy. The solution is to use autonomous systems, such as employee monitoring, UEBA, and DLP tools, to implement endpoint security without inadvertently capturing employees’ personal data and exposing yourself to privacy violations. For example, suspend monitoring and keystroke logging when users visit their bank’s website or access their personal email accounts, and use anonymization or smart blackout features to redact PII, PFI, PHI, and other private data. This can be tricky and requires modern solutions with such capabilities.
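The two techniques just described, suspending capture on private sites and redacting what is captured elsewhere, can be sketched roughly as follows. The domain list and regex patterns here are crude, hypothetical examples for illustration only; real DLP products use far richer detection than this.

```python
import re

# Hypothetical denylist of personal banking/email domains where capture stops.
PRIVATE_DOMAINS = {"mybank.example.com", "mail.example.com"}

# Deliberately rough demo patterns for two kinds of PII.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def should_capture(domain: str) -> bool:
    """Suspend monitoring/keystroke logging entirely on private domains."""
    return domain not in PRIVATE_DOMAINS

def redact(text: str) -> str:
    """Blackout PII before captured text is ever stored or reviewed."""
    text = SSN_RE.sub("[REDACTED-SSN]", text)
    return EMAIL_RE.sub("[REDACTED-EMAIL]", text)
```

The design choice matters: `should_capture` keeps personal activity out of the record altogether, while `redact` protects whatever is legitimately collected, and a privacy-aware deployment generally needs both layers.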
While data security is undoubtedly a good thing, it’s also a nuanced issue that can present companies with an ethical dilemma. On paper, you are protecting your organization, customers, and employees from a devastating data loss event. In reality, things aren’t so black and white, and it’s easy for motivations to get muddled when working to protect customer data.
For instance, employees might wonder why you are implementing specific security measures or monitoring initiatives. Is it because you want to increase workplace productivity? Do you truly need to scan their emails to achieve that? While the goal of data security is ethical, the defensive measures need to be proportionate. Defining the purpose of monitoring and security, and establishing boundaries and transparency protocols, is key to avoiding such ethical pitfalls.
Security shouldn’t compromise usability; instead, it should enable freedom and creativity. Fortunately, with the introduction of machine learning/AI, NLP, context-based classification, and other software advances, companies can balance security and usability. However, you still need to spend time configuring those solutions or training them with enough data to minimize false positives. In addition, the success of your security initiative will suffer if you block a workflow without offering an alternative. For example, you might think blocking the use of public cloud drives is a sensible precaution. However, if you don’t allow another channel, such as a private cloud or a ‘cloud-like’ solution such as Transporter or Space Monkey, employees will most likely share those files over email, USB drives, or other less secure methods, ultimately making it even harder to enforce your security policy.
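To illustrate how context-based classification reduces false positives, here is a toy sketch of the idea (the idea only; no vendor’s actual implementation is implied, and the pattern and keyword list are assumptions). Instead of flagging every 16-digit number, it flags one only when nearby text suggests a payment context.

```python
import re

# A card-number-shaped pattern: four groups of four digits.
CARD_RE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b")
# Hypothetical context keywords that make a match credible.
CONTEXT_WORDS = {"card", "visa", "mastercard", "cvv", "payment"}

def looks_sensitive(text: str, window: int = 40) -> bool:
    """Flag only when a card-like number has payment-related context nearby."""
    for match in CARD_RE.finditer(text):
        start = max(0, match.start() - window)
        nearby = text[start:match.end() + window].lower()
        if any(word in nearby for word in CONTEXT_WORDS):
            return True
    return False
```

A plain pattern match would block an invoice containing an order number like `1234 5678 9012 3456`; the context window lets that pass while still catching an email that says “here’s my card number,” which is exactly the usability-versus-security balance the paragraph above describes.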
Data security isn’t just the responsibility of security experts. To be successful, data security has to be a collective effort that extends to all levels of the company. Indeed, everything from election hacking and deep fakes to the weaponization of information can’t be addressed if we rely solely on security professionals and technologies.
The problem is too big for a single group to handle. So, what can we do as security professionals to drive mass engagement? Most importantly, we can evangelize the importance of data privacy best practices.
As security professionals, we can all do more in 2020 and have a greater impact. Educate and train people whenever you have a chance. Skills like avoiding phishing emails, detecting the signs of social engineering, acting responsibly online, using basic protections, and reporting spam calls are topics we can all share on our social channels. The more we share, the more awareness we create.
It’s easy to pass the buck and blame users when they do something wrong, but as security professionals, we are the ones responsible for weighing the hard trade-offs between security and privacy, ethics and profitability, usability and compliance, responsibility and authority. Developing a human-centric approach to security will make it more approachable to our users and, in turn, propel its success.
This article was originally published on IT Security Central and reprinted with permission.