Cyber Safeguards Against Human Lapses

Whether you are a novice or a seasoned professional, one of the great advantages of working in cybersecurity is the availability of well-established industry frameworks and standards. These include concepts like Zero Trust Security, Secure by Design, Defense in Depth, and guidelines from the National Institute of Standards and Technology (NIST). These frameworks offer excellent guidance on security strategies, designs, and practices. However, selecting the most effective safeguards for your specific organization requires significant on-the-job experience and, quite possibly, lessons learned from past headline-making incidents.

Some organizations, driven by a fear of cyber threats, may choose to implement as many safeguards as their budget allows. Others might assess their risk profile, quantifying the likelihood and impact of various attack scenarios to determine the essential protections. However, neither approach is ideal. An excess of defensive solutions can create a false sense of security, expand the attack surface, and unnecessarily inconvenience users. Conversely, focusing solely on risks from potential attacks can overlook the critical human factor.

Humans are often the weakest link in cybersecurity. End users with poor cyber hygiene pose a constant threat: they fall for phishing attempts, skip security patches, or continue to run outdated software. However, it’s important to recognize that even the most seasoned tech staff can be part of this vulnerability. Network engineers, server administrators, and software developers are only human, and a momentary lapse in judgment could lead to dire consequences. These individuals often hold privileged credentials, which, if compromised, could allow attackers to create backdoors into servers, inject malware, or crack user passwords across the organization.

In my decades of experience as a tech chief, I have seen that the most damaging attacks exploit three key assets: the corporate network, a login identity, and a device. Unfortunately, many organizations fall victim to attacks because the human element, which serves as the first line of defense, is inadvertently compromised.

Capitalizing on Private IP Addresses

Personal computers are, by design, devices that individuals can use with minimal restrictions, including the freedom to share folders and files and to run freeware and shareware. However, the influx of thousands of “install-and-forget” Internet-of-Things (IoT) sensors and personal smart gadgets into corporate networks has made it increasingly difficult to strike a balance between mitigating endpoint exposures and providing a user-friendly experience.

One effective protection against attacks, particularly zero-day threats from the Internet, is the use of private IP addresses. These addresses are not routable on the public Internet, meaning that servers, applications, desktops, IoT devices, and other resources within this address space are not reachable from the outside. This effectively blocks malicious probes and connection requests from ever reaching them.
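
To make the idea concrete, here is a minimal Python sketch, using only the standard-library ipaddress module, that tests whether an address falls inside the RFC 1918 private ranges; the sample addresses are invented for illustration.

```python
import ipaddress

# RFC 1918 private ranges; addresses in these blocks are not
# routable on the public Internet.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(host: str) -> bool:
    """Return True if the address sits inside RFC 1918 private space."""
    addr = ipaddress.ip_address(host)
    return any(addr in net for net in PRIVATE_RANGES)

# Hypothetical sample addresses for illustration.
for host in ["10.1.2.3", "172.20.0.5", "203.0.113.7"]:
    print(host, "private" if is_rfc1918(host) else "publicly routable")
```

A similar check is available via the built-in is_private attribute on address objects, though that attribute also covers loopback and link-local space; spelling out the three blocks simply makes the RFC 1918 ranges explicit.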

Enforcing Network Admission Control

Internally, the combination of user lapses and the sheer number of desktop computers presents significant risks. It’s not uncommon to find misconfigured folders with open access to sensitive data, outdated software with known vulnerabilities, or desktops lacking the anti-malware provisions that should have been in place from day one.

With Internet ingress heavily guarded by firewalls and virtual private networks, adversaries often target users’ desktops as a soft entry point into the enterprise. From that foothold, attackers move laterally through the network, a tactic known in cybersecurity as lateral movement, conducting reconnaissance, exploiting identities, escalating privileges, and eventually targeting high-value resources.

To mitigate user lapses, it’s crucial to limit users’ rights to make indiscriminate changes to their desktops. If this isn’t feasible administratively, Network Admission Control (NAC) should be adopted to enforce compliance before allowing any desktop to connect to the network. The enrollment process should ensure that all legitimate and authenticated devices are registered centrally. Upon user login, NAC checks the device against a pre-qualified compliance baseline, flagging issues such as excessive rights or signs of infection. This is particularly valuable in complex environments with multiple operating systems, hardware, and software profiles.
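
As a rough sketch of that admission decision, the Python below models a compliance gate; the device attributes, checks, and thresholds are hypothetical placeholders rather than any particular NAC product’s API.

```python
from dataclasses import dataclass

@dataclass
class Device:
    mac: str
    registered: bool      # enrolled in the central registry
    patch_current: bool   # OS and software patches up to date
    antimalware_on: bool  # endpoint protection running
    admin_rights: bool    # user holds local admin rights

def admit(device: Device) -> bool:
    """Admit a device only if it passes every compliance check."""
    checks = [
        device.registered,        # must be centrally enrolled
        device.patch_current,     # no known outstanding patches
        device.antimalware_on,    # anti-malware must be active
        not device.admin_rights,  # flag excessive local privileges
    ]
    return all(checks)

# Hypothetical devices for illustration.
laptop = Device("aa:bb:cc:dd:ee:01", True, True, True, False)
rogue = Device("aa:bb:cc:dd:ee:02", False, False, True, True)
print(admit(laptop))  # True  -> placed on the corporate network
print(admit(rogue))   # False -> quarantined for remediation
```

In practice, NAC enforces the equivalent decision at the switch or wireless layer, typically shunting non-compliant devices onto a quarantine segment until they are remediated.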

Automating Security Patches and Configurations

A moment of human error can be costly for an enterprise. Relying too heavily on memory, written standard operating procedures (SOPs), or common practices often falls short when it comes to addressing anomalies. Server administrators are inundated with software updates, bug fixes, security patches, and configuration changes daily. A missed patch on one of thousands of servers might go unnoticed until it’s too late, especially if that server was supposed to be taken offline months ago but instead becomes the initial point of entry for a lateral-movement attack.

With frequent server additions, removals, and configuration changes, it’s essential for server administrators to maintain continuous visibility of all servers, be promptly alerted to security patches and dubious changes, and have confidence in an accurate asset list for remediation.

Patch and Configuration Management (PCM) automates asset tracking, checks for pending software updates and security patches, and applies remediation. As with any automation, it’s crucial to establish a process with identified control points before implementing the tools around it. In the case of PCM, ensuring that the enterprise keeps an up-to-date server inventory is pivotal to the overall cybersecurity operation.
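
To illustrate that control point, here is a minimal Python sketch that reconciles an authoritative server inventory against hosts actually observed on the network and flags outstanding patches; all host names and patch identifiers are invented.

```python
# Authoritative inventory: server name -> list of pending security
# patches. All names and patch IDs below are hypothetical.
inventory = {
    "web-01": [],
    "db-01": ["KB-2024-001"],
    "app-07": ["KB-2024-003", "KB-2024-009"],
}

# Hosts seen responding on the network (e.g. from a discovery scan).
observed = {"web-01", "db-01", "app-07", "legacy-12"}

# Flag hosts on the network but missing from the inventory --
# exactly the "should have been decommissioned" servers that
# become an attacker's initial point of entry.
for host in sorted(observed - inventory.keys()):
    print(f"ALERT: {host} is on the network but not in the inventory")

# Flag inventoried servers with outstanding security patches.
for host, patches in inventory.items():
    if patches:
        print(f"REMEDIATE: {host} is missing {', '.join(patches)}")
```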

Locking Up Privileged Credentials

Most user access to corporate resources is now protected by two-factor authentication (2FA). While not perfect, given risks like phishing and convincing counterfeit login pages, 2FA is still a reasonable safeguard for general user logins. However, when it comes to privileged credentials with full control and access over databases, log files, memory dumps, and the ability to spawn new processes across all servers, the stakes are much higher.
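
For readers curious about the mechanics, the snippet below sketches time-based one-time-password (TOTP) verification using the third-party pyotp library; it illustrates generic 2FA rather than any specific vendor’s implementation.

```python
import pyotp  # third-party: pip install pyotp

# Enrollment: generate a per-user secret once and store it server-side;
# the user loads the same secret into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user submits the 6-digit code shown in the app.
submitted_code = totp.now()  # simulated here; normally typed by the user

# Verification: the code is valid only within the current 30-second window.
print("2FA passed" if totp.verify(submitted_code) else "2FA failed")
```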

Integrating 2FA for privileged credentials in a heterogeneous environment, with a mix of third-party cloud applications, proprietary core business management software, and network and security appliances, is not always straightforward. Furthermore, the human factor often comes under scrutiny during audits. For example, should admin credentials be disabled when idle? Are there improper uses when there is no record of access? Should credentials be changed after each use? With staff turnover, disgruntled employees, and operational lapses, audits rightly highlight the need for action.

Like a bank managing deposits and withdrawals, organizations should use automation and tools to secure privileged credentials and allow access only upon approval. These tools can enforce audit trails, check out privileged credentials, set time limits for use, check them in upon expiry, and change passwords without the tech staff’s knowledge. Effectively, nobody should have access to privileged credentials unless cleared through the control process.
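
As a sketch of that control process, the toy Python vault below gates check-out on approval, applies a time-limited lease, and rotates the password on check-in; real privileged access management products do this far more rigorously, and every name here is hypothetical.

```python
import secrets
import time

class CredentialVault:
    """Toy vault: approval-gated check-out, timed lease, rotate on check-in."""

    def __init__(self, lease_seconds: int = 3600):
        self._passwords = {}  # account -> current password
        self._leases = {}     # account -> lease expiry timestamp
        self.lease_seconds = lease_seconds

    def store(self, account: str):
        self._passwords[account] = secrets.token_urlsafe(24)

    def check_out(self, account: str, approved: bool) -> str:
        if not approved:
            raise PermissionError("check-out requires approval")
        if account in self._leases:
            raise RuntimeError(f"{account} is already checked out")
        self._leases[account] = time.time() + self.lease_seconds
        # A real system would write an audit-trail entry here.
        return self._passwords[account]

    def check_in(self, account: str):
        self._leases.pop(account, None)
        # Rotate the password so the released credential is now useless.
        self._passwords[account] = secrets.token_urlsafe(24)

vault = CredentialVault(lease_seconds=900)
vault.store("db-admin")                        # hypothetical account
pwd = vault.check_out("db-admin", approved=True)
vault.check_in("db-admin")                     # old password no longer valid
```

The key design point mirrors the bank analogy above: the password is never known to a person for longer than the lease, and rotation on check-in means even a copied credential expires with the transaction.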

Final Thoughts

From the boardroom to the executive suite, very few would argue against investing in cybersecurity. However, one provocative thought I encountered is that even the top companies by market capitalization, despite significant cybersecurity investments, still get hacked. My response? The key to success isn’t just how much you spend, but the people on the job—those who can make or break your security efforts.

Ultimately, the effectiveness of cybersecurity lies not just in technology but in the people who implement, manage, and use it.

*Copyedit: ChatGPT
