As I continue to examine the systems on the internet today that potentially harbor digital diseases and vulnerabilities, ones that could lead to digital illness and even digital pandemics, I feel it is essential to discuss the notion of digital negligence.
Now, this topic has the potential to be a minefield. Establishing negligence is generally tough: it requires knowledge of the situation at hand and of what counts as a reasonable response to it. There must also be some acceptable baseline, along with methods for achieving that baseline.
Negligence can be difficult to define. By comparison, establishing that an industrial company is engaged in environmental pollution requires evidence of the pollution, measurement against an acceptable baseline, and an established way to address deviations. Even then, circumstances can lead to various departures from established criteria, not least of which is how good an organization’s legal team is.
With cybersecurity, we fall far short of meeting any of these requirements. We have not established acceptable levels for what constitutes a “cyber pollutant,” nor any method for determining the level at which it becomes egregious. We do not even have a process for driving toward the requisite baseline.
What we do have, however, and Arctic Security is a good example, are tools that can help us build a knowledge base of potential cyber pollutants. We also have ways to identify organizations that need to address such known pollutants or, at a minimum, to harden systems that are particularly vulnerable to such contaminants and could be used as vehicles to spread them throughout the networked world.
Now the tough question becomes: how do we take this information and identify potentially negligent behavior, and what would be the best way to change that behavior, perhaps short of fines and sanctions against known offenders?
In the previous article, I discussed cybersecurity labeling designed to identify products less likely to cause cyber pollution. There are also efforts to address software negligence at the source, such as the BSIMM (Building Security In Maturity Model), a measurement framework that organizations developing software can use to improve their security baseline.
Perhaps we could develop actionable criteria by tying knowledge of vulnerable and affected systems to something like BSIMM. What other pieces of this puzzle are we missing to form a complete picture?
It's worth considering.