Great graphic (from the people at Information is Beautiful), but it immediately raises the question of just how we measure severity, extent, and consequence in this context. Is there a clear, identifiable boundary between naive analysis, which concentrates on common, relatively unproblematic or pragmatic metrics, and more detailed (yet broader-spectrum) analysis, which engages the radiative speciation of potentially or likely unmanaged, unknown, or unprovable complexity?
The power-law distribution of events holds from small incidents with high frequency to large incidents with low frequency. Intuition suggests that the mass-density of consequences from high-frequency "minor" events more or less matches the mass-density of low-frequency "major" (catastrophic) events. So very many small incidents can be roughly equivalent, in aggregate, to a handful of very large incidents.
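This intuition can be made concrete with a minimal numeric sketch. Assuming (purely for illustration) that event frequency falls off as a power law in event size, `freq(s) ~ s**(-alpha)`, then for the particular exponent `alpha = 2` each logarithmic band of event sizes contributes roughly the same total consequence, so a band of many minor incidents carries about the same mass as a band of rare major ones. The function name and the specific exponent are illustrative assumptions, not drawn from the graphic itself.

```python
# Illustrative sketch: consequence mass under an assumed power-law
# frequency distribution freq(s) ~ s**(-alpha) over event size s.
# With alpha = 2, consequence density is s * s**(-2) = 1/s, whose
# integral over any decade of sizes is the same (ln 10), so many
# small events match a few large ones in total consequence.

def band_consequence(s_lo, s_hi, alpha=2.0, steps=10_000):
    """Total consequence (size * frequency) over sizes [s_lo, s_hi],
    approximated by a midpoint Riemann sum."""
    total = 0.0
    ds = (s_hi - s_lo) / steps
    for i in range(steps):
        s = s_lo + (i + 0.5) * ds
        total += s * s ** (-alpha) * ds  # consequence density = s * freq(s)
    return total

small = band_consequence(1, 10)      # many high-frequency minor incidents
large = band_consequence(100, 1000)  # a handful of low-frequency major ones
print(small, large)  # the two bands carry nearly equal total consequence
```

For exponents other than 2 the balance tips one way or the other, which is exactly why the measurement question above matters: whether the small or the large tail dominates depends on an empirical parameter we rarely know.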
What this means is that an aggressor in cyberspace need not seek or cultivate only large, catastrophic events. The aggregate effect of very many small breaches, intrusions, and thefts might carry an equivalent mass-density of unmanageable complexity and consequence.
It follows that large intrusion campaigns spread across many thousands of less severe instances can be more effective, because analytical integration is more difficult: defenders struggle to correlate thousands of minor events into a coherent picture of a single campaign.