Too often, cyber security is considered to be exclusively the domain of information technology (IT) and cyber security specialists. This is unfortunate, as we are progressing towards a future in which an interdisciplinary approach will be crucial for dealing with complex systems and their systemic fragilities – systems in which cause and effect are not evident at all.
As an IT auditor and futurist, I am in a privileged position to approach problems from a broader contextual perspective; the bigger picture, the macro patterns crystallising over a long period of time – and what they mean for the assurance industry.
The Internet was initially perceived by many as a social system, promoting openness and data-sharing. Cyberspace was not ‘built’ for the purpose of securing data. More recently, the importance of securing data has been recognised, and privacy and cyber security advocates, among many other role-players, have influenced governments to enact cyber security and privacy legislation in many countries, including South Africa.
The introduction of cyber security has given rise to two seemingly irreconcilable paradigms, namely openness and security, that need to be balanced in managing expectations around cyber security. Too many executives and company boards have unrealistic expectations about the prevention of cybercrime.
Furthermore, cyberspace is a complex system, with more than three billion people (each with free will) interacting with each other daily – over and above the billions of machines and devices facilitating those interactions. And, believe it or not, political, economic and social conditions in the world, to name a few contextual factors, are part of the bigger cyber security picture.
How fragile are we?
In 2014, the Central Intelligence Agency and National Security Agency did not anticipate a scenario in which Russian intelligence operatives could pass on orders and amass troops near Ukraine’s border the old-fashioned way – using couriers instead of electronic communication. As a result, they were not aware of Russia’s plan to invade Ukraine. By reverting to traditional methods, Russia mounted an essentially anti-systemic response: disengagement from the specific technology (that is, cyberspace).
The question is: can your organisation still disengage from cyberspace? Most probably not. In our urge for efficiency and effectiveness – that is, by automating all tasks and connecting everything to the Internet – we create fragile systems. A reduction in redundancy or alternative (even manual) options creates fragility. And if your organisation is inevitably connected to the Internet, it is open to its dark forces – and you will have to manage amidst the systemic chaos.
Flawed thinking – our downfall
The captain of the Titanic, E J Smith, once said: ‘But in all my experience, I have never been in any accident … of any sort worth speaking about. I have seen but one vessel in distress in all my years at sea. I never saw a wreck and never have been wrecked nor was I ever in any predicament that threatened to end in disaster of any sort.’
We create our own Black Swans by being too optimistic and over-confident in our capabilities to achieve success. As Smith relied too much on the Titanic’s design to resist all disasters imaginable, we also place too much confidence in our ‘secure’ designs to prevent security incidents. Just as Smith did, we architect cyber security on the basis of success – which is an inaccurate assumption. In the future we will have no other choice but to architect on the basis of failure – and to adopt methods to deal with failures. Cyber security challenges will increase and systemic breakdowns will occur. It is inevitable.
Why traditional risk management and best practice are not enough to deal with cyber risks
Professor David Snowden1 makes a case with his Cynefin framework as to why best practice is not suitable for complex systems. Best practice is past practice, and suitable for simple systems. Simple systems are stable systems, with clear cause-and-effect relationships. In simple systems, the right answer is self-evident and undisputed. Simple systems are areas where very little changes, such as problems with an order processing system. Adjacent to a simple system is a complicated system, which belongs to the domain of good practice (and the expert) and where there might be multiple right answers. Although there is a clear relationship between cause and effect, not everyone can see it. In his words, a complicated system is like a Ferrari – it is possible to disassemble it and to put it together again, with the help of an expert, of course.
A complex system, however, derives its complexity from the relationships and interactions of multiple components or subsystems. For example, once you have uprooted your garden or the rainforest, you cannot restore it to its previous state – things will never be the same again. This is the domain of emergence, where solutions cannot be imposed on the system. Solutions will emerge as you continue, as they do during a crisis. To illustrate: just after 9/11, airplanes were grounded in the USA. Shortly after the crisis, an effort was made to plan for the next crisis – for a situation in which airplanes may have to be grounded again. The conclusion of this futile exercise was that, although a list of airfields can be compiled with additional information about the sizes of airplanes, the number of planes each airfield can take and so on, it is impossible to determine in advance which airplane can land where. The answers and solutions will emerge during the crisis.
Traditional risk management, as well as best and good practice, does not deal adequately with Black Swans and unpredictability. As an organisation you do have control over simple and complicated systems, but you do not have control over a complex system like cyberspace, where more than three billion people currently make up the components of the system. The actions of these cyberspace players are unpredictable. If the expansion plans of companies like Google and Facebook succeed, we can expect even more cybercrime when eight or nine billion people are on the Internet in the future.
Systemic cybercrime: why are we heading for systemic chaos?
The main reason cited for the foreseen systemic chaos is our urge for efficiency and effectiveness, which creates fragile systems. The increase in religious, political and geopolitical conflict, and in economic and job polarisation worldwide, could lead to the proliferation of organised crime syndicates that want to exploit government and corporate systems. Other factors that contribute to this systemic chaos include:
The rapid pace at which automation is being introduced
The growing share of the global population coming online – people with free will whose actions cannot be predicted, and
The increase in integration of all systems, platforms and people
In certain circumstances the best predictor of future behaviour is past behaviour. Up to now we have not been able to prevent the massive number of known cyber security breaches. What makes us think that we will be able to prevent cyber incidents in the future, whilst operating in a complex system? We have already noticed systemic breakdowns and uncontrollable situations – for example, ransomware-as-a-service consoles, now commercially available on the Internet, which allow a buyer to attack whoever is next in a very long line of cyber victims.
What is the solution?
There are a number of possible options, including some that are not so conventional. This article will briefly focus on two.
First, we need to deal with our obsession with preventive controls and best practice in a complex system. Preventive controls could not prevent the known and unknown incidents of the past, and they will be even less likely to do so in the future. We will have to learn to manage amidst the systemic chaos. We need to architect not on the basis of success, using preventive controls that may or may not be effective, but on the basis of failure. We should be in a position to detect breaches within a very short period of time – and correct them. The question to a Chief Information Officer should not be: ‘Why did your controls not prevent the incident?’ but rather: ‘How quickly did you notice it – in one day or 100 days?’ and ‘What did you do about it?’
To use auditing jargon: in the future we will have no choice but to spend more on detective and corrective controls. Ransomware has already proven to us that corrective controls are basically the only type of control left in the toolkit to rely on.
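To make the ‘one day or 100 days’ question measurable, the sketch below is a minimal illustration – the incident dates are invented and a real incident register will look different – of how an organisation might track mean time to detect and mean time to correct, the metrics that matter when architecting on the basis of failure.

```python
# Minimal sketch with invented data: measure how quickly incidents were
# detected and corrected, rather than assuming prevention will succeed.
from datetime import datetime
from statistics import mean

# Each record: (occurred, detected, corrected) timestamps for a past incident.
incidents = [
    (datetime(2016, 1, 3), datetime(2016, 1, 4), datetime(2016, 1, 6)),
    (datetime(2016, 2, 10), datetime(2016, 5, 20), datetime(2016, 6, 1)),
    (datetime(2016, 3, 15), datetime(2016, 3, 15), datetime(2016, 3, 16)),
]

mean_time_to_detect = mean((detected - occurred).days for occurred, detected, _ in incidents)
mean_time_to_correct = mean((corrected - detected).days for _, detected, corrected in incidents)

print(f"Mean time to detect:  {mean_time_to_detect:.1f} days")
print(f"Mean time to correct: {mean_time_to_correct:.1f} days")
```

Tracked over time, figures like these show whether detection and correction are actually improving – the answer a board should be asking for.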
Second, the 2013 remote hacking of a dam’s control systems in the USA (only announced in 2016) by a group of Iranian hackers illustrated the potentially devastating consequences of having critical networks connected to the Internet. The attacker gained unauthorised access to the dam’s systems but, fortunately, could not operate the sluice gate because it happened to be disconnected for maintenance. It may therefore be necessary to prevent such critical networks from being accessible online at all.
While conventional wisdom tells us that prevention is far better than dealing with the effects of a cyberattack, it is now very clear that we might not have a choice but to rely more on detection and correction and, in extreme cases (that is, critical national infrastructure, like nuclear plants or dams), it might even be better not to allow certain high-risk networks to be online at all. We have to accept a future in which we carefully consider the interfacing of systems.
An anti-systemic response is an approach of disengaging from the rules of a system you cannot beat – guerrilla warfare as opposed to a conventional army’s tactics, for example. Avoidance might be a suitable alternative, in extreme cases, in a world where overconfidence in preventive controls (sometimes to the extent of arrogance) has let us down.
What does this all mean for the assurance industry?
From an assurance perspective, we are interested in determining that, for a given financial period, the financial reporting systems processed data completely and accurately, and that the data was generated in an authorised manner. In the future, the assurance profession will have to focus more on the integrity of detective and corrective controls. For example, after data loss due to a security incident, the audit should focus on whether the data recovery controls in place reliably restored the data to a verified state of integrity – and whether evidence of that recovery is available and verifiable. If an entity could not restore its operational and financial systems, which has happened in the past, a modified audit opinion might be on the cards. This would naturally be an unpleasant outcome, especially if investors or bankers are on a board’s case.
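As one illustration of what verifiable evidence of recovery could look like, the sketch below is a minimal example, not a prescribed audit procedure; the file paths and manifest format are assumptions. It compares restored files against a checksum manifest created before the incident and flags any file whose integrity cannot be confirmed.

```python
# Minimal sketch (hypothetical paths and manifest format): verify that restored
# files match a pre-incident checksum manifest, giving the auditor verifiable
# evidence of recovery to a known state of integrity.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest_file: Path, restored_dir: Path) -> list:
    """Compare restored files against the pre-incident manifest; return mismatches."""
    manifest = json.loads(manifest_file.read_text())  # {"relative/path": "expected digest"}
    failures = []
    for relative_path, expected in manifest.items():
        restored = restored_dir / relative_path
        if not restored.exists() or sha256_of(restored) != expected:
            failures.append(relative_path)
    return failures

# Example usage (paths are illustrative assumptions):
# problems = verify_restore(Path("manifest.json"), Path("/restore/2016-06-30"))
# print("Recovery verified" if not problems else f"Integrity failures: {problems}")
```

The output of such a comparison, retained with the manifest itself, is the kind of evidence an auditor could re-perform and rely on.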
NOTE
1 David J Snowden and Mary E Boone, A leader’s framework for decision making, Harvard Business Review, November 2007, 1–9.
Author: Michiel Jonker is Director: IT Advisory at Grant Thornton