By Hugh Boyes CEng FIET CISSP, Cyber Security Lead, Institution of Engineering and Technology (IET)
The UK’s Critical National Infrastructure comprises a number of complex cyber–physical systems, where increasingly sophisticated control is required to maintain and deliver critical services. Any failure or disruption of these systems may cause significant harm to society and serious economic damage. The ‘CIA’ triad that is at the heart of information assurance policies is not well suited to managing the vulnerabilities of this infrastructure. This article describes an alternative model that should enable closer alignment and collaboration between the functional safety and cyber security communities, and may therefore result in systems that are both safe and secure.
INTRODUCTION
The UK’s Critical National Infrastructure (CNI) comprises a geographically and technically diverse range of complex cyber–physical systems that are necessary for the country to function and for the delivery of the essential services upon which our daily lives depend. We take for granted that water will flow when we turn on a tap, that electricity is available when we turn on a switch and that food and fuel will be available when we want to buy them. We also expect emergency, financial and health services to be available when we need them. However, they all rely on a predominantly privately owned and operated infrastructure, which is largely invisible to us.
This infrastructure is becoming increasingly complex as operators introduce greater automation and more sophisticated controls in response to environmental and economic pressures. These so-called ‘smart’ systems, such as the Smart Grid, Smart Metering and Intelligent Transport, rely heavily on communications and information technologies to enable dynamic control in response to demand or consumption. Significant benefits will come where systems integration delivers operational gains, e.g. the production and storage of electricity from renewable sources. However, the introduction of ‘smart’ technologies and the greater integration of these systems is not without risk, particularly in relation to cyber security vulnerabilities.
This article examines the nature of cyber–physical systems and how they differ from the information processing systems used in administrative or banking functions. It discusses the limitations of the traditional information assurance ‘CIA’ triad when applied to these complex systems and illustrates how, with minor modifications, an alternative information assurance model can be used to address both cyber security and systems safety.
CHARACTERISTICS OF A CYBER–PHYSICAL SYSTEM
A cyber–physical system is one that combines computation and electronic processing with physical processes and actions. Examples of these systems can be found in power generation, water treatment and processing, manufacturing, transportation, etc. A key difference between a cyber–physical system and your personal computer, or a corporate server providing email and accounting services, is that it directly controls or influences outcomes in the real world. Cyber-physical systems may be standalone, but are increasingly being networked, using physical cables, wireless technologies, or the Internet.
Cyber–physical systems typically comprise the following components, which together form the control loop sketched after this list:
- Control and processing units – a system may have a single central processor or a set of distributed units; the sophistication of a unit is determined by the required functionality, e.g. simply turning electro-mechanical components on or off, or interacting continuously with sensors and actuators so as to maintain a physical process within a specified range.
- Sensors to provide the system with inputs from the physical environment and/or users. A sensor may measure and report the state of a physical component, e.g. whether a valve or cover is open or closed, or it may measure a parameter, e.g. temperature, flow rate, pressure, weight or light intensity.
- Actuators to enable the system to interact and influence the physical environment, e.g. valves, lock release mechanisms, solenoids, motors, fans, etc.
- Interfaces, which may include human-machine interfaces used by system operators enabling input of information or commands, and machine-to-machine interfaces enabling systems to interact with each other.
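To make the interplay of these components concrete, the following minimal sketch wires a level sensor, a simple on/off control unit and a valve actuator into a single control loop. It is purely illustrative; the process, threshold values and function names are invented for this example.

```python
# A minimal, illustrative cyber-physical control loop: sensor input,
# control decision, actuator output. All names and values are
# hypothetical, not drawn from any specific ICS product.

import random
import time

TARGET_LEVEL = 5.0   # desired fluid level (m) -- illustrative value
TOLERANCE = 0.2      # acceptable deviation (m)

def read_level_sensor() -> float:
    """Sensor: report the measured state of the physical process."""
    return TARGET_LEVEL + random.uniform(-1.0, 1.0)  # stand-in for real I/O

def set_inlet_valve(open_valve: bool) -> None:
    """Actuator: drive a physical component (here, an inlet valve)."""
    print(f"Inlet valve {'OPEN' if open_valve else 'CLOSED'}")

def control_step() -> None:
    """Control unit: simple on/off (bang-bang) decision logic."""
    level = read_level_sensor()
    if level < TARGET_LEVEL - TOLERANCE:
        set_inlet_valve(True)    # below range: admit more fluid
    elif level > TARGET_LEVEL + TOLERANCE:
        set_inlet_valve(False)   # above range: stop the inflow

if __name__ == "__main__":
    for _ in range(5):           # a real controller would loop indefinitely
        control_step()
        time.sleep(0.1)          # fixed scan interval, as in a simple PLC
```

A real PLC would execute such a scan cycle continuously and deterministically; the sketch shows only the sense-decide-actuate pattern.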
The control of cyber-physical systems is accomplished with industrial control systems (ICS), which, depending on the nature of the system, may be a Programmable Logic Controller (PLC), a Distributed Control System (DCS) or a Supervisory Control and Data Acquisition (SCADA) system. Unlike the technology we find on our desktops and in the corporate data centre, the operating lives of these control systems can be measured in decades. For example, some of the control systems in power stations are over 30 years old and employ operating systems and hardware that have long since been deemed obsolete in the general corporate environment.
The failure of a cyber-physical system can have a serious impact on our lives, on society as a whole and on the environment. Failures may result in loss of life, explosions, pollution and economic damage. The explosion and fire in December 2005 at the Buncefield oil storage depot in Hemel Hempstead is an example of what happens when a cyber-physical system fails. The root cause of the explosion was the overfilling of a fuel tank: highly flammable petroleum products spilled and formed an explosive vapour cloud. The overfilling itself arose from system failures, specifically the failure of both an automatic tank gauging system and an independent high-level safety switch. The former prevented operators from spotting the overfilling; the latter meant the pumps filling the tank were not automatically cut off.
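The layered-protection principle that failed at Buncefield can be sketched in a few lines. The example below is hypothetical (the threshold value, function and parameter names are all assumptions): either protection layer on its own should be sufficient to stop the pumps, and a stuck gauge should fail safe rather than silently permit filling.

```python
# A hedged sketch of the layered protection that failed at Buncefield:
# a primary gauging alarm plus an independent high-level trip. The
# threshold value and all names are hypothetical.

from typing import Optional

GAUGE_HIGH_ALARM = 9.0   # gauging-system alarm level (illustrative, m)

def pumps_permitted(gauge_reading: Optional[float],
                    high_level_switch_tripped: bool) -> bool:
    """Either layer, acting alone, should be able to stop the filling.

    At Buncefield both layers failed: the gauge stuck (so operators
    never saw the rising level) and the independent switch never
    tripped (so the pumps were never cut off automatically).
    """
    if gauge_reading is None:              # stuck/failed gauge: fail safe
        return False
    if gauge_reading >= GAUGE_HIGH_ALARM:  # layer 1: gauging-system alarm
        return False
    if high_level_switch_tripped:          # layer 2: independent hardware trip
        return False
    return True

# Example: a healthy gauge below the alarm level permits filling.
assert pumps_permitted(7.5, False) is True
# A stuck gauge (no reading) should stop the pumps on its own.
assert pumps_permitted(None, False) is False
```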
The cause of the Buncefield incident was a combination of a system design issue and a procedural failure during tank maintenance. However, cyber-physical systems are also potentially vulnerable to cyber-attacks, whether by a disgruntled former insider, as was the case at the Maroochy sewerage treatment plant in Australia, or by malware such as Stuxnet, which affected Iranian nuclear processing plants. Because of the spread of this malware, the latter incident had a global impact, requiring the patching and clean-up of vulnerable control systems that became infected across a wide range of industries.
INFORMATION ASSURANCE FRAMEWORKS AND CYBER SECURITY
The traditional information assurance model that forms the basis of many standards, policies and procedures is the ‘CIA’ triad, which comprises three attributes:
- Confidentiality – which encompasses privacy and the control and authorisation of access to data or information, including any ability to process, modify or delete it;
- Integrity – which addresses the trustworthiness of stored data or information and the authenticity of data and results;
- Availability – ensuring that systems and their associated business functions are accessible when needed.
For corporate IT systems, the order of the triad is seen as reflecting the priority of each attribute, i.e. maintaining confidentiality is often seen as more important than integrity and availability. From an information assurance perspective, this is understandable where the systems are, for example, storing personally identifiable information, sensitive financial data or material with significant intellectual property value. Corporate IT systems operate on IP-based networks; the technologies, policies and practices associated with the protection of these networks, servers and personal IT devices can also be applied to their connectivity to, and use of, the Internet.
Cyber security is much wider than these IP-based networks, the computing devices connected to them and the information that is stored, transferred or used within them. The cyber environment, or cyberspace as it is sometimes called, effectively comprises the interconnected networks of electronic, computer-based, wired and wireless systems. Cyber–physical systems go beyond the traditional Internet Protocol (IP); they can use a significant number of other protocols. They also include embedded devices that often operate independently within their own platforms. Where a cyber-physical system includes a wireless element, this could use a common protocol, e.g. Wi-Fi or Bluetooth, an RF modem, or light, e.g. laser or infrared. Given their nature, cyber-physical systems can be vulnerable to environmental disturbance, whether man-made or natural, e.g. electromagnetic compatibility issues, jamming and interference, or damage from lightning or solar storms.
Given the breadth of the issues regarding the cyber security of cyber-physical systems, e.g. those related to process, asset and environmental security, the ‘CIA’ triad does not adequately address the safety and control aspects of these systems. An alternative approach, which combines engineering good practice with information security, can be achieved by adapting the Parkerian Hexad, proposed by Donn B. Parker in 1998, with the addition of safety as a seventh element. Using this alternative approach requires cyber security to be considered in terms of the following seven attributes (a sketch of how they might structure a vulnerability assessment follows the list):
- Confidentiality – the system and associated processes shall be designed, implemented, operated and maintained so as to prevent unauthorised access.
- Possession and/or Control – the system and associated processes shall be designed, implemented, operated and maintained so as to prevent unauthorised control, manipulation or interference.
- Integrity – the system and associated processes shall be designed, implemented, operated and maintained so as to prevent unauthorised changes being made to assets, processes, system state or the system itself.
- Authenticity – it shall be possible to verify the authenticity of inputs to and outputs from the system, its state and any associated processes. It shall also be possible to verify the authenticity of components, software and data used within the system and any associated processes.
- Availability – the system and associated processes shall be consistently accessible in an appropriate and timely manner. In the event that the system or associated processes suffer disruption, impairment or an outage, it should be possible to recover to a normal system operating state in a timely manner.
- Utility – the system and associated processes shall be designed, implemented, operated and maintained so that the utility of their assets is maintained throughout their life cycle.
- Safety – the design, implementation, operation and maintenance of the system and associated processes shall not jeopardise the health and safety of individuals, the environment or any associated assets.
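As a purely illustrative sketch, the seven attributes could be used to structure a vulnerability assessment as follows; the record fields, risk scale and example entries are assumptions, not part of any published method.

```python
# An illustrative data structure for a per-attribute assessment of a
# cyber-physical system. The seven attribute names follow the extended
# hexad above; the fields and example values are assumed.

from dataclasses import dataclass, field
from typing import List

ATTRIBUTES = [
    "confidentiality", "possession_control", "integrity",
    "authenticity", "availability", "utility", "safety",
]

@dataclass
class AttributeFinding:
    attribute: str          # one of ATTRIBUTES
    vulnerability: str      # what could go wrong
    countermeasure: str     # proposed mitigation
    residual_risk: str      # e.g. "low" / "medium" / "high"

@dataclass
class SystemAssessment:
    system_name: str
    findings: List[AttributeFinding] = field(default_factory=list)

    def uncovered_attributes(self) -> List[str]:
        """Attributes with no recorded finding -- gaps in the analysis."""
        covered = {f.attribute for f in self.findings}
        return [a for a in ATTRIBUTES if a not in covered]

# Example usage: a single (hypothetical) finding against one attribute.
assessment = SystemAssessment("pumping station SCADA")
assessment.findings.append(AttributeFinding(
    attribute="possession_control",
    vulnerability="engineering workstation exposed to corporate LAN",
    countermeasure="network segmentation and unidirectional gateway",
    residual_risk="medium",
))
print(assessment.uncovered_attributes())  # six attributes still to assess
```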
By applying these seven attributes to cyber–physical systems, the risk of serious cyber security vulnerabilities should be reduced. These attributes are also relevant and applicable to the treatment of functional safety. A common problem with cyber–physical systems is the different perspectives employed by security and safety practitioners. Security specialists generally focus on Threats (the probable effects of a threat actor affecting a system), whereas safety specialists focus on Hazards (the probability of a hazardous event occurring). However, for cyber–physical systems there is an implicit non-functional requirement for the systems to be trustworthy in the face of adversity (a superset of Hazards and Threats). Adopting this extended list of attributes provides a common framework to facilitate collaboration between the security and safety specialists.
CYBER SECURITY AND CYBER–PHYSICAL SYSTEMS
In cyber-physical systems, cyber security is not just about preventing attacks; it is also about the systems operating in a trustworthy manner. Safety-critical systems are designed and operated so that, if an incident occurs, they fail safe. Increasingly, cyber–physical systems need not only to fail safe, but also to fail secure. For example, a system malfunction in the high-security wing of Miami’s TGK Jail in August 2013 caused the cell-locking system to unlock cell doors in an unplanned and unauthorised manner. A consequence of this failure of a specialised industrial control system was the hospitalisation of a prisoner, who came close to losing his life in an incident that followed the unplanned opening of the doors.
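The distinction between failing safe and failing secure can be made concrete with a toy lock controller. The sketch below is hypothetical and deliberately simplified: a fire escape should fail safe (unlocked), whereas a cell door should fail secure (locked), which is the default the Miami system evidently lacked.

```python
# A simplified, hypothetical sketch contrasting fail-safe and
# fail-secure behaviour for an electronic lock when the controller
# detects a fault it cannot resolve.

from enum import Enum

class FailurePolicy(Enum):
    FAIL_SAFE = "unlock"    # e.g. a fire escape: people must get out
    FAIL_SECURE = "lock"    # e.g. a cell door: must stay locked

def locked_on_fault(policy: FailurePolicy) -> bool:
    """Return True if the door should be locked on an unresolvable fault."""
    return policy is FailurePolicy.FAIL_SECURE

# A cell-door controller should default to locked on failure; the
# Miami incident shows the consequence of getting this default wrong.
assert locked_on_fault(FailurePolicy.FAIL_SECURE) is True
assert locked_on_fault(FailurePolicy.FAIL_SAFE) is False
```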
Two attributes that are significant in cyber-physical systems, and missing from the ‘CIA’ triad, are possession and/or control, and utility. A critical issue for control systems is the avoidance of failure modes in which an operator is unable to control the system, through either loss of control or loss of view. In the former, the operator may be able to see a fault, but the control system does not respond to the operator’s actions to remedy it. In the latter, the operator loses situational awareness, e.g. of a system alarm state or anomalous system behaviour. These failure modes could arise from network floods or other malicious traffic, or from the presence of malware on the system, as was the case with Stuxnet. An example of a utility issue is the loss of the Mars Climate Orbiter, where one design team worked in metric units and another in imperial units. As a result, the Orbiter entered the Martian atmosphere incorrectly and was destroyed.
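The Orbiter’s loss illustrates how utility can be protected by type discipline. The following sketch, which is illustrative and in no way reflects the actual flight software, tags each quantity with its unit and refuses to combine mismatched values.

```python
# A minimal sketch of unit-tagged quantities, illustrating how the
# Mars Climate Orbiter class of error (silently mixing metric and
# imperial values) can be caught at the point of use.

from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str  # e.g. "N*s" (metric impulse) vs "lbf*s" (imperial)

    def __add__(self, other: "Quantity") -> "Quantity":
        if self.unit != other.unit:
            # Fail loudly instead of silently summing mismatched units
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        return Quantity(self.value + other.value, self.unit)

metric = Quantity(10.0, "N*s")
imperial = Quantity(2.0, "lbf*s")

total = metric + Quantity(5.0, "N*s")  # fine: units agree
print(total.value, total.unit)         # 15.0 N*s

try:
    metric + imperial                  # mismatched units
except ValueError as err:
    print(err)                         # unit mismatch: N*s vs lbf*s
```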
As explained, cyber–physical systems are not just computer networks, so it is essential that the characteristics and behaviour of sensors and actuators are understood, together with the means by which they connect to the control system. The increasing use of wireless technologies introduces significant vulnerabilities, e.g. the potential to access the system from outside a site’s perimeter, the relative insecurity of many wireless links, and the susceptibility of wireless links to jamming or interference. An example of a system component that is vulnerable to jamming or interference is a GPS receiver used for timing and positioning purposes. The weak satellite navigation signals on which the GPS receiver depends are easy to jam, interfere with and even spoof, so the consequences of losing the GPS data feed, or of receiving an inconsistent feed, should be taken into account in the design of any critical infrastructure system that relies on GPS signals for timing or spatial information.
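One common mitigation for this weakness is to sanity-check the GPS feed against a local holdover clock and reject timestamps that drift implausibly. The sketch below is a simplified illustration; the drift threshold and interfaces are assumptions.

```python
# A simplified sketch of sanity-checking a GPS time feed against a
# local holdover clock, to detect jamming or spoofing. The drift
# threshold and function names are hypothetical.

import time

MAX_DRIFT_S = 0.05   # plausibility threshold, chosen for illustration

def gps_time_plausible(gps_epoch_s: float, holdover_epoch_s: float) -> bool:
    """Reject a GPS timestamp that disagrees with the local holdover
    clock by more than the allowed drift -- a possible spoofed or
    jammed feed."""
    return abs(gps_epoch_s - holdover_epoch_s) <= MAX_DRIFT_S

# Example: a spoofed feed an hour in the future is rejected.
now = time.time()                 # stand-in for a disciplined oscillator
print(gps_time_plausible(now + 3600.0, now))   # False -> fall back, alarm
print(gps_time_plausible(now + 0.01, now))     # True  -> accept feed
```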
CONCLUSIONS
In a world where society depends on the safe and secure operation of complex and potentially autonomous cyber-physical systems, we need to ensure that these systems operate in a dependable manner. To achieve this will require a better understanding of system vulnerabilities and their impact on cyber security. We also need to recognise that, due to the longevity of most cyber-physical systems, they tend to develop through incremental upgrades and enhancements. Attaching new wireless or Internet-facing components to a legacy system may severely compromise its integrity.
It is also important to recognise that system maintainers and operators are potentially the greatest threat to complex cyber-physical systems. Whether through careless use of removable media, unauthorised connection of personal IT devices to a control room network, accessing email and the Internet from control workstations, or the actions of a disgruntled employee or contractor, insiders are a threat to the cyber security of these systems. The seven attributes outlined in this article can form the basis of an analysis of the vulnerabilities of a cyber-physical system and identify appropriate countermeasures.
Given the increasing sophistication of the potential attackers, whether from a hostile state, organised crime, or individuals, there is a need for constant vigilance and improvements in the cyber security of critical infrastructure. The seven attributes affect both the overall system design and the trustworthiness of its software. The Trustworthy Software Initiative (TSI), discussed in another article in this issue, provides a Trustworthy Software Framework (TSF) for assuring the quality of the software. By adopting the TSF, and considering the seven attributes and their impact on the operation and performance of the software, system owners and operators can be assured of a design which affords improved cyber security and resilience. ■
ABOUT THE AUTHOR
Hugh Boyes is Cyber Security Lead at the IET, where he advises on the development of good practice and cyber security skills initiatives for engineering and technology communities. His work focuses on the design and operation of cyber-physical systems. Hugh is a Principal Fellow in the Cyber Security Centre at WMG within the University of Warwick.