Cyber-physical systems are at the core of our current civilization. Countless examples dominate our daily life and work, such as driverless cars that will soon master our roads, implanted medical devices that improve many lives, and industrial control systems that govern production and infrastructure. Because cyber-physical systems manipulate the real world, their malfunction or compromise poses a danger in many applications. Safety and security are therefore essential properties of these indispensable systems. The long history of systems engineering has demonstrated that system quality properties—such as safety and security—depend strongly on the underlying system architecture: satisfactory quality properties can only be ensured if the fundamental system architecture is sound. The development of dependable cyber-physical architectures in recent years suggests that two harmonized architectures are required: a design-time architecture and a run-time architecture. The design-time architecture defines and specifies all parts and relationships, assuring the required system quality properties. In today's complex systems, however, ensuring all quality properties under all operating conditions at design time is not possible. An additional line of defense against safety accidents and security incidents is therefore indispensable, and this must be provided by the run-time architecture. The run-time architecture consists primarily of a protective shell that monitors the system during operation. It detects anomalies in system behavior, interface functioning, or data—often using artificial-intelligence algorithms—and takes autonomous mitigation measures, attempting to prevent imminent safety accidents or security incidents before they occur. The core of this paper is the protective shell as a run-time protection mechanism for cyber-physical systems. The paper takes the form of an introductory tutorial and includes focused references. © 2023, The Author(s).
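The protective-shell idea described above—monitoring a running system, flagging anomalous behavior, and triggering autonomous mitigation—can be illustrated with a minimal sketch. This is not the paper's implementation: the class name `ProtectiveShell`, the z-score anomaly test, and the `"enter-safe-state"` mitigation action are all hypothetical choices made here for illustration.

```python
# Minimal sketch of a protective-shell run-time monitor (hypothetical design).
# The shell keeps a sliding window of recent sensor readings, flags a new
# reading as anomalous when it deviates too far from the window's statistics,
# and invokes a mitigation action instead of passing it on.
from dataclasses import dataclass, field
from statistics import mean, stdev


@dataclass
class ProtectiveShell:
    window: list = field(default_factory=list)
    window_size: int = 50
    threshold: float = 3.0  # z-score beyond which a reading is anomalous

    def observe(self, reading: float) -> bool:
        """Return True if the reading is anomalous (mitigation required)."""
        anomalous = False
        if len(self.window) >= 10:  # need a baseline before testing
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomalous = True
        if not anomalous:  # only normal readings extend the baseline
            self.window.append(reading)
            self.window = self.window[-self.window_size :]
        return anomalous

    def mitigate(self) -> str:
        # Placeholder: a real shell would degrade to a safe state,
        # isolate interfaces, or alert an operator.
        return "enter-safe-state"


shell = ProtectiveShell()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 9.0]
for r in readings:
    if shell.observe(r):
        action = shell.mitigate()  # the final reading (9.0) triggers this
```

In a real system the anomaly detector would likely be a learned model over behavior, interface, and data channels, as the abstract notes; the simple statistical test stands in for that component here.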
This publication provides a catalog of security and privacy controls for information systems and
organizations to protect organizational operations and assets, individuals, other organizations,
and the Nation from a diverse set of threats and risks, including hostile attacks, human errors,
natural disasters, structural failures, foreign intelligence entities, and privacy risks. The controls
are flexible and customizable, and are implemented as part of an organization-wide process to
manage risk. The controls address diverse requirements derived from mission and business
needs, laws, executive orders, directives, regulations, policies, standards, and guidelines. Finally,
the consolidated control catalog addresses security and privacy from a functionality perspective
(i.e., the strength of functions and mechanisms provided by the controls) and from an assurance
perspective (i.e., the measure of confidence in the security or privacy capability provided by the
controls). Addressing functionality and assurance helps to ensure that information technology
products and the systems that rely on those products are sufficiently trustworthy.
This paper outlines the variables that shape government decisions to intervene, via trade policy and investment rules, in these markets.
This article seeks to understand which actors desire securitisation or its opposite, desecuritisation, of technology. The contribution of this research is twofold. Firstly, securitisation of technology has implications for understanding defence and security in contemporary Europe. Secondly, identifying the actors involved in (de)securitisation allows for the analysis of their different roles in determining security discourses around technologies. The article builds on the literature on securitisation theory.
I create a typology of technology sharing policies based on the ease and breadth of technology transfer they facilitate and explain choices amongst these policies with an original theory called Threats Over Time Theory (TOTT). TOTT predicts decisionmakers share technology when they face severe threats – to either the survival of their state or the organization that they lead. When such threats exist, decisionmakers adjust the liberalness of their desired technology sharing policy based on two factors: the likelihood a future adversary may gain the technology because of the sharing – either through a leak or because the recipient itself becomes an adversary – and the speed at which the shared technology is likely to become obsolete. I test TOTT using cases during and between the World Wars – the most recent previous period of multipolar international competition.
In this article, we provide an introduction to simulation for cybersecurity and focus on three themes: (1) an overview of the cybersecurity domain; (2) a summary of notable simulation research efforts for cybersecurity; and (3) a proposed way forward on how simulations could broaden cybersecurity efforts.
This paper presents the holistic view of the security landscape and highlights the security threats, challenges, and risks to the smart city environment.
Based in computational social science, this paper argues for cybersecurity to adopt more proactive social and cognitive (non-kinetic) approaches to cyber and information defense. This protects the cognitive, attitudinal, and behavioral capacities required for a democracy to function by countering psychological mechanisms, such as confirmation bias and affective polarization, that trigger selective exposure, echo chambers, in-group tribalization, and out-group threat labelling. First, such policies advocate cyber hygiene through rapid alert detection networks and counter-disinformation command centers. Second, they advocate information hygiene through codes of online behavior stressing identity- and self-affirmation, as well as media literacy training and education programs. This supplements the bridging of the STEM and social sciences to present a policy framework for confronting information threats based on a blended understanding of computer science and engineering, social and cognitive psychology, political and communication science, and security studies.
This report investigates the growing role of defence software and AI/ML (machine learning) in military power now and in the medium term. It focuses on three goals: to define software-defined defence; to assess ongoing practices and processes in the development of defence software and AI/ML and to identify recurring challenges; and to explore and assess the ongoing efforts towards software-defined defence in five country case studies – China, France, Germany, the United Kingdom and the United States – and how Sino-American strategic competition is shaping them.
The results of successful hacking attacks against commercially available cybersecurity protection tools that had been touted as secure are distilled into a set of concepts that are applicable to many protection planning scenarios. The concepts, which explain why trust in those systems was misplaced, provide a framework both for analyzing known exploits and for evaluating proposed protection systems to predict likely potential vulnerabilities. The concepts are: 1) differentiating security threats into distinct classes; 2) a five-layer model of computing systems; 3) a payload-versus-protection paradigm; and 4) the nine Ds of cybersecurity, which present practical defensive tactics in an easily remembered scheme. An eavesdropping risk, inherent in many smartphones and notebook computers, is described to motivate improved practices and demonstrate real-world application of the concepts to predicting new vulnerabilities. Additionally, the use of the nine Ds is demonstrated as an analysis tool that permits ranking of the expected effectiveness of some potential countermeasures.