Charl van Der Walt of SecureData SensePost examines some of the major stories that have hit the headlines recently and the implications for the future. And he identifies some key trends, including the role that government policy will play in the security of nations and individual citizens.
Over the past decades, industries and governments have come to rely on space-based, data-centric and data-dependent systems. This reliance has been accompanied by the emergence of malicious activities, commonly known as cyber-threats, targeting such systems. To counter these threats, new technologies such as Artificial Intelligence (AI) have been developed and deployed. Today, AI can deliver fast, precise, and reliable command-and-control decision-making, as well as reliable vulnerability analysis using well-proven, cutting-edge techniques, at least in terrestrial applications; the same may not yet be true for space applications. AI can also play a transformative role in the future of space cybersecurity, raising questions about what to expect in the near term. The challenges and opportunities arising from the adoption of AI-based solutions for cybersecurity, and later cyber-defence, objectives in both civil and military operations call for a rethinking of frameworks and ethical requirements, since most of these technologies were not designed to operate, or to overcome challenges, in space. Because of the highly contested and congested space environment, and the highly interdisciplinary nature of threats to AI and Machine Learning (ML) technologies, including cybersecurity issues, a solid and open understanding of the technology itself is required, as well as of its multidimensional uses and approaches. This includes the definition of legal and technical frameworks, ethical dimensions, and other concerns such as mission safety, national security, and technology development for future uses. The continuous endeavour to create a framework and to regulate interdependent uses of combined technologies such as AI and cybersecurity against “new” threats requires the investigation and development of “living concepts” that determine in advance the vulnerabilities of networks and of AI itself.
This paper defines a cybersecurity risk and vulnerability taxonomy to enable the future application of AI in the space security field. It also assesses the extent to which a network digital-twin simulation can protect networks against relentless cyber-attacks targeting space assets, users, and ground segments. Both concepts are applied to the case study of Earth Observation (EO) operations, which allows conclusions to be drawn about the business impact (reputational, environmental, and social) of malicious cyber activity. Since AI technologies are developing daily, a regulatory framework combining ethical and technical approaches is proposed for this technology and its use in space. © 2023 International Association for the Advancement of Space Safety
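As a rough illustration of what such a taxonomy might look like in practice, the sketch below encodes vulnerabilities by segment (space, ground, user) and by business-impact dimension (reputational, environmental, social), both drawn from the abstract; the class names, example entries, and lookup method are hypothetical placeholders, not the paper's actual taxonomy:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a risk/vulnerability taxonomy for space systems.
# Segments and impact dimensions follow the abstract; entries are invented.
@dataclass
class Vulnerability:
    name: str
    segment: str                       # "space", "ground", or "user"
    impact: set = field(default_factory=set)  # subset of {"reputational", "environmental", "social"}

@dataclass
class Taxonomy:
    entries: list = field(default_factory=list)

    def add(self, vuln: Vulnerability) -> None:
        self.entries.append(vuln)

    def by_segment(self, segment: str) -> list:
        """Return all catalogued vulnerabilities affecting one segment."""
        return [v for v in self.entries if v.segment == segment]

tax = Taxonomy()
tax.add(Vulnerability("telemetry spoofing", "space", {"environmental"}))
tax.add(Vulnerability("ground-station intrusion", "ground", {"reputational", "social"}))
print([v.name for v in tax.by_segment("ground")])
```

A structure like this lets an AI-based analysis tool query vulnerabilities per segment and aggregate impact dimensions, which is the kind of machine-readable encoding a taxonomy needs before it can drive automated assessment.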
Theoretical management thought posits that the challenge of addressing cyber risks has prompted a reevaluation of business resilience, particularly as organizations increasingly adopt digital transformation strategies. To investigate this proposition empirically, this paper examines cybersecurity measures and their influence on business resilience. Employing a cross-sectional research design and a quantitative methodology, the study collected data from 255 respondents, comprising entrepreneurial SME managers. Structural equation modeling techniques were used to scrutinize both the measurement model and the structural model.
A multitude of studies have suggested potential factors influencing internet security awareness (ISA). Some, for example, used GDP and nationality to explain differing ISA levels across countries but yielded inconsistent results. This study proposes an extended knowledge-attitude-behaviour (KAB) model, which postulates that the education level of society at large moderates the relationship between knowledge and attitude. Using exposure to a full-time working environment as a proxy for this influence, it was hypothesized that significant differences would be found in the attitude and behaviour dimensions across groups with different degrees of exposure, and that exposure to full-time work plays a moderating role in KAB. To test these hypotheses, a large-scale survey adopting the Human Aspects of Information Security Questionnaire (HAIS-Q) was conducted with three groups of participants: 852 Year 1–3 students, 325 final-year students (age 18–25), and 475 full-time employees (age 18–50) in two cities in China.
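A moderating role of the kind hypothesized above is conventionally tested with a moderated regression, in which an interaction term captures how exposure changes the knowledge-to-attitude slope. The sketch below illustrates the technique on synthetic data; all coefficients and variable names are invented for the example, and the paper's actual analysis may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
knowledge = rng.normal(size=n)
exposure = rng.integers(0, 2, size=n)        # 1 = exposure to full-time work
# Synthetic attitude scores: exposure strengthens the knowledge->attitude slope
attitude = (0.4 * knowledge + 0.2 * exposure
            + 0.3 * knowledge * exposure     # true moderation effect
            + rng.normal(scale=0.5, size=n))

# Moderated regression: intercept, main effects, and the interaction term,
# whose coefficient (beta[3]) tests the moderating role of exposure
X = np.column_stack([np.ones(n), knowledge, exposure, knowledge * exposure])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
print(beta)  # beta[3] should recover roughly the 0.3 moderation effect
```

A significant interaction coefficient is the statistical signature of moderation: the knowledge-attitude relationship differs between the exposed and unexposed groups.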
As individuals, businesses and governments increasingly rely on digital devices and IT, cyber attacks have grown in number, scale and impact, and the attack surface for such attacks is expanding. First, this paper lists some characteristics of the cybersecurity and AI landscape that are relevant to policy and governance. Second, it reviews the ways AI may affect cybersecurity: through vulnerabilities in AI models and by enabling cyber offence and defence. Third, it surveys current governance and policy initiatives at the level of international and multilateral institutions, nation-states, industry, and the computer science and engineering communities. Finally, it explores open questions and recommendations relevant to key stakeholders, including the public, the computing community, and policymakers. Key issues include the extent to which international law applies to cyberspace, how AI will alter the offence-defence balance, the boundaries for cyber operations, and how to incentivize vulnerability disclosure.
Cyber warfare and the advent of computer network operations have forced us to look again at the concept of the military objective. The definition set out in Article 52(2) of Additional Protocol I – that an object must by its nature, location, purpose or use, make an effective contribution to military action – is accepted as customary international law; its application in the cyber context, however, raises a number of issues which are examined in this article.
The NIST Cybersecurity Framework (CSF) 2.0 provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks. It offers a taxonomy of high-level cybersecurity outcomes that can be used by any organization — regardless of its size, sector, or maturity — to better understand, assess, prioritize, and communicate its cybersecurity efforts. The CSF does not prescribe how outcomes should be achieved. Rather, it links to online resources that provide additional guidance on practices and controls that could be used to achieve those outcomes. This document describes CSF 2.0, its components, and some of the many ways that it can be used.
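The top level of that taxonomy is six Functions whose two-letter codes prefix every outcome identifier, which can be sketched as a simple lookup. This is a minimal sketch: the Function names and codes are those of CSF 2.0, while the sample outcome identifiers in the usage lines are only illustrative:

```python
# The six CSF 2.0 Functions, keyed by the two-letter code that prefixes
# every outcome identifier in the framework's taxonomy.
CSF_FUNCTIONS = {
    "GV": "Govern",
    "ID": "Identify",
    "PR": "Protect",
    "DE": "Detect",
    "RS": "Respond",
    "RC": "Recover",
}

def function_of(outcome_id: str) -> str:
    """Map a CSF outcome identifier (e.g. 'ID.AM-01') to its Function."""
    prefix = outcome_id.split(".", 1)[0]
    return CSF_FUNCTIONS[prefix]

print(function_of("ID.AM-01"))  # Identify
print(function_of("DE.CM-02"))  # Detect
```

Because the CSF only names outcomes rather than prescribing controls, a mapping like this is often the first step organizations take when tagging their existing controls and policies against the framework.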
In liberal democratic countries, the role of the state in cybersecurity is a politically contested space. We investigate that role along three dimensions. The first is theoretical: we review the existing cybersecurity literature, showing that the international affairs literature almost exclusively highlights the role of the state as a security actor. We argue that this view is too narrow and risks limiting the discussion to only a few aspects of what cybersecurity entails. The second is empirical: we analyse policy development, showing the diversity of roles the state imagines for itself. The state occupies six different roles in cybersecurity: (1) security guarantor, (2) legislator and regulator, (3) supporter and representative of the whole of society, (4) security partner, (5) knowledge generator and distributor, and (6) threat actor. The third dimension is normative: we investigate what the role of the state should be.
The increasing demand for cybersecurity has been met by a global supply, namely, a rapidly growing market of private companies that offer their services worldwide. Cybersecurity firms develop both defensive (e.g. protection of own networks) and offensive innovations (e.g. development of zero days), whereby they provide operational capacities and expertise to overstrained states. Yet, there is hardly any systematic knowledge of these new cybersecurity warriors to date. Who are they, and how can we differentiate them? This contribution to the special issue seeks to give an initial overview of the coordination between public and private actors in cyberspace. I thus explore these new private security forces by mapping the emerging market for these goods and services. The analysis develops a generic typology from a newly generated data set of almost one hundred companies. As a result of this stock-taking exercise, I suggest how to theorize public-private coordination as network relationships in order to provide a number of preliminary insights into the rise of this ‘brave new industry’ and to point out critical implications for the future of private security forces.
This paper examines the multifaceted role of AI in cybersecurity, elucidating its applications in threat detection, vulnerability assessment, incident response, and predictive analysis.