Author(s):
Antonio Carlo, Francesca Casamassima, Nebile Pelin Mantı, Bintang Alam Semesta W.A.M, Paola Breda, Tobias Rahloff, Nicolò Boschetti
Journal
Journal of Space Safety Engineering (Elsevier BV)
Abstract

Over the past decades, industries and governments have progressively come to rely on space data-centric and data-dependent systems. This has led to the emergence of malicious activities, also known as cyber threats, targeting such systems. To counter these threats, new technologies such as Artificial Intelligence (AI) have been implemented and deployed. Today, AI is highly capable of delivering fast, precise, and reliable command-and-control decision-making, as well as reliable vulnerability analysis using well-proven cutting-edge techniques, at least when applied to terrestrial applications; the same cannot yet be said for space applications. AI can also play a transformative role in the future of space cybersecurity, which raises questions about what to expect in the near future. The challenges and opportunities arising from the adoption of AI-based solutions to achieve cybersecurity, and later cyber defence, objectives in both civil and military operations require rethinking frameworks and ethical requirements, since most of these technologies are not designed to be used, or to overcome challenges, in space. Because of the highly contested and congested environment, as well as the highly interdisciplinary nature of threats to AI and Machine Learning (ML) technologies, including cybersecurity issues, a solid and open understanding of the technology itself is required, together with an understanding of its multidimensional uses and approaches. This includes the definition of legal and technical frameworks, ethical dimensions, and other concerns such as mission safety, national security, and technology development for future uses. The continuous endeavour to create a framework and regulate the interdependent uses of combined technologies such as AI and cybersecurity to counter "new" threats requires the investigation and development of "living concepts" to determine in advance the vulnerabilities of networks and of AI. This paper defines a cybersecurity risk and vulnerability taxonomy to enable the future application of AI in the space security field. Moreover, it assesses to what extent a network digital twin simulation can protect networks against relentless cyber-attacks in space and against the user and ground segments. Both concepts are applied to the case study of Earth Observation (EO) operations, which allows conclusions to be drawn based on the business impact (reputational, environmental, and social) of a cyber malicious activity. Since AI technologies are developing on a daily basis, a regulatory framework using ethical and technical approaches is proposed for this technology and its use in space. © 2023 International Association for the Advancement of Space Safety
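To make the idea of a risk and vulnerability taxonomy more concrete, the following is a minimal sketch of what a machine-readable taxonomy entry for an EO mission could look like. The segment categories, field names, example threats, and the simple impact score are illustrative assumptions for this page, not the taxonomy actually defined in the paper.

```python
"""Minimal sketch of a cybersecurity risk and vulnerability taxonomy for an
Earth Observation (EO) mission. Categories, fields, and scoring are assumed
for illustration, not taken from the paper."""

from dataclasses import dataclass, field
from enum import Enum


class Segment(Enum):
    """Mission segments a cyber threat may target (assumed categories)."""
    SPACE = "space"
    GROUND = "ground"
    USER = "user"
    LINK = "link"


@dataclass
class VulnerabilityEntry:
    """One taxonomy record linking a threat to segments and business impact."""
    threat: str                      # e.g. "telemetry spoofing"
    segments: list[Segment]          # which segments are exposed
    ai_related: bool                 # whether AI/ML components are the target
    impact: dict[str, int] = field(default_factory=dict)  # 0-5 per dimension

    def impact_score(self) -> int:
        """Aggregate business impact as a simple sum (illustrative metric)."""
        return sum(self.impact.values())


# Hypothetical EO-mission entries used only to exercise the structure.
taxonomy = [
    VulnerabilityEntry(
        threat="ground-station data injection",
        segments=[Segment.GROUND, Segment.USER],
        ai_related=False,
        impact={"reputational": 4, "environmental": 2, "social": 3},
    ),
    VulnerabilityEntry(
        threat="adversarial input to onboard ML classifier",
        segments=[Segment.SPACE],
        ai_related=True,
        impact={"reputational": 3, "environmental": 4, "social": 2},
    ),
]

# Rank entries so the highest-impact vulnerabilities surface first.
for entry in sorted(taxonomy, key=VulnerabilityEntry.impact_score, reverse=True):
    print(f"{entry.threat}: segments={[s.value for s in entry.segments]}, "
          f"score={entry.impact_score()}")
```

A structure of this kind could also serve as the input to a network digital twin exercise, since each entry names the segments a simulated attack would have to traverse; the scoring scheme here is deliberately simplistic and would need to reflect the business-impact dimensions the paper analyses.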

Concluding remarks
The world is transitioning into a new era. The importance of space as a military, strategic, and economic domain requires policy-making and legal regulation of responsibilities for governments and a growing number of private space actors, as well as for outside adversaries using emerging technologies in space and against space assets. AI technologies are expected to be used more extensively in future space missions, and the expanded use of these technologies, together with the cybersecurity concerns they raise, adds further questions about the security of future space missions and the applicability of existing norms to new technology-driven challenges. However, cyber-attacks on space assets differ from cyber-attacks targeting other kinds of critical infrastructure. Because numerous States and, increasingly, private actors are engaged in space activities, and given the growth of services provided from space, regulating these new relationships requires discussions that go beyond the existing frameworks provided by international space law.

AI is enabling progress and innovation in the space sector and helps to provide robust solutions to the most relevant problems. Creating processes and frameworks for the use of AI technologies therefore requires taking the particularities of the technology into account, in order to ensure clarity on normative and policy grounds and to respond to cybersecurity requirements in a timely manner. Neither existing space policy nor cybersecurity policy is prepared for the challenges created by the meshing of space, cyberspace, and emerging technologies, especially technologies designed for space assets and the use of emerging technologies in space activities. To ensure the adaptable and compatible use of emerging technologies alongside other technologies in complex environments, the adoption of responsive universal principles and regulatory frameworks becomes an important agenda item for authorities, governments, and industry. In the absence of dialogue and of formal policy and regulation, it will become difficult to use emerging technologies, to minimise and mitigate risks, to develop and use technologies for future missions within a security framework, and to build robust defences against emerging technological threats.

It is therefore essential to note that there are important technological challenges, such as using AI-enabled DT technologies at full performance. These challenges may depend on the scale and integration complexity of the applications, beyond their use in space missions. The main challenges to consider are issues related to data, including trust, privacy, cybersecurity, convergence and governance, acquisition, and large-scale analysis. While the DT promises many advantages, the technology is still under development and will remain far from maturity in the near future. The existing limitations on more mature and complex implementations of DTs across all domains, including both space and cyber, will also require overcoming communication-network-related obstacles on the technical side, which creates a further difficulty for the widespread adoption of this technology and makes accessibility harder. Trust in technology is another challenge, since the information flowing from various levels of indicator systems complicates the development of common policies and standards. The lack of standards, frameworks, and regulations for DT implementations is therefore one of the main challenges and has many aspects to consider. For complex implementations of this technology in specific environments, regulation will become more difficult in the future, considering the problems of access to sensitive data by private and military actors and the adoption of uniform methodologies for data security and authenticity.

Reference details

DOI
10.1016/j.jsse.2023.08.002
Resource type
Journal Article
Year of Publication
2023
ISSN Number
2468-8967
Publication Area
Dual-use cybersecurity
Date Published
2023-12

How to cite this reference:

Carlo, A., Casamassima, F., Mantı, N. P., W.A.M., B. A. S., Breda, P., Rahloff, T., & Boschetti, N. (2023). The importance of cybersecurity frameworks to regulate emergent AI technologies for space applications. Journal of Space Safety Engineering. Elsevier BV. https://doi.org/10.1016/j.jsse.2023.08.002