Today, Unmanned Aerial Vehicles (UAVs), also known as drones, are increasingly used by organizations, businesses and governments in a variety of military and civilian applications, including reconnaissance, border surveillance, port security, transportation, public safety surveillance, agriculture, scientific research and rescue. However, drone cybersecurity has become a major concern due to the growing risk of cyberattacks aimed at compromising the confidentiality, integrity and availability of drone systems. These cyberattacks can have serious consequences, such as the disclosure or theft of sensitive data, the loss of drones and the disruption of drone performance. In the existing literature, little work has been devoted to the cybersecurity of UAV systems. To fill this gap, a taxonomy of cyberattacks on UAV systems is proposed, focusing on three main categories: interception attacks against confidentiality, modification or fabrication attacks against integrity, and disruption attacks against data availability. Next, a survey of defense techniques that can be used to protect UAV systems is carried out. Finally, a discussion is held on technologies for improving drone cybersecurity, such as Blockchain and Machine Learning, as well as the challenges and future directions of research. © 2023 EverScience Publications. All rights reserved.
This is a systematic review of over one hundred research papers on machine learning methods applied to defensive and offensive cybersecurity. In contrast to previous reviews, each of which focused on a fragment of the research topics in this area, this paper systematically and comprehensively combines domain knowledge into a single review. Ultimately, it seeks to provide a foundation for researchers who wish to delve into the field of machine learning for cybersecurity.
This research employs both qualitative and quantitative systems engineering models that help prioritize key stakeholder values and shape recommendations for future consideration. The goal of this research is to equip the US Army with a framework that can successfully integrate and implement a civilian reserve cyber force prepared for 21st-century warfare in cyberspace.
Public and academic knowledge of cyber conflict relies heavily on data from commercial threat reporting. There are reasons to be concerned that these data provide a distorted view of cyber threat activity. Commercial cybersecurity firms focus on only a subset of the universe of threats, and they report publicly on only a subset of that subset. High-end threats to high-profile victims are prioritized in commercial reporting, while threats to civil society organizations, which lack the resources to pay for high-end cyber defense, tend to be neglected or entirely bracketed. This selection bias not only hampers scholarship on cybersecurity but also has concerning consequences for democracy. We present and analyze an original dataset of available public reporting by the private sector together with independent research centers. We also present three case studies tracing reporting patterns on a cyber operation targeting civil society. Our findings confirm the neglect of civil society threats, supporting the hypothesis that the commercial interests of firms produce a systematic bias in reporting, which functions as much as advertising as intelligence. The result is a truncated sample of cyber conflict that underrepresents civil society targeting and distorts academic debate as well as public policy. © 2020 The Author(s). Published with license by Taylor & Francis Group, LLC.
Findings are reported from a four-year study of cybersecurity competency assessment and development achieved through the design of cyber defense competitions.
Revelations about the Stuxnet computer worm raise the possibility of damaging cyberattacks on the US electrical power grid or other critical infrastructure. Several approaches might mitigate cyberthreats to critical infrastructure and information technology targets. Among them are efforts to defend the information systems that support important infrastructure, to deter enemies from cyberattacks on those systems, to reduce or eliminate the forces that might be used in cyberattacks, or to reach arms control agreements that limit or proscribe such cyberattacks.
The article examines one narrowly focused aspect of government interagency cooperation on cyber defense that serves as a basis for achieving cyber power. It reviews civil-military interagency cooperation and aims to identify factors that could jeopardize it. First, it provides a theoretical background for the research; then, based on interviews and surveys, the factors with the highest negative impact are identified. Based on this research, the most significant challenges in bridging the gap between the civilian and military worlds appear to be power and budget struggles and a lack of political direction on cyber matters from leaders. © 2021 Taylor & Francis Group, LLC.
Citizen Lab published research analyzing the use of Internet filtering technology in ten countries of interest: Afghanistan, Bahrain, India, Kuwait, Pakistan, Qatar, Somalia, Sudan, United Arab Emirates, and Yemen.1 In these countries, technology produced by a company called Netsweeper is implemented by national-level, consumer-facing Internet Service Providers (ISPs) to filter online content. Choices made by Netsweeper have a significant impact on the types of websites that users in a given country can ultimately access.
The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks caused by adversarial samples have hindered the large-scale deployment of deep learning. In these scenarios, adversarial perturbations, imperceptible to human eyes, significantly decrease the model’s final performance. Many papers have been published on adversarial attacks and their countermeasures in the realm of deep learning. Most focus on evasion attacks, where the adversarial examples are found at test time, as opposed to poisoning attacks where poisoned data is inserted into the training data. Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. Hence, with this article, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity to provide a lifecycle for adversarial attacks and defenses.
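The evasion attacks described above can be illustrated with a minimal sketch of a fast-gradient-sign-method (FGSM)-style perturbation. To keep the example self-contained, a toy logistic-regression classifier stands in for a deep network; the weights, input and epsilon below are invented for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained model": fixed weights w and bias b, decision threshold 0.5.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

# A clean input that the model classifies as positive (probability > 0.5).
x = np.array([1.0, 0.2])
p_clean = predict(x)

# For logistic loss with true label y = 1, the input gradient is (p - y) * w.
y = 1.0
grad_x = (p_clean - y) * w

# FGSM-style step: shift every feature by epsilon in the gradient's sign
# direction, a small per-feature change that can flip the prediction.
epsilon = 0.6
x_adv = x + epsilon * np.sign(grad_x)
p_adv = predict(x_adv)
```

Here `p_clean` is above 0.5 while `p_adv` falls below it, so the bounded per-feature perturbation flips the model's decision, which is the essence of a test-time evasion attack; a poisoning attack would instead tamper with the training data before the weights are fit.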
Modern artificial intelligence is inherently paradoxical in many ways. While AI aims to increase automation, it also requires more intimate human involvement to reflect on the insights generated (automation paradox). While AI results in job displacement, it also creates new jobs, some simply to provide the necessary support systems for those newly unemployed (transition paradox). And as generative AI takes away control over the creative process, it also offers new creative opportunities (creativity paradox). This article considers another paradox, which relates to the fact that computational systems created using AI can be used both for public good in civilian applications and for harm across a range of application areas and settings. This contradiction is explored within an organizational and governmental context, where modern AI relies on data that might be externally or internally sourced.