Journal of Digital Technologies and Law

Artificial Intelligence and International Space Law: Dual-Use Challenges

https://doi.org/10.21202/jdtl.2026.5

EDN: KUZIEI

Abstract

Objective: to propose an effective legal mechanism for regulating the use of artificial intelligence in the space sector with a focus on preventing harmful effects and preserving the peaceful and purposeful use of technology.

Methods: the research uses the methods of comparative legal analysis, doctrinal legal reasoning, and scenario analysis of escalation risks. It provides a normative and historical analysis of the main treaties of international space law and of state practice, comparing them with the result-oriented approaches of international environmental law. The author additionally relies on the analysis of precedent-setting documents and public policy initiatives illustrating actual practices of militarization and commercialization of space infrastructure.

Results: the study demonstrates that existing international treaty mechanisms do not provide sufficient regulation for dual-use artificial intelligence systems. It identifies gaps in definitions, codification of responsibility, and control mechanisms for autonomous actions. An alternative regulatory approach is proposed, focused not on regulating the technology per se, but on prohibiting specific harmful results (formation of orbital debris, uncontrolled autonomous attacks, signal suppression, etc.). Based on this logic, the author developed a concept of an international agreement with a mandatory annex listing prohibited uses of artificial intelligence and mechanisms for holding states accountable.

Scientific novelty: a result-oriented approach to regulating artificial intelligence in space is formalized and justified from a legal point of view, adapting the prohibition model to modern dual-use threats. A typology of prohibited consequences is proposed and correlated with existing institutions of international responsibility.

Practical significance: the proposal may serve as the basis for the development of an international treaty or a supplement to international space law. It provides a tool for national licensing and control, facilitates the coordination of positions between states and private operators, and aims to preserve innovation while minimizing risks to the sustainability of space activities.

For citations:


Koskina A. Artificial Intelligence and International Space Law: Dual-Use Challenges. Journal of Digital Technologies and Law. 2026;4(1):98-124. https://doi.org/10.21202/jdtl.2026.5. EDN: KUZIEI

Introduction

Space has become increasingly vital to States’ economic and strategic interests, serving both commercial and protective functions. As Punnala et al. underline (Punnala et al., 2024), it now forms an “indispensable backbone of our global infrastructure”, enabling critical applications ranging from national security to civilian needs. Indeed, to mention only a few examples, remote sensing has transformed numerous fields – such as agriculture, disaster management, and communications – demonstrating space’s ever-growing importance in modern society (Salin, 1992).

The above development has had two critical consequences of relevance to the present study. First, it generated increased interest in space activities, attracting more participants motivated by the sector’s emerging economic potential. As a result, the growing number of entities engaged in space activities invested ever more in innovations enhancing safety, efficiency, and cost-effectiveness, although considerable potential remains for further innovation and optimization in these areas (Aglietti, 2020; Enholm, 2024). In turn, the conditions for conducting space activities further improved, thanks especially to artificial intelligence (AI), which plays a pivotal role in optimizing space operations, attracting further investment in space and expanding accessibility. As a corollary, space infrastructure became strategically necessary, but also increasingly important and expensive, necessitating robust protection; at the same time, a wide range of risks emerged, from orbital debris and accidental collisions to potential threats posed by adversarial actors amid intensifying geopolitical competition in the new space race.

Hence, the second (and even more significant) consequence is the accelerating militarization of space1, driven by terrestrial and celestial geopolitical tensions – including competition over asteroid resources and potential mining rights – as well as by states’ strategic need to assert dominance and protect their substantial investments in space infrastructure. This is distinct from “space weaponization”2, defined here as placing orbital or suborbital satellites to attack enemy satellites, using ground-based direct-ascent missiles to shoot down spacecraft, etc.3 Still, despite the ongoing debate over the exact differentiation between space militarization and weaponization4, this trend has emerged as a major issue in international discussions, especially given that new technologies, including AI, allow for novel military uses5: e.g., reconnaissance satellite systems, space infrastructure used to support terrestrial conflicts, dual-use space objects, etc. (noting that many of these appear to pose problems even as to their registration under existing space governance frameworks (Berrang, 2025)). In essence, this development raises critical questions about the future of space security, the need to protect space infrastructure and, finally, the adequacy of existing regulatory regimes.

Indeed, despite their continued applicability, the foundational space treaties – most notably the Outer Space Treaty (OST), widely regarded as the cornerstone of international space law (ISL) – were formulated in the 1960s and 1970s. As such, they are now widely regarded as insufficient to meet the complexities of contemporary space activities. A fundamental aspect of this inadequacy stems from the fact that critical concepts, such as the definition of a “space object”, were never fully elaborated. This lack of precision meant that ISL could not (and was possibly not ready to) evolve alongside rapid technological progress, which resulted in obsolete definitions. In response, the existing ISL framework was complemented – at various points – through bottom-up initiatives. For instance, to regulate the space debris issue, which was not addressed by the above-mentioned treaties, instruments adopted through this particular procedure are now the point of reference (see e.g. the Inter-Agency Space Debris Coordination Committee (IADC) guidelines6). Still, the urgency of these legal gaps becomes more critical when considering the two developments above: i.e., the increasing importance of space, based on the growing use of new technologies such as AI for space operations (for which existing legal frameworks provide no clear guidance); and the escalating militarization of space, which introduces novel risks to both terrestrial and orbital infrastructure in ways that the original treaty drafters could not have anticipated.

Therefore, given the current state of space activities – and taking into account, notably, their growing strategic importance and militarization – a critical question emerges: how should critical space technologies be regulated, considering that both economic and defense-related space activities largely depend on them? While this challenge is longstanding, its urgency and importance have intensified with the advent of AI7, a dual-use technology capable of optimizing operations to unprecedented levels. Such advancements could result in exponentially amplified outcomes, whether positive or negative. However, the foundational ISL treaties adopted in the 1960s–70s failed to establish an effective governance framework for these systems; and today, the need to regulate such high-impact dual-use systems has become even more critical, due to rising geopolitical tensions coupled with humanity’s increasing reliance on space infrastructure.

Against this background, and following on from the Introduction (Section 1), Section 2 of this paper thoroughly analyzes AI’s dual-use capabilities and their transformative potential for both offensive and defensive operations. Section 3 conducts a critical analysis of the regulatory gaps in existing ISL, with a particular focus on the limitations of the OST framework in addressing emerging technological realities. Section 4 then assesses the risks of deploying AI in space during potential space conflict situations and analyzes risk escalation pathways. Building on that, Section 5 proposes a new regulatory approach aimed at mitigating the risks of AI being employed in ways that could compromise space security and sustainability. Section 6 presents the study’s concluding remarks.

1. Asset protection or warfare tool? The extreme dual-use potential of AI in space

From the outset, space was used for military purposes, such as reconnaissance operations (Muszyński-Sulima, 2023). During the Cold War, for example, the U.S. and the USSR deployed hundreds of satellites, many of which were secretly designed for intelligence-gathering. Yet, at the same time, space technology expanded beyond purely military applications and became indispensable for civilian uses as well. Indeed, as Bescheron and Gasnier note (Bescheron & Gasnier, 2024), the Cold War rivalry accelerated developments in missile and satellite technologies, leading also to other important dual-use innovations, like the Global Positioning System (GPS), a military asset soon used for civilian purposes. Space maintained its strategic military utility throughout the following years; and States recognized, e.g. through repeated debates on the Prevention of an Arms Race in Outer Space (PAROS) resolution at the UN General Assembly, both the importance of existing military uses and the collective desire to prevent space’s further weaponization8.

In recent decades, the militarization of space has escalated dramatically. In fact, the ongoing “new space race” among the major spacefaring states – i.e., a race spanning the economic, environmental, communications and scientific domains – appears to carry ever more military implications (Bescheron & Gasnier, 2024). By way of illustration, space infrastructure is increasingly used in the context of conflicts on Earth, as was the case in the 1990 Gulf War (in which space assets were deployed at unprecedented scale) and, even more so, during the war in Ukraine (Berrang, 2025). Most alarmingly, this militarization is still accelerating. A critical development is the Trump administration’s recent announcement of the ‘Golden Dome’ initiative9, a $175 billion missile defense system incorporating ground- but also space-based interceptors10. Such developments underscore the importance of space (and of space infrastructure) for military operations, and raise questions about arms control but also, even more critically, about the protection of space infrastructure and the sustainable use of space and its resources.

In fact, among the most striking developments in this domain is the rapidly accelerating deployment of space-based defensive systems. As the Secure World Foundation underlines11: “The existence of counterspace capabilities is not new, but the circumstances surrounding them are”12; there are now greater incentives than ever to develop (and potentially to use) offensive counterspace weapons, with consequences that could ripple across the global economy due to our growing dependence on space infrastructure13. What makes this situation particularly significant, though, is the transformative role of Artificial Intelligence (AI). AI dramatically enhances both the effectiveness and efficiency of space defense investments. By way of illustration, some scholars note that Chinese researchers have already conducted AI-powered simulations of attacks on satellites to evaluate disruption capabilities; this suggests that similar projects are underway elsewhere (or soon will be), underscoring how AI is revolutionizing space warfare. Indeed, this new reality is already characterized as potentially paving the way for a Hyperwar (“AI, and the form of warfare it enables – Hyperwar – is fundamentally reshaping military strategy and operations. Its influence spans multiple domains, from enhancing situational awareness and sensor augmentation to predictive maintenance and autonomous decision-making”14). At the same time, it fuels an escalating cycle: as space systems become more complex and expensive, the need to protect them grows proportionally. Indeed, given the global economy’s critical dependence on space infrastructure, nations, but also private entities, face mounting pressure to allocate greater resources to the security of space assets and to space sustainability.

Amid these developments, AI has emerged as both the most promising solution and the most formidable challenge in space security – a paradox that reflects the dual-use nature of most space technology itself. On the one hand, AI systems are proving remarkably effective for Space Situational Awareness (SSA), such as for mapping orbital debris (Vansia, 2024) or for predicting and avoiding potential collisions15. On the other hand, the same AI capabilities may equally be utilized for offensive purposes, including the identification and targeting of adversary satellites16. This duality points to a fundamental security dilemma in the space domain (Shmigol, 2022), as the inherent ambiguity of dual-use systems creates a critical vagueness in space operations: civilian communications satellites can be repurposed for military reconnaissance; scientific probes may carry weapons-capable elements; and ostensibly commercial space stations could well host strategic assets. In fact, recent conflicts have demonstrated how nominally civilian space infrastructure can be rapidly weaponized (Berrang, 2025), while some systems or space objects used for military purposes may – deliberately – never have been registered or declared as such (Muszyński-Sulima, 2023).

In other words, the distinction between civilian and weaponized technology used in outer space is inherently ambiguous17. Artificial intelligence compounds this uncertainty: its effects depend entirely on implementation, as it may be utilized to enable both defensive operations and offensive capabilities (Shmigol, 2022; Bernat, 2019). Importantly, this challenge is intensifying with the growing proliferation of small satellites (which democratize access to space), as they can easily be used as weapons (Shmigol, 2022; Bernat, 2019). Specifically, both emerging states and non-state actors can deploy low-cost microsatellites capable of kinetic collisions, electronic jamming, or other disruptive actions; not to mention that some launching States or entities also try to avoid registering them (Berrang, 2025), which evidently creates additional layers of strategic risk by operating outside the established frameworks. In this sense, the proliferation of small satellites has lowered the cost and technical barriers to space-based weapons, enabling even smaller nations to deploy orbital threats, and AI has exacerbated this challenge due to its inherent dual-use nature.

Hence, the pressing need to regulate AI in space efficiently arises from its dual capacity for both great harm and protection. On the one hand, it could enable or even escalate conflict in orbit (e.g., by facilitating space warfare); on the other hand, it is equally important for protecting critical space infrastructure and/or enhancing deterrence, while enabling rapid, data-driven decision-making. In essence, AI is driving space capabilities in two divergent directions: it redefines both warfare and security practices. Still, AI remains unregulated in ISL, which demands urgent scrutiny, particularly given the foundational principles of ISL: the OST established space as a domain for “peaceful purposes”, although this fundamental concept was never clearly defined. Thus, as AI challenges the traditional classifications of both space objects and weapon-based threats in space (it is not inherently a weapon, but it may produce equally catastrophic effects), an examination of its regulatory challenges is not only timely but imperative.

2. Innovation outrunning regulation: AI’s role in widening ISL’s governance gaps

As presented above, AI is fundamentally reshaping space activities, operating along two parallel and, to a certain extent, contradictory paths. Indeed, (i) AI enables advanced space weapons, such as autonomous anti-satellite systems and AI-guided weapons, raising the risks of escalation and conflict. Nevertheless, (ii) it may likewise improve surveillance and tracking, as well as decision-making capabilities, offering potential safeguards against space threats. This critical duality creates important gaps in the existing ISL framework. As a result, governance challenges emerge, leaving key questions that must be addressed.

First of all, it is necessary to clarify that ISL takes a specific, and to a certain extent restrictive, approach to defining what constitutes launchable and operable man-made technology in space. In particular, the foundational treaties – namely, the Outer Space Treaty (OST) adopted in 1967, the Rescue Agreement (1968), the Liability Convention (1972), the Registration Convention (1975) and the Moon Agreement (1979) – created a comprehensive legal framework governing human activity in space. In this context, though, the concept of “technology” was never mentioned; rather, reference was made only to the objects launched into and operating in space. At the same time, the above treaties never provided an explicit definition of “space objects”: instead of establishing categorical parameters, they adopted a descriptive approach (see, for instance, the first articles of both the Liability Convention and the Registration Convention), describing such objects through numerous characteristics which, however, vary across the different agreements.

Therefore, space objects are all regarded as man-made, tangible assets, given that ISL repeatedly refers to physical, human-made constructs launched into or operating in outer space: the OST refers, in Arts. VII and VIII, to “objects” launched into space, implying artificial, constructed entities; the Liability Convention and the Registration Convention (Art. I) include “component parts” and “launch vehicles”, reinforcing their material nature; and the Moon Agreement (Arts. 8, 11, 12) refers to “space vehicles, equipment, facilities, stations, and installations”, confirming that space objects are tangible assets. However, the ISL treaties did not restrict what qualifies as a space object on the basis of size, mass, or complexity. For instance, the Registration Convention (Arts. I and IV) requires reporting on “space objects” in general, regardless of dimensions, and the OST (Art. VII) holds states liable for damage caused by “such object or its component parts” irrespective of characteristics such as size. At the same time, it is explicitly stipulated that space objects are not necessarily single, indivisible units, as they may consist of (several) components; e.g., structures that separate in orbit or upon landing (see the first articles of both the Liability Convention and the Registration Convention; Art. VIII of the OST; and Art. 13 of the Moon Agreement). Following on from that, they can be assembled or expanded, i.e., to form larger structures, such as on the Moon or other celestial bodies; indeed, explicit reference is made to installations, stations or facilities (see Art. XII of the OST; Arts. 9, 11 and 12 of the Moon Agreement – note that the Moon Agreement makes specific reference to unmanned stations on the Moon in Art. 9, and to structures connected with its subsurface in Art. 11, potentially encompassing future structures on the Moon). Consequently, man-made objects sent into space may be both (i) capable of motion – orbital or non-orbital (i.e., beyond Earth orbit, see Art. II of the Registration Convention) – as Art. IV of the Registration Convention requires reporting of orbital parameters (nodal period, inclination, apogee, perigee), which in effect confirms that space objects are expected to have velocity and trajectory; and (ii) permanently stationed (e.g., Art. 8 of the Moon Agreement mentions “facilities, stations and installations”, which may be located anywhere on the surface). Finally, space objects may well carry dangerous payloads, where they contain hazardous or radioactive materials (specific mention is made in Art. 7 of the Moon Agreement) or cause damage. Hence, it was precisely established that space objects remain under the legal authority of the launching state, regardless of their location (Arts. VII and VIII of the OST; Art. 12 of the Moon Agreement).

In conclusion, reducing – in ISL – all human-made space constructs to “space objects” (in all their tangible forms, as noted) overlooked the escalating value of technology and data in contemporary space activities. At the same time, this approach created a clear regulatory void regarding AI governance in space. Be that as it may, attempting to establish an internationally agreed definition of AI – in order to adopt rules for its uses in space – and to agree, first, on whether it fits into the established approach to “space objects” or whether a new term should be adopted, would now require rethinking all of ISL’s foundational concepts, as all the existing space law treaties are based on this traditional framework governing human-made physical constructs.

In fact, the rigid and limited definition of a “space object” makes clear that the ISL treaties – all drafted in an earlier technological era – are essentially ill-equipped to regulate modern space activities; indeed, space activities rest on technologies destined to develop continuously over time, and these are now fundamentally different from the systems for which the treaties were originally designed. As a result, ISL now faces important challenges in regulating current technologies with full efficiency (not to mention the unresolved initial question of whether AI should even be classified as a space object18). To give an example from a slightly different background, current space systems, such as mega-constellations, present important difficulties in the application of ISL (Byers & Boley, 2023; Abbas, 2025), and these issues are further compounded by the proliferation of small satellites, particularly those under private ownership, which introduces several unresolved legal and operational ambiguities19 (Hertzfeld, 2021). In this sense, critical aspects of space governance – including registration and liability – remain inadequately addressed. Yet the most pressing complications for ISL appear to stem from the increasing integration of AI into space activities, which introduces both a novel operational dimension and unprecedented ethical dilemmas to space technology governance.

To mention only a few concerns, scholars emphasize that AI has highly critical features. Hence, it will require specific authorization – through national licensing – and ongoing State supervision to ensure compliance with ISL; specifically, States should guarantee the continuous security of AI-equipped spacecraft, protecting automation, navigation, and communication systems from hacking or from creating risks (Martin & Freeland, 2021). The issue is inherently complex, but it becomes even more challenging due to two critical factors. First, AI’s autonomous decision-making capacity introduces uncertainty, especially given the lack of clarity around how much autonomy operators will delegate to AI systems – and whether they will provide full information on said autonomy (particularly considering that private operators have already attempted to circumvent ISL rules, as seen in the tardigrades incident (Gundersen, 2021)). Even more critical, however, is AI’s dual-use nature, which also raises security and transparency concerns, particularly for military or dual-use satellites in an increasingly militarized space domain; hence, a key question is how easily an AI system in space could transition from civilian to military operations, shifting from defensive to offensive functions. This issue is further complicated by the difficulty of ensuring that AI operates correctly in rare or high-stakes scenarios, such as space conflict, where data collection and algorithmic training face severe limitations (Koskina, 2023).

Following on from the above, it appears that the existing regulatory gaps in the current ISL framework will be exacerbated by AI’s unique features; especially given that other legal frameworks – such as International Humanitarian Law (IHL) and/or the EU General Data Protection Regulation (GDPR) – address only specific aspects of AI, resulting in a fragmented approach20. As a result, although Article VI of the OST establishes state responsibility for space activities, ISL fails to clarify (or even provide guidelines for) liability for AI systems, even in simple cases. More complex scenarios (such as an AI-enabled satellite operated by one State but dependent on technology and infrastructure from multiple others) highlight how traditional liability regimes have become inadequate21. Furthermore, AI introduces a new category of space technologies that defies conventional classifications for weapons or dual-use systems, raising questions about whether new regulatory categories are needed22. These challenges are compounded by States asserting a form of sovereign rights over their space assets under Article VIII of the OST, as exemplified by the U.S. declaring its space systems “sovereign property with the right of passage through and operations in space without any interference”23. In other words, the accelerating advancement of AI technology (in conjunction with its growing use in outer space) – coupled with the absence of specific ISL regulations – raises critical questions that go beyond whether current rules remain adequate; such questions compel us to reconsider whether space operations now demand an entirely new regulatory approach, one that properly accounts for their inherent dual-use nature and even, in some cases, for AI’s capacity to function autonomously without human supervision. To determine how urgently this issue requires attention, however, we must also examine the risk of escalation in space, which would force decisions about AI’s use – whether as a weapon or as a defensive tool.

3. Risks of escalation in space: examining the need for rules on AI uses

Proposing a legal framework to regulate armed conflict in space requires, first, an analysis of plausible scenarios regarding the potential occurrence of hostilities in space, as understanding the urgency of such regulatory measures is critical. Analysts increasingly contend that space warfare has become a realistic threat, given the growing dependence of States on space infrastructure (namely for communications, navigation, and national security), combined with the strategic imperative to defend assets while compromising adversaries’ access; all of which could lead to conflicts involving both existing weapons systems and emerging AI technologies24. At the same time, although tensions could indeed push States toward conflict, the catastrophic consequences suggest it may never truly materialize.

Against this (theoretical) background, a first potential scenario reflects the most optimistic perspective, in which a large-scale space war never materializes due to the risks involved25. Specifically, Penent26 argues that the critical consequences of a conflict in outer space – mainly the generation of orbital debris and the mutual vulnerability created by global dependence on space infrastructure – serve as powerful deterrents against open hostilities. These considerations push States toward strategic restraint. In this case, instead of developing overt space weapons, major space powers would adopt policies of “under-weaponization” and ambiguous tactics such as cyber operations, allowing them to pursue strategic goals without crossing the threshold into direct conflict. One may note that, in practice, some States have already translated this approach into concrete efforts to prevent space weaponization, e.g.: the joint China-Russia draft Treaty on the Prevention of the Placement of Weapons in Outer Space, introduced at the Conference on Disarmament in 2008 and revised in 201427; political statements by several States declaring that they will not be the first to deploy weapons in space28; or Russia’s “No First Placement” (NFP) initiative, which has gained support from thirty nations (Shmigol, 2022). All these initiatives culminated in the December 2023 adoption of UN General Assembly Resolution A/RES/78/21, which formally added the “No first placement of weapons in outer space” sub-item to the official UN agenda29. However, although this first scenario – in which a space war fails to materialize – would eliminate the need for specific regulations on the uses of AI in the context of hostilities in space, the inherent uncertainty of future developments necessitates a parallel examination of alternative scenarios.

On the opposite extreme, a second potential scenario would be that States will actively deploy and/or use weapons to conduct a war in space – including AI systems – as an extension of terrestrial warfare. Such a scenario remains plausible, given the current global tensions (such a space conflict would reflect Earth-bound geopolitical rivalries30), but also the strategic and financial value of space infrastructure. In this case, it is likely that few States would categorically avoid hostile acts in orbit, especially with AI’s expanding military applications. On a more practical ground, analysts already note a growing dependence on space systems (both state and commercial) for terrestrial warfare, blurring the traditional boundaries between civilian and military domains (Berrang, 2025). The U.S. exemplifies this trend, as it recently declared its space systems “sovereign property” with rights to unimpeded operation31 while explicitly integrating commercial assets into national security frameworks to enhance resilience (Berrang, 2025). Importantly, said trend was institutionalized through initiatives like the U.S. Space Force (established in 2019), in addition to the North Atlantic Treaty Organization (NATO) recognition of space as a warfighting domain (Peperkamp, 2020), while other key space faring nations followed suit32, 33, reflecting a broader normalization of space as a theater of operations. As a result, the line between commercial and military infrastructure seems to be now eroding. Hence, if States are growingly using private-sector space capabilities – from satellite communications to Earth observation – for defense purposes (to incorporate the “commercial space sector into the national security architecture”34) space-based AI will most probably be used to enhance terrestrial warfare (targeting, surveillance) and it could likewise enable direct orbital combat. 
Consequently, in this scenario, robust regulations governing the military uses of AI systems in space become imperative to prevent uncontrolled escalation. Still, while the urgent adoption of such rules would be needed, current realities suggest that this critical threshold has not yet been reached.

Between these two extremes of full-scale space weaponization and complete demilitarization lies a more plausible scenario: States develop AI-enabled space weapons essentially for deterrence, an approach that mirrors nuclear deterrence logic, where military capabilities are designed to prevent conflict rather than create it. While the two extreme scenarios remain unlikely, this intermediate path is already unfolding. Indeed, evidence shows a sharp rise in military space expenditures. For instance, Erwin35 notes that “government space budgets grew 10% from 2023, driven largely by defense-related investments that reached $73 billion. The surge reflects mounting concerns about space as a contested military domain alongside traditional theaters like air, sea, and cyber”. In essence, this trend creates a paradox: nations compete for military advantage in orbit while continuing civilian space cooperation36. Be that as it may, AI plays a central role in this specific context. It has been observed that “Information will be an essential weapon in future warfare and defense operations. <…> AI algorithms are being created to monitor, anticipate, and react in case of on-orbit and in-ground space conflicts”37. However, in this scenario, although the risk of space warfare is real – especially amid rising terrestrial geopolitical tensions – its manifestation will likely differ from traditional hostilities. One could argue that States will try to avoid kinetic attacks (missile strikes on satellites), as these would generate catastrophic debris, endangering all kinds of space activities. Instead, States would likely focus on non-kinetic tactics such as jamming, spoofing, or cyberattacks – tools already in use today (Bescheron & Gasnier, 2024). Still, even in that case, a critical issue would remain, given the unknown consequences that AI systems could have.
Precisely, Swope38 underlines that Edgar J. Kingston-McCloughry’s warning about airpower’s early days (“There has perhaps never in the history of warfare existed a comparable state of ignorance about the potentialities of available weapons”) may equally apply to space today. Hence, the most probable outcome would be a continuation of military investments without direct conflict, where States develop AI for space dominance (surveillance, infrastructure protection, etc.) while avoiding overt destruction. At the same time, this tolerance for “gray-zone” tactics – e.g., electronic warfare or dual-use AI – sets a precedent: such actions could normalize hostility in space, paving the way for a new era of non-kinetic but destabilizing warfare.

Consequently, this last, already unfolding scenario reveals the urgent need for AI-space regulations. Such a framework should, however, take into account the fundamental challenge posed by AI’s inherent dual-use nature, considering that recent conflicts have demonstrated a growing erosion of the boundary between civilian and military uses of space systems39. In other words, it is necessary to examine whether a completely new approach to governing AI space systems would be more efficient – one that would establish a functional framework in short order, without having to revisit foundational ISL principles.

4. A new approach to regulating AI in space: efficiency, but also sustainability

Following on from the above, it is now clear that the regulation of AI systems in space must consider three key realities. First, AI’s dual-use nature: civilian systems can easily transition to military uses, blurring regulatory distinctions. Second, States frequently obscure their military space activities when converting civilian infrastructure, undermining transparency efforts. Third, the nations capable of space warfare are generally those possessing comparable technological strength, creating a near-peer dynamic distinct from terrestrial asymmetric conflicts40; from a certain perspective, this situation mirrors the Cold War tensions between nuclear-armed superpowers that produced the 1967 Outer Space Treaty as a means to prevent escalation. Hence, today, with AI as the novel technological challenge, one may argue that similarly strong regulatory frameworks are needed to manage competition between space rivals of comparable strength. Still, the difficulty lies in finding rules that efficiently account for AI’s dual-use applications while addressing States’ reluctance to disclose military space capabilities, in a context where major players possess the same means to weaponize their infrastructure.

Still, despite the significance of this challenge, the international community has thus far failed to establish an effective and well-adapted legal framework; such efforts are hindered by States’ inability to reach consensus on fundamental definitions and to agree upon the key conceptual parameters necessary for formulating actionable rules. For instance, Sönnichsen & Lambach (2020) highlight the major issue of deciding whether the concept of “space weapons” ought to be limited to systems designed solely for orbital warfare, or should additionally encompass dual-use technologies with terrestrial military applications. These initial challenges are exacerbated by the inherent complexities of regulating AI in the space context per se41. Hence, an alternative regulatory approach would be to shift the focus towards prohibiting specific harmful outcomes of AI in outer space, rather than regulating the uses or the programming of AI systems; such a results-based framework would circumvent persistent definitional disputes as well as ethical programming impasses, while still mitigating the risks of conflict escalation.

In reality, this approach has already been adopted, to a certain extent, in Art. IV of the OST, which provides a compelling precedent in this regard. Said provision states that “States Parties to the Treaty undertake not to place in orbit around the Earth any objects carrying nuclear weapons or any other kinds of weapons of mass destruction, install such weapons on celestial bodies, or station such weapons in outer space in any other manner42”. In other words, Art. IV operates on the key principle that certain outcomes ought to be prohibited categorically, without requiring proof of any unethical intention or of non-compliance with rules of conduct. Importantly, while Art. IV explicitly bans Weapons of Mass Destruction (WMD), it fails to define them – a gap that renders its application to AI systems in space particularly problematic, given the technology’s post-treaty emergence. This clear prohibition nevertheless provides a valuable regulatory model that could be adapted, mutatis mutandis, to govern AI uses in space today. In other words, rules governing the uses of AI in space could focus on identifying and prohibiting clearly dangerous acts, such as the military uses of AI where the risks are most acute. This outcome-based approach would offer important advantages: it would provide clear compliance standards and avoid protracted theoretical debates, such as those on ensuring an ethical use of AI in space.

For the sake of precision, it is important to mention that this approach has been widely used in International Environmental Law, where the definition of concrete objectives – rather than general rules of behavior – allows for targeted prohibitions of specific harmful activities. For instance, the Montreal Protocol on Substances that Deplete the Ozone Layer43 states, in its Preamble, the unambiguous goal “to protect the ozone layer by taking precautionary measures to control equitably total global emissions of substances that deplete it, with the ultimate objective of their elimination”. This clear environmental objective then translates into practical action through Annex A, which enumerates precisely which chemical substances fall under regulatory control. Similarly, the Kyoto Protocol to the UN Framework Convention on Climate Change44 adopts the explicit aim of achieving “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system” (its Preamble directly references Art. 2 of the UNFCCC). Having established this objective, Article 3 then mandates that “Parties included in Annex I shall ... ensure that their aggregate anthropogenic carbon dioxide equivalent emissions of the greenhouse gases listed in Annex A do not exceed their assigned amounts”. These examples reveal a consistent regulatory pattern: first establishing a concrete (environmental protection) goal, then laying down very specific controls on precisely defined acts. From a policy perspective, such an approach avoids potentially divisive debates, e.g., about defining key concepts, ethical technology uses, and codes of conduct.

Following on from the above, one may argue that a regulatory approach establishing the prevention of a war in outer space (that is, fought with any technology) as an explicit objective, while resting on the prohibition of clearly harmful outcomes, could govern AI systems in space more efficiently. This method would address many current gaps in the OST, whose specific provisions on WMD are poorly suited to present challenges (Ferreira-Snyman, 2015); especially since the absence of an agreed definition of space weapons complicates their practical regulation45, while the dual-use nature of AI, as of other space technologies, creates additional difficulties46. Importantly, adopting in ISL the prohibition of specific outcomes, rather than general rules of conduct, would also modernize Article IV’s ban on WMD to include comparable contemporary threats, like autonomous space weapons systems, while maintaining the OST’s fundamental principle that space activities must be conducted “in the interest of maintaining international peace and security” (Art. III). Such outcome-based regulations should specify the prohibited military applications of AI in space while explicitly permitting its civilian and commercial uses; this distinction would protect beneficial AI applications like satellite maintenance, space debris management, and scientific research, safeguarding investments in space (AI) infrastructure. Such an approach would also sidestep unproductive debates about technology definitions by focusing instead on verifiable threats to security in outer space; and it would provide clearer compliance guidelines than current proposals to regulate AI through ethical principles or programming requirements, which often prove difficult both to agree upon and to enforce.
Finally, by creating specific prohibitions on AI-enabled outcomes that could lead to conflict, it would pave the way for a more stable space security environment. At the same time, by delineating the permitted uses, it would encourage continued innovation in peaceful space applications.

Building on this approach, the concrete proposal would be to draft and adopt an international treaty with two major components. First, it would reaffirm fundamental principles of international space law, particularly those established in the OST on the peaceful uses of outer space. Second, it would include an annex listing the prohibited uses of AI in space – such as those posing threats to peaceful space activities. This structure would create a balanced framework, addressing security risks while preserving beneficial AI applications for space exploration. The annex would target AI uses with measurable harmful effects, like those generating space debris (in line with Art. IX of the OST), as well as intangible uses of AI, where necessary, like signal jamming. Given the dual nature of AI – both as a potential weapon and as a tool for the advancement of exploration – the treaty would require broad international participation as well as consensus. Whether initiated through soft law or best practices, the end goal should be the adoption of binding rules of law with clear consequences for violations, so as to genuinely deter harmful uses of AI in space.

In this context, to regulate AI in space efficiently while achieving the stated objectives – i.e., preventing threats while permitting beneficial applications, given AI’s inherent dual-use nature – establishing a precise typology of prohibited activities and outcomes would be essential. This typology could include, e.g., the creation of space debris, the enabling of lethal signal jamming, the operation of uncontrolled AI systems in orbit, or the causing of non-debris-related damage. Notably, the framework should additionally ban specific technologies capable of enabling such harmful outcomes, regardless of users’ intent, thus creating an obligation of result rather than of intention. Such an approach would simultaneously help to define fault in space activities while sparing States the need to disclose sensitive AI research, hence preserving competitive advantages and accommodating security concerns. By focusing on concrete prohibitions, this approach would resolve the current legal paradox whereby AI escapes weapon classification despite its potential for catastrophic harm. Violations of these enumerated AI restrictions would also automatically engage State responsibility under international space law, since such infringements would constitute a fault under Art. VI of the Outer Space Treaty and Art. III of the Liability Convention (providing much-needed clarity to said provisions; this should, however, by no means imply that State liability would be limited to violations of the prohibited acts listed in this instrument or its annex). Overall, by enumerating the banned outcomes rather than focusing on AI technology per se, this type of instrument would help prevent harmful consequences.

Conclusions

The growing militarization of space, accelerated by rapid advancements in artificial intelligence (AI) technologies, is one of the most pressing challenges of our time. The existing international law framework, essentially composed of treaties negotiated and adopted during the 1960s and 1970s, proves increasingly inadequate to address the complex realities of modern space operations. Indeed, ISL treaties, although progressive when drafted, fail to efficiently regulate the inherent dual-use dilemma of space technology, let alone sustainably address the novel complexities introduced by AI. This regulatory gap is particularly concerning considering that major spacefaring nations are already investing heavily in AI-enabled space technology, while international consensus on basic definitions – such as what constitutes a space weapon – remains elusive.

In reality, the potential of AI in space operations is exceptional; AI enhances space missions in ways that were unimaginable when the ISL treaties were drafted. Importantly, though, these same capabilities present significant risks when applied to military uses in space. Unlike conventional space systems, AI-driven technologies can operate with varying degrees of autonomy, which makes them particularly susceptible to unintended escalation scenarios and potentially catastrophic system failures. At the same time, specific features of AI may exacerbate these concerns (inter alia, the “black box” nature of machine learning algorithms makes it difficult to predict or explain their decision-making, while their vulnerability to adversarial manipulation creates new avenues for conflict). Moreover, the speed at which AI can process information and execute commands – far exceeding human reaction times – may dangerously compress decision-making timelines during crises, generating uncertainty as to the outcomes of space operations in general.

In light of such developments, it is evident that Cold War-era ISL needs substantial revision to address AI-enhanced space technology, which is becoming increasingly prevalent. Indeed, the new space paradigm will likely be dominated by technologically advanced nations possessing comparable military capabilities, creating dangerous scenarios of mutual destruction with potentially catastrophic consequences for terrestrial populations. Accordingly, a solution would be to adopt a result-oriented approach. More precisely, rather than attempting to regulate the programming of AI systems – an approach that would quickly become obsolete given the pace of innovation – this new perspective would focus on prohibiting clearly defined harmful outcomes, e.g., the creation of debris, autonomous attacks against space infrastructure, etc. At the same time, it would create verifiable compliance standards (an approach already used in instruments aimed at avoiding the creation of space debris – see, e.g., the IADC guidelines).

Such a regulatory instrument should incorporate two key parts (similarly to major treaties adopted in International Environmental Law, which focus on the result to be achieved rather than on the effort or intention of States). First, it should reiterate and reaffirm the core principles of existing ISL, with an emphasis on ensuring – as a fundamental and non-negotiable objective – the peaceful uses of space and States’ responsibility for their activities. Second, it should establish a binding annex listing prohibited AI behaviors, uses and/or outcomes, rather than attempting, for instance, to regulate their programming. By focusing on the concrete risks and results to be avoided, this approach would bypass definitional disputes while establishing clear compliance standards. Importantly, it would deter conflict by holding States accountable under existing liability frameworks, as violations of the AI outcome prohibitions would ipso facto constitute a “fault” in outer space – clarifying the application of Art. VI of the Outer Space Treaty (OST) as well as of Article III of the Liability Convention. For greater impact, such an instrument should be inclusive, based on the participation of both traditional and emerging space actors, private entities, and others. Its effectiveness would, in any case, ultimately depend on enforceable consequences for violations, ensuring compliance rather than mere participation. This targeted, effects-based approach would balance legal clarity with practical deterrence, addressing urgent threats resulting from harmful uses of AI in space without impeding innovation or those uses of AI in space that are beneficial to space operations and to humanity.

The need to regulate the harmful uses of AI in the context of space operations cannot be overemphasized. While AI has become undoubtedly necessary for space activity, its autonomous nature simultaneously creates unprecedented dangers – from unintended escalation to permanent orbital damage – especially if used in the context of hostilities, with the intent to cause damage, since AI uses in space may produce threats that ISL never envisioned. In other words, without rapid intervention, this legal void could trigger an AI arms race, threatening both global security and space sustainability. A pragmatic, effects-based framework offers a viable path to mitigate risks while safeguarding AI’s benefits for the peaceful exploration and use of space and its resources, to the benefit of all.

1. “Space-based system, which provides significant support to the military, is generally called Militarization of space. Such support includes intelligence, surveillance, mapping, charting, communications, navigation, missile warning, and environmental data. It is the placement of military technology in outer space <…>. Space militarization is wider term than space weaponization. The militarization of space does not necessarily entail its weaponization”, see (Sheer & Shouping, 2019).

2. “Currently, space is not weaponized. There are no weapons deployed in space or terrestrially (in air, sea, or on the ground) meant to attack space objects, such as satellites; nor are satellite weapons deployed against terrestrial targets”, see (Saperstein, 2002).

3. Rajan, V. (2024, April 19). Space weaponization: All you need to know about. ClearIAS. https://clck.ru/3RhJXK

4. See inter alia, Whitehead, R. (2025, February 27). How the modern space race is shaping and shaped by AI. IOAGlobal. https://clck.ru/3RhJiP

5. Ibid.

6. Inter-Agency Space Debris Coordination Committee. (2025). IADC space debris mitigation guidelines (U.N. Doc. A/AC.105/C.1/2025/CRP.9). United Nations Committee on the Peaceful Uses of Outer Space, Scientific and Technical Subcommittee. https://clck.ru/3RhJte

7. International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). International Institute of Space Law. https://clck.ru/3RhJyK

8. Acheson, R., & Fihn, B. (n.d.). Outer space: Militarization, weaponization, and the prevention of an arms race. Reaching Critical Will, Women’s International League for Peace and Freedom. https://clck.ru/3RheH5

9. U.S. Department of Defense. (2025, May 20). Secretary of Defense Pete Hegseth statement on Golden Dome for America [Press release]. https://clck.ru/3RhKTE

10. Brennan, D., & Yiu, K. (2025, May 21). Trump’s ‘Golden Dome’ risks weaponization of space, China says. ABC News. https://clck.ru/3RhKXx

11. Secure World Foundation. (2025, April). Global counterspace capabilities: An open source assessment [Report]. Secure World Foundation. https://creativecommons.org/licenses/by-nc/4.0

12. Emphasis added.

13. Ibid.

14. Husain, A. (2024, August 19). The military applications of artificial intelligence in space. Forbes. https://clck.ru/3RhLGq

15. Pultarova, T. (2021, April 29). Artificial intelligence is learning how to dodge space junk in orbit. Space.com. https://clck.ru/3RhLP9

16. See Husain, supra note 14. See also Easley, M. (2024, June 5). DARPA harnesses AI to keep tabs on space weapons, spy satellites on orbit. DefenseScoop. https://clck.ru/3RhLRR

17. “The inherent nature of any technology is dual use”, see International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). International Institute of Space Law. https://clck.ru/3RhLTV

18. International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). P. 239. https://clck.ru/3RhLTV

19. Palkovitz Menashy, N. (2019). Regulating a revolution: Small satellites and the law of outer space [Master’s thesis, Leiden University]. Leiden University Scholarly Publications. https://clck.ru/3Rhe7c

20. International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). P. 239. https://iisl.space/iisl-working-group-on-legal-aspects-of-ai-in-space/

21. Ibid.

22. Ibid.

23. Berkowitz, M. (2024, December 16). Countering threats to US commercial space systems. The Space Review. https://www.thespacereview.com/article/4910/1

24. Weichert, B. J. (2024, December 9). There will be a war in space. This is what it will look like. Popular Mechanics. https://clck.ru/3RhLuW

25. Penent, G. (2021, June). The space war will not happen. Vortex: Studies on Air and Space Power, 1, 8. French Air and Space Force, Centre for Strategic Aerospace Studies (CESA). https://clck.ru/3RhLxH

26. Ibid.

27. Conference on Disarmament. (2014, June 12). Letter dated 10 June 2014 from the Permanent Representatives of the Russian Federation and China transmitting the updated draft treaty on prevention of the placement of weapons in outer space (PPWT) (CD/1985). United Nations. https://clck.ru/3RhLyb

28. United Nations General Assembly. (2023). No first placement of weapons in outer space (Resolution A/RES/78/21). https://clck.ru/3RhM2n

29. United Nations General Assembly. (2023). No first placement of weapons in outer space (Resolution A/RES/78/21). https://clck.ru/3RhM2n

30. Penent, G. (2021, June). The space war will not happen. Vortex: Studies on Air and Space Power, 1, 8. French Air and Space Force, Centre for Strategic Aerospace Studies (CESA). https://clck.ru/3RhM6m

31. Berkowitz, M. (2024, December 16). Countering threats to US commercial space systems. The Space Review. https://clck.ru/3RhM7y

32. Harvey, B. (2022). Military space - how worried should we be? ROOM: The Space Journal of Asgardia, 1(31). https://clck.ru/3RheCh

33. See also China Military Online. (2024, April 19). Chinese PLA embraces a new system of services and arms: Defense spokesperson [Press release]. https://goo.su/Bia2

34. Richard, T. (2025, January 6). Year ahead – U.S. Department of Defense and Space Force commercial space strategies. Lieber Institute, West Point. https://clck.ru/3RhMTB

35. Erwin, S. (2025, January 15). Defense spending propels government space budgets to new heights. SpaceNews. https://clck.ru/3RhMoK

36. Ibid.

37. ProcureAM. (n.d.). 5 ways AI is used in space. Nasdaq. https://clck.ru/3RhMrs

38. Ibid.

39. International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). International Institute of Space Law. https://clck.ru/3RhN7U

40. Stojanovic, B. (2025, January 20). Astropolitics and the militarisation of space: The new arms race? DiploFoundation. https://clck.ru/3RhN8z

41. International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). International Institute of Space Law. https://clck.ru/3RhNQg

42. Emphasis added.

43. United Nations. (1987). Montreal Protocol on substances that deplete the ozone layer. https://clck.ru/3RhNUP

44. United Nations. (1998). Kyoto Protocol to the United Nations Framework Convention on Climate Change. https://clck.ru/3RhNX3

45. Mutschler, M. M. (2010). The danger of an arms race in space. In Keeping space safe: Towards a long-term strategy to arms control in space. Peace Research Institute Frankfurt. https://clck.ru/3RhiJd

46. International Institute of Space Law. (2024). Balancing innovation and responsibility: International recommendations for AI regulation in space (Report of the Working Group on Legal Aspects of AI in Space). International Institute of Space Law. https://clck.ru/3RhNpc

References

1. Abbas, S. (2025). Challenges to space activities in the context of mega satellite constellations: A focus on environmental impacts. Journal of Astronomy and Space Sciences, 42(1), 1–13. https://doi.org/10.5140/JASS.2025.42.1.1

2. Aglietti, G. S. (2020). Current challenges and opportunities for space technologies. Frontiers in Space Technologies, 1, Article 1. https://doi.org/10.3389/frspt.2020.00001

3. Bernat, P. (2019). The inevitability of militarization of outer space. Safety & Defense, 5(1), 49–54.

4. Berrang, S. (2025). Does the dual-use of space objects necessitate a new Geneva Convention? Case Western Reserve Journal of International Law, 57(1), 315.

5. Bescheron, J., & Gasnier, P. (2024). Space as warfighting domain: from education to an enhanced global space strategy. PRISM, 10(4), 82–101.

6. Byers, M., & Boley, A. (2023). Mega-constellations and international law. In Who owns outer space?: International law, astrophysics, and the sustainable development of space (pp. 77–113). Cambridge University Press. https://doi.org/10.1017/9781108597135.004

7. Enholm, M. (2024). The security dilemma and conflict in space: Impossible or inevitable? Journal of Strategic Security, 17(4), 48–70. https://doi.org/10.5038/1944-0472.17.4.2255

8. Ferreira-Snyman, A. (2015). Selected legal challenges relating to the military use of outer space, with specific reference to Article IV of the Outer Space Treaty. Potchefstroom Electronic Law Journal, 18(3), 1–30. https://doi.org/10.4314/pelj.v18i3.02

9. Gundersen, K. (2021). Beyond the tardigrades affair: Planetary protection, COSPAR, and the future of private space regulation. New York University Journal of International Law & Politics, 53(3), 871–917.

10. Hertzfeld, H. R. (2021). Unsolved issues of compliance with the registration convention. Journal of Space Safety Engineering, 8(3), 238–244. https://doi.org/10.1016/j.jsse.2021.05.004

11. Koskina, A. (2023). The use of AI weapons in outer space: Regulatory challenges. In A. Kornilakis, G. Nouskalis, V. Pergantis, & T. Tzimas (Eds.), Artificial intelligence and normative challenges. Springer. https://doi.org/10.1007/978-3-031-41081-9_13

12. Martin, A.-S., & Freeland, S. (2021). The advent of artificial intelligence in space activities: New legal challenges. Space Policy, 55(3), Article 101408. https://doi.org/10.1016/j.spacepol.2021.101408

13. Muszyński-Sulima, W. (2023). Cold War in space: Reconnaissance satellites and US-Soviet security competition. European Journal of American Studies, 18(2). https://doi.org/10.4000/ejas.20427

14. Peperkamp, L. (2020). An arms race in outer space? Atlantisch Perspectief, 44(4), 46–50.

15. Punnala, M., Punnala, S., Ojala, A., & Kuusniemi, H. (2024). The space economy: Review of the current status and future prospects. In A. Ojala & W. W. Baber (Eds.), Space business. Palgrave Macmillan. https://doi.org/10.1007/978-981-97-3430-6_2

16. Salin, P. A. (1992). Proprietary aspects of commercial remote-sensing imagery. Northwestern Journal of International Law & Business, 13(2), 349.

17. Saperstein, A. M. (2002). “Weaponization” vs. “Militarization” of space. APS Physics & Society Newsletter, 31(2), 5–7.

18. Sheer, A., & Shouping, L. (2019). Emergence of the international threat of space weaponization and militarization: Harmonizing international community for safety and security of space. Frontiers in Management Research, 3(3), 56–63. https://dx.doi.org/10.22606/fmr.2019.33003

19. Shmigol, V. (2022). The United States is enabling an outer space arms race: An overview of the current framework and recommendations for abating an outer space arms race. Seattle University Law Review, 46, 175.

20. Sönnichsen, A., & Lambach, D. (2020). A developing arms race in outer space? De-constructing the dynamics in the field of anti-satellite weapons. Sicherheit und Frieden / Security and Peace, 38(1), 5–9. https://doi.org/10.5771/0175-274X-2020-1-5

21. Vansia, D. A. (2024). Role of AI (Artificial Intelligence) in space debris management. International Journal of Novel Research and Development, 2024(35), Article IJNRD2307431.


About the Author

A. Koskina
University of Crete ; National Observatory of Athens
Greece

Anthi Koskina – PhD, Adjunct Professor of Public International Law, and Space Law, Department of Political Science, School of Social Sciences, Gallos Campus; Scientific Associate

Scopus Author ID: https://www.scopus.com/authid/detail.uri?authorId=57222100332

Google Scholar ID: https://scholar.google.com/citations?user=0nrJMDUAAAAJ

74150 Rethymno, Greece; Lofos Nymphon, PO Box 20048, 11810, Athens 


Competing Interests:

The author declares no conflict of interest. 



  • outer space has entered a qualitatively new phase of development, characterized by the expanded range of space activities’ subjects, the rapid increase of commercial investments, and the growing strategic competition between the leading powers;
  • artificial intelligence has taken a central place in modern space operations, providing optimization of navigation, communications and remote sensing; however, it is its dual nature that makes it a fundamentally different technology compared to any previously known classes of space objects;
  • the existing system of international space law, based on the 1967 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, is structurally unsuited to regulate artificial intelligence;
  • the deployment of dual-use artificial intelligence systems on the Earth orbit is fraught with significant risks of conflict escalation, since autonomous decision-making under uncertainty and high geopolitical tension can lead to unintended hostile actions.


For citations:


Koskina A. Artificial Intelligence and International Space Law: Dual-Use Challenges. Journal of Digital Technologies and Law. 2026;4(1):98-124. https://doi.org/10.21202/jdtl.2026.5. EDN: KUZIEI



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2949-2483 (Online)