The Case For The ARS

Should India have an automatic nuclear weapons control system?


  • Total voters
    23

Haldilal

We must live fighting, we must die fighting
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
After that, it's the NCA bunker or aircraft and countdown option that would act as an ARS. Oh, did you mean a submarine could act as an NCA bunker?
Ya'll Nibbiars, yeah, I heard that a submarine of the Russian or the American Navy could act as an NCA bunker.

Ya'll Nibbiars, if there is a Continuity of Government scenario, then please, at any cost, don't make SuSu the designated PM. Even Pappu would do, but not SuSu.
 

Okabe Rintarou

Senior Member
Joined
Apr 23, 2018
Messages
2,337
Likes
11,988
Country flag
Ya'll Nibbiars, if there is a Continuity of Government scenario, then please, at any cost, don't make SuSu the designated PM. Even Pappu would do, but not SuSu.
:rofl:
SuSu is highly unlikely. Most likely it will be a Cabinet Minister from the Defence/Finance/Home Affairs ministries. But again, it's not predefined. That is a problem.
 

Haldilal

We must live fighting, we must die fighting
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
:rofl:
SuSu is highly unlikely. Most likely it will be a Cabinet Minister from the Defence/Finance/Home Affairs ministries. But again, it's not predefined. That is a problem.
Ya'll Nibbiars, that's why the ARS matters even more in case an incompetent person becomes the designated PM. At least we should have some capability to strike back in revenge. But that may be why both the USSR, now Russia, and the Americans were more cautious.

Imagine if we had such a system deployed during the Ladakh standoff. Would the Chinese have been so blatant and aggressive?
 

cannonfodder

Senior Member
Joined
Nov 23, 2014
Messages
1,552
Likes
4,354
Country flag
Skynet is an AI-based system. Here we are talking about a semi-automated system, not an AI-based system. The strikes would still be manned, but through a faster and more advanced system, and in case the higher command is destroyed, the retaliatory strike capability survives. This is a must, given that there are nuclear missiles within a few minutes' strike range of India.
I see what you are saying. Over time, I really imagine this will evolve into some kind of AI-based system to rule out false alarms, with the complexity to cover all possible scenarios and make it an effective deterrent for an adversary. It might gain some situational awareness as humans keep feeding it data and adding complexity.

When reading the gist, Skynet popped into my head. Sorry for the deflection.
 

Haldilal

लड़ते लड़ते जीना है, लड़ते लड़ते मरना है
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
I see what you are saying. Over time, I really imagine this will evolve into some kind of AI-based system to rule out false alarms, with the complexity to cover all possible scenarios and make it an effective deterrent for an adversary. It might gain some situational awareness as humans keep feeding it data and adding complexity.

When reading the gist, Skynet popped into my head. Sorry for the deflection.
Ya'll Nibbiars, there are already papers written on it.

A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence

December 2019

Michael C. Horowitz, Paul Scharre, and Alexander Velez-Green

Abstract: The potential for advances in information-age technologies to undermine nuclear
deterrence and influence the potential for nuclear escalation represents a critical question for
international politics. One challenge is that uncertainty about the trajectory of technologies such as autonomous systems and artificial intelligence (AI) makes assessments difficult. This paper evaluates the relative impact of autonomous systems and artificial intelligence in three areas: nuclear command and control, nuclear delivery platforms and vehicles, and conventional applications of autonomous systems with consequences for nuclear stability. We argue that countries may be more likely to use risky forms of autonomy when they fear that their second-strike capabilities will be undermined. Additionally, the potential deployment of uninhabited, autonomous nuclear delivery platforms and vehicles could raise the prospect for accidents and miscalculation. Conventional military applications of autonomous systems could simultaneously influence nuclear force postures and first-strike stability in previously unanticipated ways. In particular, the need to fight at machine speed and the cognitive risk introduced by automation bias could increase the risk of unintended escalation. Finally, used properly, there should be many applications of more autonomous systems in nuclear operations that can increase reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis.

Introduction

Nuclear weapons are arguably the single most significant weapon system invented in modern
history, meaning uncertainty about the viability of nuclear deterrence in the 21st century
constitutes one of the most important security risks facing the world. This uncertainty is both a product and source of increased tensions in nuclear dyads worldwide. The proliferation of conventional military technologies, such as hypersonic weapons, could further undermine deterrence by potentially undermining traditional modes of escalation management, and as a consequence, nuclear stability. The impact of autonomous systems and artificial intelligence (AI) for nuclear stability remains understudied, however. In early 2017, Klaus Schwab of the World Economic Forum argued that the world is on the cusp of a Fourth Industrial Revolution, wherein several technologies – but most prominently AI – could reshape global affairs. Many defense experts around the world share Schwab’s recognition of the potentially transformative effects of AI. The most prominent statements about the impact of AI on warfare, however, tend to be extreme. Elon Musk, for instance, has vocally contended that AI run amok could risk World War III. This overheated rhetoric masks the way that advances in automation, autonomous systems, and AI may actually influence warfare, especially in the vital areas of nuclear deterrence and warfighting. The intersection of nuclear stability and artificial intelligence thus raises critical issues for the study of international politics. Relative peace between nuclear-armed states in the 20th century arguably relied in part on mutually assured destruction (MAD). MAD prevails when each side recognizes that both it and its opponent have an assured nuclear second-strike capability, or that either side can impose unacceptable damage on the other in retaliation against a nuclear attack.

The threat of mutual destruction ultimately led both the United States and the Soviet Union to deprioritize the role of preemption in their nuclear war plans. Furthermore, as Albert Wohlstetter found, the threat of mutual destruction “offer[ed] every inducement to both powers to reduce the chance of accidental war.” While there are no known instances of accidental war, there are historical examples of unintended escalation, either in pre-conflict crises or once a conflict is underway. Accidental escalation is when a state unintentionally commits an escalatory act (i.e. due to technical malfunction, human error, or incomplete control over military forces). Inadvertent escalation can also occur, whereby a state unknowingly commits an escalatory act (i.e., an intentional act that unknowingly crosses an adversary’s red line). Accidents have increased tensions between countries on numerous occasions, but have not led to escalation. Nuclear-armed states have expended vast resources to minimize the risk of unintentional escalation, knowing that it could lead to catastrophe should it occur. Automation may complicate the risks of escalation, deliberate or unintended, in a number of ways. Automation has improved safety and reliability in other settings, from nuclear power plants to commercial airliners. Used properly, many applications of automation in nuclear operations could increase reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis. Automation can help ensure that information is quickly processed, national leaders’ desires are swiftly and efficiently conveyed, and launch orders are faithfully executed. On the other hand, poor applications of automation could render nuclear early warning or command-and-control (C2) systems more opaque to users, leading to human-machine interaction failures. Human users could fall victim to automation bias, for example, surrendering their judgment to the system in a crisis. Automation is often brittle and lacks the flexibility humans have to react to events in their broader context.

The states most likely to be willing to tolerate these risks for the perceived capability gains would be those that have significant concerns about the viability of their second-strike deterrents. Thus, the more a country fears that, in a world without autonomous systems, its ability to retaliate against a nuclear strike would be at risk, the more attractive autonomous systems may appear. Uninhabited nuclear delivery platforms could undermine nuclear surety, as they could be hacked or slip out of control, potentially leading to accidental or inadvertent escalation. Automated systems could end up reducing decision-maker flexibility by narrowing options, hampering attempts to manage escalation. These dynamics suggest that autonomous systems could influence the potential for nuclear escalation in three ways. First, while many aspects of the nuclear enterprise are already automated in many countries, from early warning and command and control to missile targeting, as autonomous systems improve, states may elect to automate new portions of the early warning and C2 processes to improve both performance and security. From a security standpoint, for instance, increased automation in nuclear early warning may allow operators to identify threats more rapidly in a complex environment. Likewise, automation may help to ensure the dissemination of launch orders in a timely manner in a degraded communications environment. States may also automate – or threaten to automate – nuclear launch procedures in the belief that doing so would provide them with a coercive advantage over adversaries. Second, as military robotics advance, nuclear powers could deploy uninhabited nuclear delivery platforms for a variety of reasons. For instance, a state might deploy nuclear-armed long-endurance uninhabited aerial vehicles (UAVs) in the belief that doing so would provide additional nuclear signaling or strike options. They might also look to uninhabited nuclear delivery platforms to bolster their secure second-strike capabilities. Nuclear delivery vehicles such as torpedoes capable of autonomously countering enemy defenses or selecting targets might be seen to do likewise. Alternatively, a government might choose to automate its nuclear forces so that a small number of trusted agents can maintain control. This could be especially attractive for a nuclear-armed regime that fears a coup or other forms of interference by its nation’s military elite.

Third, the increased automation of conventional military systems might influence nuclear stability in direct and indirect ways. It may enable – or, more likely yet, be seen to enable – improved counterforce operations by technologically-advanced states. The ineffectiveness of counterforce operations – and hence the survivability of second-strike deterrents – presently
hinges in large part on the difficulty of finding and tracking adversary nuclear launch platforms
(mobile missiles or submarines) long enough for ordnance to be delivered. Machine learning
algorithms and other applications of artificial intelligence could, in principle, improve states’
abilities to collect and sift through large amounts of data in order to locate and track such targets, though it is important to recognize limitations to any developments given the real-time
requirements for a disarming strike. Likewise, military autonomy could enable the deployment of conventional autonomous systems designed to shadow and/or attack nuclear-armed submarines. Furthermore, if automation gives (or is perceived to give) one side in a competitive dyad a significant conventional military advantage, the weaker side may feel compelled to rely more heavily on nuclear weapons for deterrence and warfighting. These issues surrounding the potential impacts of artificial intelligence are magnified by uncertainty about the trajectory of technological developments. This article first proceeds by clarifying what autonomous systems are and clarifying often-tricky definitional issues surrounding artificial intelligence. It then lays out some key theoretical expectations. Second, the article explores the impact of autonomous systems on early warning and nuclear command and control, as well as intelligence, surveillance, and reconnaissance (ISR) relevant for nuclear systems, in the context of recent research. Third, the article discusses the potential for uninhabited nuclear delivery platforms and vehicles featuring new kinds of automation.

Fourth, the article describes the way conventional autonomous systems could both directly and indirectly influence nuclear stability. Finally, the article concludes by assessing the net likely impact of autonomous systems on nuclear stability and describing potential pathways for future research. The analysis argues that the impact of autonomous systems could depend on the specific application – both where automation falls in the nuclear enterprise but also how it is implemented in terms of design, human-machine interfaces, training, and operator culture.

Autonomous Systems and Artificial Intelligence

The field of artificial intelligence, which dates back to the 1950s, has seen tremendous growth in recent years. Many of these recent gains have come from “deep learning,” a machine learning technique that uses deep (multi-layer) neural networks. Deep learning is relatively new and, while a powerful technique, has certain vulnerabilities. Deep neural networks are vulnerable to a form of spoofing attack that uses adversarial data to fool the network into misidentifying false data with high confidence. This vulnerability is prevalent across neural networks in wide use today. While adversarial training can somewhat mitigate these risks, there is currently no known solution to this vulnerability. Additionally, machine learning is vulnerable to “data poisoning” techniques that manipulate the data used to train a machine learning system, thus causing it to learn the wrong thing. Finally, artificial intelligence systems today, including those that do not use deep learning, have a set of safety challenges broadly referred to as “the control problem.” Under certain conditions, for example, artificial intelligence tools can learn in unexpected and counterintuitive ways that may not be consistent with their users’ or designers’ intent. These machine learning tools are powerful and are being used in novel experimental applications. But, given these vulnerabilities, they are not sufficiently mature to operate independent of human control for high-risk tasks, such as nuclear operations. These vulnerabilities are fairly well-understood in AI circles and among technical experts within the U.S. defense community. As a result, while AI and machine learning tools are already being incorporated into a variety of commercial applications, it seems unlikely that risk-averse government bureaucracies will be at the forefront of adoption, particularly for high-risk applications such as nuclear operations. However, older “first wave” AI systems that employ rule-based decision-making logic have been used in automated and autonomous systems for decades, including in nuclear operations. These expert AI systems use handcrafted knowledge from humans to create a structured set of if-then rules to determine the appropriate action in a given setting. Automated systems of this type are widely used, including in high-consequence operations such as commercial airline autopilots and automation in nuclear power plant operations. Rule-based expert AI systems can often improve reliability and performance when used in predictable settings. However, because such systems can only follow the rules they’ve been given, they often perform poorly in novel situations or unpredictable environments. In this paper, unless otherwise specified, we generally use the terms automated or autonomous system to refer to “first wave” expert AI systems that perform various tasks on their own, sometimes under human supervision (supervised autonomy) and sometimes absent human supervision for a period of time (full autonomy).
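To make the “first wave” idea above concrete, here is a minimal illustrative sketch of my own (not from the paper) of a handcrafted if-then expert system classifying an early-warning report; every sensor name and threshold below is invented:

# Minimal sketch of a "first wave" rule-based expert system: fixed, human-written
# if-then rules applied to a sensor report, with no learning involved.

from dataclasses import dataclass

@dataclass
class SensorReport:
    infrared_flashes: int   # e.g. satellite IR launch-plume detections
    radar_tracks: int       # independent radar confirmations
    comms_ok: bool          # is the early-warning network reporting normally?

def classify(report: SensorReport) -> str:
    """Apply fixed, human-written rules; the output is advisory only."""
    if report.infrared_flashes == 0 and report.radar_tracks == 0:
        return "NO THREAT"
    if not report.comms_ok:
        return "SENSOR FAULT SUSPECTED - REQUEST HUMAN REVIEW"
    if report.infrared_flashes > 0 and report.radar_tracks == 0:
        # single-source cue: essentially the Petrov situation, so flag rather than conclude
        return "UNCONFIRMED - SINGLE SOURCE, REQUEST HUMAN REVIEW"
    return "POSSIBLE LAUNCH - MULTIPLE SOURCES, ESCALATE TO COMMAND"

print(classify(SensorReport(infrared_flashes=5, radar_tracks=0, comms_ok=True)))

Within its rules such a system is fast and predictable, but it has no behaviour at all for situations its rule-writers never anticipated, which is the brittleness the paragraph describes.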

To Trust or Not to Trust: Autonomous Systems

Automation has been used in high-risk applications for decades, in both civilian and military
capacities. Nuclear power plants, commercial airlines, and private space ventures, for instance, all use automation to perform complex operations. Automation also serves niche roles in nuclear operations, including in early warning, targeting, launch control, delivery platforms, and delivery vehicles. Each of these applications, however, relies on mature technology and often retains human control over decision-making prior to the launch of the delivery vehicle. Questions about adopting autonomous systems require potential adopters to grapple with how to balance the risk that humans will not trust machines to operate effectively against the risk that humans will trust machines too much. Trust gaps occur when people do not trust machines to do the work of people, even if the machine outperforms humans in benchmark tasks. This can lead to an unwillingness to deploy or properly use systems. It can also lead to a preference for using human-controlled systems. Dietvorst, Simmons, and Massey show that when humans have to choose between using human forecasters or algorithms to make assessments about the future, they trust humans even when they can see an algorithm outperform humans. Moreover, when algorithms make mistakes, humans are faster to lose confidence in their effectiveness than when they see humans make mistakes.
There is contested evidence that a trust gap exists when it comes to military remotely-piloted
aircraft (i.e., drones). Surveying Ground Tactical Air Controllers (GTACs) about their preference
for inhabited aircraft versus drones for close air support, Schneider and MacDonald find that
GTACs tended to prefer inhabited aircraft. They argue that GTACs believed that pilots in
inhabited aircraft had more “skin in the game” and were thus more likely to perform effectively
(even though there was no evidence that that was the case). Their results are controversial –
other military personnel argue that their trust gap findings do not reflect the reality on the
ground, where military personnel are learning to trust remotely-piloted systems. Regardless of
whether a trust gap exists in the GTAC community, the theoretical point has relevance for thinking about potential end users of autonomous systems. For applications of artificial intelligence, the alternative to a trust gap is automation bias. While humans are slow to trust information from a “machine,” such as data from radar, research demonstrates that once they believe in the effectiveness of the system, they become more willing to surrender judgment, even when there is evidence that the machine may be incorrect in a given situation. For example, in flight simulation experiments, participants given very good, but not perfect, automated flight systems tend to make more mistakes. Participants became more likely
to miss problems unless explicitly prompted by the autonomous system; they also tend to trust the autonomous system over their own judgment even when their training suggested the plane might be at risk (e.g. errors of both omission and commission). Automation bias, whereby humans effectively surrender judgment to machines, therefore represents one important risk from automation. For example, Army investigators found that automation bias was a factor in the 2003 Patriot fratricides, in which Army Patriot air and missile defense operators shot down two friendly aircraft during the opening stages of the Iraq War. In
both instances, humans were “in the loop” and retained final decision authority for firing, but
operators nevertheless trusted the (incorrect) signals they were receiving from their automated radar systems. Army investigators found that automation bias was pervasive throughout the Patriot community at the time, which had a culture of “trusting the system without question.” According to Army researchers, Patriot operators exhibited an “unwarranted and uncritical trust in automation. In essence, control responsibility [was] ceded to the machine.” In addition to the problems of trust gap and automation bias, human-machine interaction failures can manifest in other ways. The opacity of complex machines can be a hurdle to operators fully understanding them, and this can lead to accidents. As systems increase in complexity, human users may not fully comprehend how automated systems will behave in response to certain inputs coming either from the environment or from human operators themselves. This complexity and breakdown in human-machine integration appears to have been a factor in the 2016 fatal accident involving a Tesla car on autopilot and the 2009 crash of Air France Flight 447, which killed everyone onboard. In both cases, the human users failed to understand how their respective automated systems would respond to their environments, leading them to take actions that, had they known otherwise, they would likely not have taken. A breakdown in human-machine integration can have disastrous consequences even if human users retain manual control over the system, as they did in the case of the Air France 447 crash. Finally, automated systems can pose risks because of their complexity, tight-coupling, and ability to take actions at machine speed. Complex automated systems are generally powered by software, making them potentially vulnerable to bugs and hacking. As one example, a 2007 software malfunction caused eight F-22 fighter jets to lose navigation, fuel subsystems, and some communications when they crossed the international dateline. Rigorous software testing can reduce error rates, but error-free software is not realistic outside of extremely narrow applications. Software vulnerabilities can also leave the door open for hackers. Security researchers have demonstrated the ability to remotely hack automobiles, for example, disabling or taking control of critical driving functions such as steering and brakes. Autonomous systems are also vulnerable to so-called “normal accidents” arising from the interaction of components of complex systems. While these risks are not unique to automation – normal accidents occur in manual systems – automation can increase the “tight coupling” between components. Tight coupling is a condition in which one action in a system directly causes another action with little “slack” in the system – time, visibility, and opportunity for human intervention to manage unanticipated events. Automation can increase the coupling between components and, moreover, accelerate the pace of actions to machine speed. The “brittleness” inherent to emerging forms of automation, along with omnipresent risk of automation bias, human-machine interaction failures, and unanticipated machine behavior, all potentially limit the roles that automation can safely fill.
 

Haldilal

We must live fighting, we must die fighting
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
Ya'll Nibbiars

Autonomous Systems and Artificial Intelligence.

Risk, Reliability, and Safety

There are many models for coping with these risks. One model is to eschew automation entirely, which forgoes its potential benefits. Another model is to retain humans either “in the loop” of automation, requiring positive human input, or “on the loop,” overseeing the system in a supervisory capacity. Human-machine teaming is no panacea for these risks. Even if automated systems have little “autonomy” and humans remain in control, the human users could fall victim to automation bias, leading them to cede judgment to the machine. Appropriately determining when to intervene is not sufficient to ensure safe operation. Human users may lack the ability to effectively intervene and take control if their skills have atrophied due to relying on automation or if they do not sufficiently understand the automation. Properly integrating humans and machines into effective and reliable joint cognitive systems that harness the advantages of each requires proper design, testing, training, leadership, and culture, which is not straightforward to achieve. Reliability and predictability are significant factors in determining whether automation is a net positive or negative in high-risk applications. Constructing reliable joint cognitive systems may be especially difficult for military organizations. Research suggests that organizations’ ability to accurately assess risks and deploy reliable systems depends as much, if not more, on bureaucratic and organizational factors as on technical ones. It is not sufficient for safe and reliable operation to be technically possible. Militaries must also be willing to pay the cost – in time, money, and operational effectiveness – of investing in safety. These are challenges for any high-risk applications of automation, but nuclear operations pose certain unique challenges. The potentially cataclysmic consequences of nuclear war make it difficult to estimate how safe is safe enough.

Competition and Autonomous Systems

A challenge to safety in military settings is that operations occur in a competitive environment. Unlike in other areas where safety is paramount, such as airline travel or nuclear power plant
operations, in military settings, safety is balanced against operational effectiveness. For nuclear operations, this balancing is captured in the “always-never dilemma.” Nuclear organizations are expected to always be ready to launch a nuclear weapon at a moment’s notice and, at the same time, never allow unauthorized or accidental launch. As Scott Sagan points out, meeting both criteria is effectively “impossible.” On one level, the obvious destructive potential of nuclear weapons naturally induces caution in military professionals and policymakers who may be considering whether or not to use automation in nuclear operations. In this sense, a strong organizational bias towards maintaining positive human control over nuclear weapons is likely to mitigate against any risks from adding automation. The track record of safety for nuclear operations might lead one to be less sanguine about the ability of bureaucracies to successfully manage the risks of automation, however. Examples like the Soviet Perimeter system, discussed below, also demonstrate that some nations are likely to view the risks and benefits of automation differently from others. What might lead to variation in how countries make choices about the relative utility of autonomous systems? The answer could depend on how secure they feel about their non-autonomous nuclear systems. States that feel extremely secure in their second strike capabilities at present may see fewer advantages to automation. In that case, the advantages – greater speed and precision – might not appear worth the potential risk of accidents. Instead, states like the United States would likely prefer to use existing non-autonomous systems for nuclear command and control and delivery. In contrast, countries whose nuclear arsenals are more insecure may be more accepting of risk and may find the perceived advantages of automation more valuable. If a country thinks that its nuclear command and control might be at risk of severe degradation or destruction, it might be more likely to automate early warning to increase its response speed, deploy autonomous nuclear delivery platforms with higher endurance, automate new aspects of target selection for nuclear delivery vehicles, or shift towards more automated nuclear launch postures. All may not happen in unison, of course, but as a general relationship, countries whose arsenals are more insecure may be more willing to take risks to better enhance their arsenal’s survivability.

Autonomous Systems and Nuclear Stability

This section outlines the three areas described in the introduction of the article – nuclear early
warning and command and control, nuclear delivery platforms and vehicles, and non-nuclear
autonomous systems – and how they could influence nuclear stability, particularly in crisis
situations.

Automation in Early Warning and Nuclear Command and Control

There are many places where automation is already used in early warning and nuclear command and control. Early warning systems rely heavily on automation to quickly warn human operators about potential inbound missiles. Command-and-control systems also have a number of automated functions, such as rapid retargeting or communication rockets to beam launch codes down to missile silos. Some forms of automation in early warning and nuclear command and control have been non-controversial. Others have generated significant controversy and have even been involved in near-accidents. The most famous automation-related near-accident is the 1983 Stanislav Petrov incident in which the Soviet Oko satellite-based early warning system registered a false alarm of five U.S. intercontinental ballistic missile (ICBM) launches. Lieutenant Colonel Stanislav Petrov was the watch officer on duty who was responsible for alerting Soviet leadership of a U.S. attack. Petrov has said that the automated alert system reported a missile strike with the “highest” confidence. Automated alerts included an audible siren and a large red backlit screen that flashed “launch” and “missile strike.” While these automated alerts serve the purpose of gaining human operators’ attention, they can also exacerbate the risk of automation bias. Petrov subsequently reported that he “had a funny feeling in my gut” and estimated the odds of the strike being real as 50/50. Petrov did not fall victim to automation bias and reported a system malfunction to his superiors, rather than reporting that a U.S. nuclear strike was underway. A related highly-controversial application of automation in NC2 is the use of “dead hand” systems that could launch a nuclear counterattack in the event that a nation’s leadership was wiped out by a surprise nuclear first strike. The concept of a dead hand “doomsday machine” was a plot point in Stanley Kubrick’s 1964 film Dr. Strangelove, but there are reports that the Soviet Union may have built a semi-automated dead hand system called “Perimeter.” According to primary source interviews with former Soviet officers, the system was intended to be activated in a crisis as a “fail deadly” mechanism to ensure retaliation against the United States in the event of a successful U.S. first strike. Specific accounts of Perimeter’s functionality differ, but the essential concept was a network of light, radiation, seismic, and pressure sensors that could detect any nuclear detonations in Soviet territory. According to accounts of the system, if it was activated and it sensed a nuclear detonation, it would check for communications to the Soviet military General Staff. If there was no order to halt, after a pre-determined period of time ranging from on the order of 15 minutes to an hour, the system would act like a “dead man’s switch” and transfer nuclear launch authority directly to human duty officers in an underground bunker. These individuals would then be tasked with launching communication rockets that would fly over Soviet territory, beaming down launch codes to hardened missile silos. A human would remain “in the loop” for the final decision to launch, but Perimeter would bypass normal layers of command and transfer authority directly to the watch officer on duty. If the Soviets did indeed decide to develop and deploy a dead hand system, they did so without telling their American counterparts. This would appear, on the face of it, counterproductive.
If the intent of a dead hand system was to enhance deterrence by ensuring an adversary that retaliation was certain, then keeping it a secret would appear to undermine the whole point of the system. According to reports from former Soviet officers, however, the point of Perimeter was not to change the decision-making calculus of U.S. leaders, but rather that of Soviet leadership themselves. A dead hand system was intended to take the pressure off of Soviet leaders to “use or lose” their nuclear weapons in the event of ambiguous warning of a U.S. surprise attack. The logic of the Soviet approach illustrates how differently countries may view the role of automation in nuclear command and control. These differences in perspective, in turn, may lead a nuclear-armed state to misestimate or misunderstand the risks an adversary is willing to run in order to fortify their nuclear deterrent, thereby increasing chances of accidental or inadvertent escalation.
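As a rough illustration of the Perimeter concept described in the accounts above, the reported logic can be sketched as a single decision function. Everything here (timings, sensor names, return strings) is an invented placeholder drawn only from those published accounts, not from any real implementation:

# Illustrative sketch of the *reported* Perimeter ("dead hand") logic.
# Every name, timing, and message is an invented placeholder.

def perimeter_cycle(activated: bool,
                    detonation_sensed: bool,        # light/radiation/seismic/pressure cues
                    general_staff_link_alive: bool, # can the General Staff still be reached?
                    minutes_since_detonation: float,
                    wait_minutes: float = 30.0) -> str:
    """One decision cycle. The machine never fires anything itself; at most it
    transfers launch authority to human duty officers in the bunker."""
    if not activated:
        return "DORMANT (armed by leadership only in a crisis)"
    if not detonation_sensed:
        return "MONITORING"
    if general_staff_link_alive:
        return "DEFER TO GENERAL STAFF (normal chain of command intact)"
    if minutes_since_detonation < wait_minutes:
        return "WAIT FOR A HALT ORDER"
    # no halt order within the pre-determined window: bypass normal command layers
    return "TRANSFER LAUNCH AUTHORITY TO HUMAN DUTY OFFICERS"

print(perimeter_cycle(activated=True, detonation_sensed=True,
                      general_staff_link_alive=False, minutes_since_detonation=45.0))

The point the sketch tries to make visible is the one in the text: a human stays in the loop for the launch itself, but the system strips away every intermediate layer of authority above the duty officer.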

There are other examples of automation in NC2 systems. Bruce Blair has discussed how automation was used in the United States and Soviet nuclear enterprise. For example, Moscow used an “automated broadcast network” to deliver battle orders in the case of a crisis or a first strike. In the United States, the Strategic Air Command Control System came online in 1968 to transmit Emergency Action Messages to U.S. forces in the field in case of a crisis. While the system had to be operated by a person, once activated and initiated, the system offered the United States the ability to transmit the message even if, subsequent to activation, U.S. command centers were destroyed. Russian missiles that were de-targeted following the end of the Cold War have been reportedly programmed to automatically revert to their wartime targets in the United States if launched without a flight plan. In the event of a deliberate decision by Russian leadership to launch, automation cut the time needed to re-target and launch all of their missiles to 10 seconds. Similarly, while U.S. missiles were reportedly set to targets in the middle of the ocean during peacetime, the entire U.S. missile arsenal could allegedly be retargeted in 10 seconds. The automation-enabled ability to rapidly retarget missiles undoubtedly was a factor in Russian and American leadership being comfortable with de-targeting their missiles. Even so, if a Russian missile set to automatically revert to wartime targets were launched accidentally or without authorization, it could spark nuclear war. Dead hand systems or rapid retargeting of a nation’s entire missile arsenal could thereby exacerbate the consequences of an accident or unauthorized use by making it easier for such events to automatically lead to catastrophe. In addition, automation of a state’s nuclear command-and-control systems could be used to enhance deterrence by effectively tying one’s hands, for instance, by communicating that any attack on a nation’s homeland defense systems would trigger nuclear escalation. That is, a nuclear-armed state could view an explicit threat of automated retaliation as useful for escalation management. Autonomous systems – in particular, the expanded automation of a state’s NC2 apparatus – could be used to increase uncertainty on the part of all involved in a conflict as to what it would take to trigger nuclear launch. A nuclear-armed state that is sufficiently risk tolerant and is confronted by a conventionally superior adversary may use this uncertainty to limit the scale or scope (i.e. geographic or targeting) of an attack on its interests. Credibly communicating such a threat to an adversary might be challenging, however. Given that automation resides in software, its effects can be difficult to demonstrate prior to crisis or conflict. If a state were to use automation to tie its hands but could not show that it had done so, it would be like “tearing out the steering wheel” in a game of chicken but being unable to throw it out the window. The net effect of automation in this instance would be to reduce flexibility and increase crisis instability.

Opportunity for Improved Reliability?

These challenges with existing automation of nuclear command and control illustrate the way
that automation can become a double-edged sword. Shortening the time and steps needed to
launch nuclear weapons can help buy more time for decision-makers to weigh ambiguous
information and make an informed judgment before acting. On the other hand, in the event of
accidents or misuse, there may be fewer steps and consequently fewer safeguards in place.
A critical question is thus how militaries will employ advances in AI to influence their early
warning and NC2 systems. There may be many places where militaries could employ new forms of autonomous systems to bolster the reliability and effectiveness of early warning and NC2. Human-machine teaming could help offset automation bias and thus enable the use of more autonomous systems. More advanced automation in nuclear early warning systems could allow greater situational awareness, reducing the risk of false alarms. It could also play a valuable role in helping human decision-makers process large amounts of information quickly. In this regard, automated data processing may play a critical role in helping human nuclear early warning operators to identify threats – and false cues – in an increasingly data-saturated and complex strategic environment. Increased automation in NC2 could also help to reduce the risk of accidents or unauthorized use. And an expanded role for automation in communications could help ensure that command-and-control signals reach their targets quickly and uncorrupted in highly contested electromagnetic environments. Automation could also be used to enhance defenses – physical or cyber – against attacks on nuclear early warning, command-and-control, delivery, and support systems, thereby enhancing deterrence and fortifying stability. It could also be used to bolster the resilience of vulnerable NC2 networks. For instance, long-endurance uninhabited aircraft that act as pseudo-satellites (“pseudo-lites”) to create an airborne communications network could increase NC2 resilience by providing additional redundant communications pathways in the event of satellite disruption. Automation could even enable autonomously self-healing networks in physical or cyberspace in response to jamming or kinetic attacks against command-and-control nodes, thereby sustaining situational awareness and command and control and enhancing deterrence. Many of these ways that autonomous systems could increase the resiliency and accuracy of NC2 are speculative, however. Existing automation, as the Petrov incident shows, already creates the risk of automation bias. Knowledge of this will probably make most nuclear-armed states unlikely to further automate the early warning or command-and-control processes, with two exceptions: first, in situations where human-machine teaming might be further integrated to mitigate potential false alarms; second, in situations where a state fears for its secure second strike, and believes that further automation would reinforce deterrence of a potential aggressor. It is also possible, though less likely, that more automation could occur via a highly risk-tolerant nuclear-armed state that believes automated NC2 protocols would improve its ability to manage escalation.

Strategic Decision Support Systems

Strategic decision support systems could also affect nuclear stability by influencing how policymakers perceive and react to nuclear or strategic non-nuclear threats. States have long relied on computational methods to better understand threat environments and design solutions to emerging or imminent national security problems. During the Cold War, for instance, the Kremlin tasked the KGB with developing a computer program named “VRYAN” (the Soviet acronym for “Surprise Nuclear Missile Attack”) that would track the U.S.-USSR correlation of forces and notify Soviet leaders when a preemptive nuclear strike would be required to prevent the United States from achieving decisive military superiority.

VRYAN’s role in providing strategic warning was tested during the Soviet “war scare.” As early as the late 1970s, Soviet leaders were increasingly concerned that the United States had abandoned détente and had instead committed itself to achieving decisive military superiority. These fears climaxed in 1983 during NATO’s annual command post exercise – codename “Able Archer 83” – with Moscow allegedly placing forces on higher readiness out of fear that the exercise was in fact the start of U.S. nuclear preemption. The data fed into VRYAN conformed to the leadership’s view that the United States was pursuing first-strike superiority. VRYAN’s assessments therefore reinforced Soviet leaders’ fears about the United States, driving a feedback loop. This loop may have been exacerbated by Soviet leaders’ predominately engineering backgrounds, which may have predisposed them toward viewing the program’s quantitative analysis as more credible than alternatives – a precursor to automation bias. This feedback loop amplified and intensified those perceived threats, rather than providing Soviet leaders with a clearer understanding of U.S. intentions. States’ reliance on computational models today may be growing due to the AI revolution. In 2014, Russia erected the National Defense Control Center (NDCC) in Moscow. One of the NDCC’s primary functions is information fusion in support of conventional and nuclear operations. The Russian government is simultaneously investing heavily in AI, in part to better analyze the large quantities of data being delivered to the NDCC and other agencies.
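The feedback loop described above can be made concrete with a toy simulation of my own (it is not a model of the actual VRYAN program): the current assessment shapes what gets collected, and the biased collection then inflates the next assessment, so the threat score drifts upward even when the underlying situation never changes. All numbers are invented.

# Toy illustration of an assessment-collection feedback loop.

import random

random.seed(0)

def collect_indicators(threat_score: float, n: int = 50) -> list:
    """Collection biased by the current score: the higher the perceived threat,
    the more analysts are tasked to look for (and find) threatening indicators."""
    bias = 0.5 + 0.4 * threat_score   # probability a report is scored "hostile"
    return [1 if random.random() < bias else 0 for _ in range(n)]

threat_score = 0.5                    # start from a neutral assessment
for year in range(1, 7):
    reports = collect_indicators(threat_score)
    observed_hostility = sum(reports) / len(reports)
    # the assessment is updated toward whatever the biased collection shows
    threat_score = 0.7 * threat_score + 0.3 * observed_hostility
    print(f"year {year}: threat score = {threat_score:.2f}")

Run for a few iterations, the score climbs from 0.5 toward roughly 0.8 purely because collection is tasked against the program's own prior output.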

As in the 1980s, these investments are occurring at a time when Russian leaders see their nation as increasingly insecure due to the U.S. pursuit of military advantage, and the Russian military is more seriously evaluating the strategic merits of preemptive strike operations. China is similarly investing in AI-enabled decision support systems. As Lora Saalman writes, Chinese officials fear that the People’s Liberation Army (PLA) would be unable to detect and counter a low-signature, prompt “bolt-from-the-blue” attack on its nuclear forces. This fear reflects a combination of the perceived inadequacy of Chinese early warning systems and advances in U.S. prompt strike capabilities. The threat of a successful disarming attack on Chinese nuclear forces has led Chinese officials to prioritize avoiding “false negatives” over “false positives.” That is, whereas U.S. officials are concerned firstly by the potential for a “false positive” – whereby early warning systems show incorrectly that an attack is underway – their Chinese counterparts are more concerned by the possibility that early warning systems will show that an attack is not underway, when in fact it is. China is investing heavily in AI-enabled decision support systems in part to help avoid false negatives by accelerating troops’ ability to identify and respond to a disarming attack. Officials’ strong public emphasis on AI-supported decision-making as a potentially decisive innovation suggests that they may be prone to automation bias. Chinese military-theoretical writing on information dominance and the notion of victory through scientific planning, each of which will rely on AI, according to researchers in China, also may make Chinese officials potentially susceptible to automation bias. Decision support systems are not inherently destabilizing. However, if states come to over-rely on these systems for strategic decision-making, this could undermine nuclear stability. This risk may grow, especially if decision-makers and their advisors believe that AI could serve as a panacea for the myriad informational problems (e.g. incomplete data or inadequate analysis) that have stymied their efforts at national defense over the years.

Moreover, AI-based decision-support systems may fail to deliver accurate information to
decisionmakers precisely at the moment they are needed: in a crisis. Automated decision-support systems, whether rule-based or using machine learning, are only as good as the data they rely on. Building an automated decision-support tool to provide early warning of a preemptive nuclear attack is an inherently challenging problem because there is zero actual data of what would constitute reliable indicators of an imminent preemptive nuclear attack. This is most acutely challenging when trying to warn against a “bolt-from-the-blue.” Intelligence services can track indicators of large-scale military mobilization. But these indicators cannot provide insight into the minds of senior decision-makers, who may not have yet made a decision whether to attack. Indeed, a nation readying its nuclear forces to launch a preemptive attack might appear similar to one preparing itself to “launch under attack” in response to what it perceives as indications that its adversary is preparing a nuclear first-strike. Automation could be valuable in allowing intelligence agencies to scan large swaths of data quickly for anomalous behavior at a scale and speed that would not be possible with human analysts.

This could be done through rule-based indicators where intelligence services set up the equivalent of automated alerts to warn when certain indicators are tripped. This signature-based approach is similar to how malware detection works today, where automation looks for known signatures of malicious software. Newer approaches using unsupervised machine learning can even assist in identifying anomalous activity when signatures are not yet known. These tools could be valuable in increasing the ability of intelligence services to track the digital footprint of military forces. However, compiling AI-based indicators into an assessment of the likelihood of a preemptive attack would be extremely difficult, as the 1983 Stanislav Petrov incident highlights. Humans can rely on a multitude of contextual factors to help interpret indicators and assess an adversary’s intent. Any rule-based system that attempted to make an assessment of the likelihood of an attack based on pre-specified indicators would be limited by the fact that the human analysts who write the rules cannot know in advance which indicators would reliably signal a nuclear attack. Machine learning-based systems would similarly lack sufficient data to learn the signatures of a preemptive attack and could, at best, only indicate that some behavior was outside the norm. Even simple alert systems can be problematic if the manner in which they convey information is overconfident about the interpretation of that data and encourages automation bias, such as the Soviet early warning system communicating to Petrov “launch” and “missile strike.” Automated systems that more directly conveyed the information actually measured (in that instance, “flash”) would decrease the risk of automation bias, by being more transparent to the human user. This tradeoff comes at a cost, however, as the human must take an additional step to interpret the data. Governments thus face tradeoffs not only in whether or not to use automated decision-support tools, but in how that information is conveyed to human leaders. Automated decision support tools could be stabilizing if they help decision-makers gain better insights into an adversary’s operations. This could help reassure leaders that an adversary is not planning an attack and could help make surprise attacks less feasible, reducing the incentives for preemption. On the other hand, false positives and automation bias could cause leaders to overreact to innocuous or ambiguous information, increasing instability. A major factor in how leaders calibrate such systems is likely to be their risk tolerance for false positives vs. false negatives. The more secure a country’s second strike capabilities, the less likely it may be to take excessive risk with automating command and control, because the consequences of a false negative would be relatively lower. A country confident in its ability to retaliate in response to a first strike should be, on average, more likely to calibrate in ways that do not over-rely on autonomous systems. These risks of using automated decision-support systems are compounded by the fact that leaders won’t have sufficient data about the systems’ performance in a crisis to calibrate their degree of trust in it. Even worse, the system may perform adequately in peacetime, causing leaders to be lulled into a false sense of security about the system’s reliability. Peacetime accuracy may cause leaders to place excessive faith in the systems’ abilities to accurately identify and recommend responses to emergent threats.
In this case, over time, leaders could become lulled into a sense of security about the efficacy of the system, even though they would have little actual data to support its value in warning of preemptive nuclear attack. Similar human-machine interaction failures have occurred in other settings where seemingly flawless performance leads humans to overtrust in automation, as in several fatal crashes involving Tesla autopilots. If a state does come to over-rely on AI-enabled decision support systems for strategic decision-making, then it may fall subject to many of the limitations demonstrated in the VRYAN case. For instance, biased instructions for data collection to feed AI-enabled decision support systems may drive feedback loops that reinforce preexisting fears and amplify international tensions, potentially to the point of nuclear escalation. The potential for such loops may be increased if leaders believe that AI cannot be biased, and so take less care to remove their own biases from the design and use of the systems.
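To ground the tripwire-and-calibration discussion above, here is a toy sketch of my own (not from the paper) of a rule-based indicator alert, together with the base-rate arithmetic that makes calibration so hard: when the event being warned about is vanishingly rare, even a very accurate alerting system produces almost nothing but false alarms. Every number and indicator name is invented for illustration.

# (1) signature-style alerting: flag any indicator that exceeds its threshold
def indicator_tripped(observed: dict, thresholds: dict) -> list:
    return [name for name, value in observed.items() if value >= thresholds[name]]

# (2) base-rate arithmetic: P(attack | alert) via Bayes' rule, with made-up numbers
p_attack = 1e-6                  # prior probability an attack is underway in a given hour
p_alert_given_attack = 0.99      # sensitivity of the warning system
p_alert_given_no_attack = 0.001  # false-alarm rate per hour

p_alert = p_alert_given_attack * p_attack + p_alert_given_no_attack * (1 - p_attack)
p_attack_given_alert = p_alert_given_attack * p_attack / p_alert

print(indicator_tripped({"missile_fueling_sites_active": 7},
                        {"missile_fueling_sites_active": 5}))
print(f"P(attack | alert) ~= {p_attack_given_alert:.4f}")  # ~0.001: almost all alerts are false

With these illustrative numbers, roughly 999 of every 1,000 alerts would be false, which is exactly the false-positive/false-negative trade-off the passage says leaders must consciously calibrate.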
 

Haldilal

We must live fighting, we must die fighting
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
Ya'll Nibbiars

The Need for the ARS.

The Indian nuclear command, control, and communications (NC3) system comprises many component systems that were designed and fielded during the Cold War – a period when nuclear missiles would have been launched from deep within Chinese territory, giving sufficient time to react. That era is over. Today, Chinese and Pakistani nuclear modernization is rapidly compressing the time Indian leaders will have to detect a nuclear launch, decide on a course of action, and direct a response.

Technologies such as hypersonic weapons, stealthy nuclear-armed cruise missiles, and weaponized artificial intelligence mean India's legacy NC3 systems may be too slow for the political leadership to make a considered decision and transmit orders. The challenges of attack-time compression present a destabilizing risk to India's deterrence strategy. Any potential for failure in the detection or assessment of an attack, or any reduction of decision and response time, is inherently dangerous and destabilizing.

If the ultimate purpose of the NC3 system is to ensure India's senior leadership has the information and time needed to command and control nuclear forces, then the penultimate purpose of a reliable NC3 system is to reinforce the desired deterrent effect. To maintain the deterrent value of India's strategic forces, India may need to develop something that might seem unfathomable – an automated strategic response system based on artificial intelligence.

Admittedly, such a suggestion will generate comparisons to Dr. Strangelove's doomsday machine, WarGames' War Operation Plan Response, and the Terminator's Skynet, but the prophetic imagery of these science fiction films is quickly becoming reality. A rational look at the NC3 modernization problem finds that it is compounded by technical threats that are likely to impact strategic forces. Time compression has placed India's senior leadership in a situation where the existing NC3 system may not act rapidly enough. Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place India in an impossible position.
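One way to picture the detect-decide-direct pipeline being proposed here is the rough sketch below. It is purely illustrative: every name, threshold, and response string is a hypothetical placeholder rather than a description of any actual Indian system, and it keeps a human veto in the loop, in line with the semi-automated framing earlier in this thread.

# Hypothetical sketch of a detect-decide-direct pipeline with a human veto.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    confirmed_sources: int     # independent sensors confirming a strike on NC3 nodes
    minutes_to_impact: float   # estimated time remaining before (further) impact

# "Predetermined response decisions" would be chosen in advance by the National
# Command Authority, not computed by the machine at the moment of attack.
PREDETERMINED_RESPONSES = {
    "unconfirmed": "RAISE ALERT STATE, DISPERSE FORCES, KEEP TRYING TO REACH THE NCA",
    "confirmed": "RELEASE PRE-SELECTED RETALIATION PACKAGE TO DESIGNATED COMMANDERS",
}

def decide(d: Detection, surviving_authority_veto: Optional[bool]) -> str:
    """Decide stage. surviving_authority_veto: True = halt ordered,
    False = release ordered, None = no surviving authority reachable."""
    if d.confirmed_sources < 2:
        return PREDETERMINED_RESPONSES["unconfirmed"]   # never act on a single source
    if surviving_authority_veto is True:
        return "HOLD - VETOED BY SURVIVING AUTHORITY"
    if surviving_authority_veto is None and d.minutes_to_impact > 5:
        return "HOLD - TIME REMAINS, CONTINUE SEEKING A HUMAN DECISION"
    return PREDETERMINED_RESPONSES["confirmed"]

print(decide(Detection(confirmed_sources=3, minutes_to_impact=7), surviving_authority_veto=None))

The design point of the sketch is that the machine only compresses the detection and dissemination steps; the retaliation package itself is pre-selected by the NCA, and any surviving human authority can still halt it.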

Threats Are the Problem

The compression of detection and decision time is not a new phenomenon. In the 1970s, Chinese bombers would have taken hours to reach India. With the advent of the missile age, that time was compressed to about 30 minutes for a land-based intercontinental ballistic missile and about 15 minutes for a submarine-launched ballistic missile. These technologies fostered the development of both space-based and underwater detection and communication, as well as advanced over-the-horizon radar. Despite this attack-time compression, Indian officials remained confident that India's senior leaders could act in sufficient time. India believed the Chinese would be deterred by its ability to do so.

However, over the past decade China has vigorously modernized its nuclear arsenal, with a particular emphasis on developing capabilities that are difficult to detect because of their shapes, their materials, and the flight patterns they will take to Indian targets. Examples of these systems include the JL-3, hypersonic cruise missiles, and the DF-17 hypersonic glide vehicle, all of which have the potential to negate India's NC3 system before it can respond. This compression of time is at the heart of the problem. India has always expected to have enough time to detect, decide, and direct. Time to act can no longer be taken for granted, nor can it be assumed that the Chinese, or Pakistan for that matter, will act tactically or strategically in the manner expected by India. In fact, policymakers should expect adversaries to act unpredictably. Neither the Indian intelligence community nor outside analysts predicted the Chinese invasion of Tibet, among other Chinese acts of aggression. The Chinese, to their credit, are adept at surprising India on a regular basis.

These new technologies are shrinking India's senior-leader decision time to such a narrow window that it may soon be impossible to effectively detect, decide, and direct nuclear forces in time. In the wake of a nuclear attack, confusion and paralysis by information and misinformation could occur when the NC3 system is in a degraded state. Understanding the new technologies that are reshaping strategic deterrence is instructive.

Two types of nuclear-armed hypersonic weapons have emerged: hypersonic glide vehicles and hypersonic cruise missiles. Rich Moore, a RAND Corporation senior engineer, notes, "Hypersonic cruise missiles are powered all the way to their targets using an advanced propulsion system called a scramjet. These are very, very fast. You may have six minutes from the time it's launched until the time it strikes." Hypersonic cruise missiles can fly at speeds of Mach 5 and at altitudes up to 100,000 feet.

Hypersonic glide vehicles are launched from an intercontinental ballistic missile and then glide through the atmosphere using aerodynamic forces to maintain stability, flying at speeds near Mach 20. Unlike ballistic missiles, glide vehicles can maneuver around defenses and avoid detection if necessary, disguising their intended target until the last few seconds of flight – a necessary capability as nations seek to develop ever better defenses against hypersonic weapons.
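A quick back-of-the-envelope calculation makes the time compression above concrete. The ranges below are assumed purely for illustration (they are not claims about any particular deployment), and Mach 1 is crudely approximated as 0.343 km/s:

# Rough flight-time arithmetic for the attack-time compression argument.

MACH_KM_S = 0.343  # crude sea-level approximation of Mach 1 in km/s

def flight_time_minutes(range_km: float, mach: float) -> float:
    return range_km / (mach * MACH_KM_S) / 60

for label, rng, mach in [
    ("subsonic cruise missile, 1,000 km", 1000, 0.8),
    ("hypersonic cruise missile, 1,000 km", 1000, 5),
    ("hypersonic glide vehicle, 3,000 km", 3000, 20),
]:
    print(f"{label}: ~{flight_time_minutes(rng, mach):.0f} min")

At these assumed ranges, warning time shrinks from roughly an hour for a subsonic cruise missile to under ten minutes for the hypersonic cases, which is the window the entire detect-decide-direct cycle would have to fit inside.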

In addition to the hypersonic cruise missile threat, there is the proliferation of offensively postured, nuclear-armed, low-observable cruise missiles. Whereas the hypersonic cruise missile threat is looming because adversary systems are still in the developmental stage, low-observable cruise missiles are here, and the Chinese understand how to employ these weapons on flight paths that are hard to track, which makes them hard to target. Land-attack cruise missiles are a challenge for today's detection and air defense systems. Cruise missiles can fly at low altitudes, use terrain features, and fly circuitous routes to a target, avoiding radar detection, interception, or target identification. Improved defensive capabilities and flight paths have made low-observable or land-attack cruise missiles (LACMs) even less visible. They can also be launched in a salvo to approach a target simultaneously from different directions.

The use of automation in NC3 systems is not entirely new. Beginning in the 1960s, the United States and the Soviet Union both pursued automation for threat detection, logistical planning, message handling, and weapon-system guidance. Sometime in the late 1980s, the Soviet Union developed and deployed the Perimeter system.
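
Public accounts of Perimeter are sparse and partly speculative, but the logic usually described is a conditional fail-deadly gate: retaliation authority devolves only if the leadership has pre-activated the system, nuclear detonations are actually sensed, and contact with the national command has been lost for some interval. The sketch below is a toy model of that reported logic only; every threshold, sensor count, and field name is a made-up placeholder, and the point is that such automation acts as a gate rather than a hair trigger.

```python
from dataclasses import dataclass

# Toy model of the *reported* Perimeter-style fail-deadly gate. All fields,
# thresholds, and behaviours here are hypothetical placeholders.

@dataclass
class StrategicPicture:
    predelegation_active: bool       # leadership armed the system during a crisis
    detonations_confirmed: int       # independent sensors reporting nuclear detonations
    minutes_without_leadership: int  # time since last authenticated leadership contact

def retaliation_enabled(picture: StrategicPicture,
                        min_detonations: int = 2,
                        comm_loss_threshold_min: int = 30) -> bool:
    """Authority devolves only if ALL conditions hold simultaneously."""
    return (picture.predelegation_active
            and picture.detonations_confirmed >= min_detonations
            and picture.minutes_without_leadership >= comm_loss_threshold_min)

# Detonations sensed but leadership still reachable: no devolution.
print(retaliation_enabled(StrategicPicture(True, 3, 5)))    # False
print(retaliation_enabled(StrategicPicture(True, 3, 45)))   # True
```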

Options for Escaping the Dilemma

There are three primary options for escaping the dilemma presented here. First, India could refocus its nuclear modernization effort on fielding a much more robust second-strike capability, one that allows it to absorb an unexpected first strike before deciding on a response. This option poses a myriad of ethical and political challenges, including accepting the deaths of many Indians in the first strike, the possible decapitation of India's leadership, and the likely degradation of India's nuclear arsenal and NC3 capability. However, a second-strike-focused deterrent could also dissuade an adversary from concluding that the threats discussed above give a first strike enough of an advantage to be worth the risk.

Second, nuclear modernization could focus on improvements to pre-launch strategic warning, such as better surveillance and reconnaissance, as part of a larger preemption strategy. This approach would also require a damage-prevention or damage-limitation first-strike policy that allowed the prime minister to order a nuclear attack on the basis of strategic warning. Such an approach would be controversial, but it could deter an adversary from approaching India's perceived red lines.

Refocusing on strategic warning, specifically all-source intelligence indicating that an adversary is preparing to attack India, would necessarily be accompanied by a policy of preemptive attack. In essence, once intelligence revealed that India was facing an imminent attack, “kill or be killed” would become the new motto of its nuclear forces. Absent sufficient time to detect the launch of an adversary’s weapons, decide on a response, and then direct a retaliatory strike, preemption may be the only viable policy for saving Indian lives. This approach to the use of nuclear weapons is antithetical to Indian values, but if the alternative is the destruction of Indian society, preemption may be the more acceptable option.

Third, nuclear modernization could focus on compressing the time available to an adversary to detect, decide, and direct, in an effort to force that adversary to back away from destabilizing actions and come to the negotiating table. Such a strategy is premised on the idea that mutual vulnerability makes the developing strategic environment untenable for both sides and leads to arms control agreements specifically designed to walk adversaries back from fielding first-strike capabilities. The challenge is that if a single nuclear power (China, for example) refuses to participate, arms control becomes untenable and a race for first-strike dominance ensues.

There is a fourth option: India could develop an NC3 system based on artificial intelligence. Such an approach could overcome the attack-time compression challenge.
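
What "based on artificial intelligence" might mean in practice is open to interpretation. One conservative reading is automated sensor fusion that compresses the detect-and-assess stages while keeping any launch decision gated on corroboration across independent sensor types and on a human in the loop. The sketch below is a minimal, purely illustrative pipeline under those assumptions; the sensor names, confidence scores, and thresholds are invented.

```python
from dataclasses import dataclass
from enum import Enum

# Minimal, illustrative sketch of automated attack assessment that compresses
# detection and assessment while keeping release authority with humans.
# Sensor types, scores, and thresholds are invented for the example.

class Phenomenology(Enum):
    INFRARED_SATELLITE = "infrared satellite"
    GROUND_RADAR = "ground radar"
    OTH_RADAR = "over-the-horizon radar"

@dataclass
class Track:
    sensor: Phenomenology
    confidence: float  # 0.0 to 1.0, as scored by that sensor's classifier

def assessed_attack(tracks: list, threshold: float = 0.9) -> bool:
    """Declare an attack only if at least two independent sensor types
    report high-confidence tracks (a dual-phenomenology rule)."""
    confident_types = {t.sensor for t in tracks if t.confidence >= threshold}
    return len(confident_types) >= 2

def recommend(tracks: list) -> str:
    if assessed_attack(tracks):
        # The system only alerts and pre-stages options; release authority
        # remains with the national leadership.
        return "ALERT: corroborated attack assessment, notify leadership for decision"
    return "MONITOR: insufficient corroboration"

print(recommend([Track(Phenomenology.INFRARED_SATELLITE, 0.97)]))
print(recommend([Track(Phenomenology.INFRARED_SATELLITE, 0.97),
                 Track(Phenomenology.GROUND_RADAR, 0.93)]))
```

The design choice worth noting is that the automation buys back minutes in detection and assessment rather than in the decision itself, which is where the debate over an ARS really lies.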
 
Last edited:

Okabe Rintarou

Senior Member
Joined
Apr 23, 2018
Messages
2,337
Likes
11,988
Country flag
Imagine if we had such a system deployed during the Ladakh Stand off. Would the Chinese would have been so balant and aggressive?.
Just imagine us passing them the message over the hotline that our Deadman's Hand is active and unless our guy in The Bunker presses the keys to delay the countdown again, the nukes will fly. Xi would $hit his pants.
 

porky_kicker

Senior Member
Joined
Apr 8, 2016
Messages
6,023
Likes
44,574
Country flag
Ya @porky_kicker Your Thought's.
AFAIK, based on very old information, there is a list of people who can authorise launch orders on the basis of a tiered hierarchy. In case the top of the hierarchy is eliminated, those next in line access a designated safe where sealed instructions are kept; they open it and follow the instructions given there. In case they too are eliminated, those next in line open their own safe and access its instructions. How many tiers there are, I don't know.

This is all I can say with some guarantee. Whether the same protocol is still being followed now, I don't know.
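
If the tiered arrangement described above is roughly right, the underlying logic is just an ordered succession list: authority passes to the highest surviving tier, and that tier opens its own sealed instructions. A toy sketch of that idea follows; the tiers, office-holders, and safe contents are entirely hypothetical.

```python
from typing import Optional

# Toy sketch of a tiered succession chain for launch authorisation, as
# described above. Every tier, holder, and safe reference is hypothetical.

SUCCESSION = [
    {"tier": 1, "holder": "Prime Minister",        "safe": "sealed_instructions_T1"},
    {"tier": 2, "holder": "Designated Minister A", "safe": "sealed_instructions_T2"},
    {"tier": 3, "holder": "Designated Minister B", "safe": "sealed_instructions_T3"},
]

def acting_authority(survivors: set) -> Optional[dict]:
    """Return the highest surviving tier; that tier opens its sealed safe."""
    for entry in SUCCESSION:
        if entry["holder"] in survivors:
            return entry
    return None  # chain exhausted within the known tiers

who = acting_authority({"Designated Minister B"})
print(who["safe"] if who else "no surviving tier")  # -> sealed_instructions_T3
```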
 

Haldilal

लड़ते लड़ते जीना है, लड़ते लड़ते मरना है
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
AFAIK, based on very old information, there is a list of people who can authorise launch orders on the basis of a tiered hierarchy. In case the top of the hierarchy is eliminated, those next in line access a designated safe where sealed instructions are kept; they open it and follow the instructions given there. In case they too are eliminated, those next in line open their own safe and access its instructions. How many tiers there are, I don't know.

This is all I can say with some guarantee. Whether the same protocol is still being followed now, I don't know.
Ya'll Nibbiars, yeah, but anyway. An ARS reduces the response time, and given hypersonic cruise missiles and stealthy cruise missiles, we need to adapt. An ARS makes more sense than ever.

And considering the Chinese, a more aggressive nuclear posture is required. An ARS fits here perfectly, even without giving up the No First Use policy.
 
Last edited:

Haldilal

लड़ते लड़ते जीना है, लड़ते लड़ते मरना है
Senior Member
Joined
Aug 10, 2020
Messages
29,498
Likes
113,311
Country flag
Ya'll Nibbiars, at least officially India is not going to give up the No First Use policy. It may be softened, but not fully dropped. In that case an ARS becomes a necessity for our second-strike capability. The submarine fleet, growing from 4 to 7 boats, will make our deterrence even more effective.
 

asaffronladoftherisingsun

Dharma Dispatcher
Senior Member
Joined
Nov 10, 2020
Messages
12,207
Likes
73,688
Country flag
Meanwhile....

Dyatlov: Raise the power.
@Haldilal : It isnt safe
Dyatlov : Safety first, always, I've been saying that for 25 years.
Now raise the power.
@Haldilal : (Raises power)
Reactor: Boom
Dyatlov: What did you DO?
Fomin: "I apologise for this unsatisfactory result."
Akimov : It Exploded! The Core exploded! I see graphite in this thread.
Dyatlov :What??? There aint no Graphite in this thread. rbmk cores dont explode (Also pukes on the table).
 
Last edited:

Marliii

Better to die on your feet than live on your knees
Senior Member
Joined
Nov 22, 2020
Messages
5,551
Likes
34,028
Country flag
Meanwhile....

Dyatlov: Raise the power.
@Haldilal : It isnt safe
Dyatlov : Safety first, always, I've been saying that for 25 years.
Now raise the power.
@Haldilal : (Raises power)
Reactor: Boom
Dyatlov: What did you DO?
Fomin: "I apologise for this unsatisfactory result."
Akimov : It Exploded! The Core exploded! I see graphite in this thread.
Dyatlov :What??? There aint no Graphite in this thread. rbmk cores dont explode (Also pukes on the table).
HBO's Chernobyl.
 

DocK

Regular Member
Joined
Feb 16, 2019
Messages
269
Likes
1,487
Country flag
Guys, correct me if I am wrong. I had read that India's NFU comes with riders, especially one stating that if a nuclear-capable missile is launched at us, it will be treated as a nuclear attack and met with full-spectrum retaliation. That makes me believe we are already keeping our eyes open for any kind of launch from both our untrustworthy neighbours.

Just my opinion.
 
