Whether a technology does good or harm is determined by how we humans use it. Technology itself is neither good nor bad. General-purpose technologies like electricity and computing have been put to both good and bad uses, but they have undeniably helped us advance science and understand nature. We have had electricity and computing for some time now. What about the technologies of the future, already showing up here and there in concepts and early products?

Let’s consider two such technologies in this article: quantum computing and autonomous weapons.

Quantum computing

A quantum computer (QC) is not an enhanced classical computer. It is a different type of computing architecture, one that performs computation through the laws of quantum mechanics. While classical computers are built on “bits” – units of data holding a single binary value, either zero or one, which make up all apps and websites and their content – quantum computers use the quantum states of subatomic particles such as electrons or photons. These are called “qubits” (quantum bits), and they give quantum computing two super-processing capabilities that classical computers do not possess.
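To make the contrast concrete, here is a minimal sketch in plain Python (no quantum hardware or library involved) of the standard mathematical picture: a qubit as a pair of complex amplitudes rather than a single 0/1 value. The variable names are illustrative, not from any real quantum SDK.

```python
import math

# A classical bit is 0 or 1; a qubit is a pair of complex "amplitudes".
zero = (1 + 0j, 0 + 0j)              # the state |0>
one  = (0 + 0j, 1 + 0j)              # the state |1>

# An equal superposition of |0> and |1> — the qubit is "both at once":
plus = tuple((a + b) / math.sqrt(2) for a, b in zip(zero, one))

# Measuring collapses the qubit to 0 or 1; each outcome's probability
# is the squared magnitude of its amplitude.
probs = [abs(a) ** 2 for a in plus]
print(probs)  # ~[0.5, 0.5]
```

Until it is measured, the qubit carries both amplitudes, which is what the next section's "super-processing" capabilities exploit.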

One of these capabilities, superposition, enables a QC to process a huge number of possible outcomes simultaneously. It is possible because a qubit can be in multiple states at any given time. The other super capability is entanglement: two qubits can remain connected so that every action performed on one affects the other regardless of the distance between them, and together with superposition this lets a quantum machine's computational state space double with every added qubit. In 2019, Google demonstrated that its Sycamore processor, using 53 qubits, could solve in about 200 seconds a sampling problem Google estimated would take the fastest classical supercomputer about 10,000 years.
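Both effects can be illustrated with a classically simulated two-qubit "Bell state", the textbook example of entanglement. This is a stdlib-only sketch of the mathematics, not real quantum execution; note how an n-qubit register needs 2**n amplitudes, which is the doubling mentioned above.

```python
import math
import random

# Bell state (|00> + |11>) / sqrt(2): amplitudes for the four basis
# states |00>, |01>, |10>, |11> of a two-qubit register.
# (An n-qubit register needs 2**n amplitudes — doubling per added qubit.)
amp = 1 / math.sqrt(2)
bell = [amp, 0.0, 0.0, amp]
probs = [a * a for a in bell]

random.seed(0)
outcomes = random.choices(range(4), weights=probs, k=10_000)

# Only |00> (index 0) and |11> (index 3) ever occur: measuring one
# qubit fixes the other's value, whatever the distance between them.
assert set(outcomes) == {0, 3}
```

The correlated outcomes are what "every action performed on one affects the other" means in practice: the two qubits never disagree, even though each individual result is random.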

As of 2020, the largest QCs had only around 65 qubits (IBM's Hummingbird processor). The issue is the extreme sensitivity of quantum computing to even small disturbances in the computer itself and its surroundings. Electrical interference, temperature fluctuations and other environmental changes destroy quantum coherence – an effect called “decoherence.” To prevent that, researchers need to house QCs in near-perfect vacuum chambers and use superconductors and supercooling refrigerators that keep the chips near absolute zero.

Cost is not the only challenge in the development of quantum computing. The larger the number of qubits, the harder it is to control the errors caused by decoherence, and correcting those errors requires each logical qubit to be represented by many physical ones to ensure stability (e.g. around a million physical qubits might be required to deliver the performance of 4,000 logical qubits). The quantum computing process itself is challenging too, because a different architecture requires different algorithms and software tools that we have not yet developed.
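The logical-versus-physical overhead can be felt with a deliberately simplified classical analogue: a 3-bit repetition code, where one "logical" bit is stored as three noisy "physical" bits and recovered by majority vote. Real quantum error correction (e.g. surface codes) is far more involved, but the trade is the same: redundancy buys reliability.

```python
import random

def encode(bit):
    # One "logical" bit stored redundantly as three "physical" bits.
    return [bit] * 3

def noisy(bits, p):
    # Each physical bit flips independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit if at most one copy flipped.
    return int(sum(bits) >= 2)

random.seed(1)
p, trials = 0.05, 100_000
raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
print(raw_errors / trials, coded_errors / trials)
```

With a 5% flip rate, the encoded error rate drops to roughly 3p² ≈ 0.7% – at the price of tripling the hardware, which is the qualitative reason thousands of logical qubits demand on the order of a million physical ones.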

Applications of QC

All these challenges will take years to overcome; however, it is possible that a functional 4,000-qubit QC will exist by 2041. Let's imagine that it is already 2045 and such QCs exist. How can they be used?

QCs will be able to analyze multiple molecules at the same time and model complex natural phenomena. This could make them very efficient in a crucial industry we all depend on – drug discovery. Other key areas where QCs' super-processing capabilities could be applied include security, climate change action, pandemic risk prediction, space exploration, modeling the human brain, and understanding quantum physics itself.

Let's focus on security. Quantum computers would be able to break cryptography that classical computers find impregnable, such as RSA and the elliptic-curve signatures (ECDSA) used to secure Bitcoin transactions. These are asymmetric cryptography schemes that use two keys – one public, one private – to ensure the security of the transactions. The keys are long sequences of characters that are mathematically related, and a sufficiently powerful QC could derive the private key from a public key.
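Why does quantum speed translate into broken keys? Shor's algorithm reduces factoring (the hard problem behind RSA) to finding the period of modular exponentiation, a step a QC can do exponentially faster. The sketch below finds the period by brute force – the part a quantum computer would accelerate – and then completes Shor's classical post-processing to factor the toy number 15:

```python
from math import gcd

def find_period(a, N):
    # Smallest r > 0 with a**r % N == 1. Brute force here; this is
    # exactly the step Shor's algorithm performs exponentially faster.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    r = find_period(a, N)
    if r % 2:
        return None          # need an even period; retry with another a
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if 1 < p < N and p * q == N:
        return p, q
    return None

print(shor_factor(15, 7))  # (3, 5)
```

For a 2,048-bit RSA modulus, the brute-force loop above is hopeless on classical hardware; with enough stable logical qubits, the period-finding step becomes tractable, and the rest of the attack is ordinary arithmetic.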

The current address format for bitcoins, called P2PKH, does not reveal the public key until the coins are spent, but in Bitcoin's early years (up to 2010) transactions exposed public keys directly. The system was designed to resist corporate and government control, but this security issue soon became obvious. Even so, about 2 million bitcoins are still stored in the old P2PK format, collectively worth $120 billion (at the January 2021 price per bitcoin). A quantum algorithm capable of breaking this cryptography – Shor's algorithm – already exists on paper, which puts those 2 million bitcoins at risk of a heist once sufficiently large QCs are built.

As usual, development of one application of a technology leads to development of another. QCs can not only break cryptography but also build it: quantum key distribution derives its security from the laws of physics rather than from computational difficulty, so a key exchanged this way would be impregnable even to other QCs.
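A flavor of how quantum key distribution works can be given with a classical simulation of the BB84 protocol's happy path (no eavesdropper). Alice encodes random bits in random bases, Bob measures in his own random bases, and the two keep only the positions where their bases happened to match. This is an illustrative stdlib sketch, not a secure implementation.

```python
import random

random.seed(7)
n = 32

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
bob_bases   = [random.randint(0, 1) for _ in range(n)]

# With no eavesdropper, Bob's measurement matches Alice's bit whenever
# his basis matches hers; a mismatched basis yields a random result.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (never bits) and keep matching positions.
alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
assert alice_key == bob_key   # identical shared secret key
```

The physics does the security work: an eavesdropper who measures the qubits in transit disturbs them (by the no-cloning theorem), raising the error rate between the two keys and revealing the intrusion – something no amount of computing power, quantum or classical, can avoid.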

Autonomous weapons

Autonomous weapons are not yet a sizable part of modern armies' arsenals, but they are already present and delivering results. Well-known examples include Israel's Harpy drone, which strikes with precision only the targets programmed into it, and the “Slaughterbot” – a small-drone concept popularized by a 2017 advocacy film – which is notoriously cheap to build, at around $1,000.

The use of autonomous weaponry now and in the future carries a range of benefits and liabilities.

The most obvious “advantage” autonomous weapons could provide is that, if wars are fought by machines, soldiers will not lose their lives. If wars are fought by machines and humans together, these weapons could help with precise targeting and prevent friendly fire and civilian casualties. And, as with any arms, autonomous weapons could be used for protective purposes as well, making defense stronger and more efficient.

The main liability is the moral one: the taking of a human life, which ethical systems all over the globe treat as a crime and a sin, excusable only in extraordinary circumstances. Next, we need to consider the moral cost of killing. If the act is carried out by an intelligent machine, that cost is lowered – and accountability becomes vague. What if an error occurs? Who will be held responsible: the engineer, the manufacturer, the ministry of defense?

The qualities that make autonomous weaponry so effective – precision and power – can also lead to dire consequences if it is used for assassinations or genocide. And as the “Slaughterbot” example suggests, production of autonomous weapons could be cheap; with information freely available online, anyone could build small deadly machines outside of any government's control.

When talking about weapons, we inevitably come to the topic of an arms race. Autonomous weaponry is unlikely to be the exception, and deterrence theory does not apply to it, because cheap, hard-to-attribute autonomous weapons carry no threat of mutually assured destruction (MAD).

Three solutions have been circulated so far as ways to prevent a possible catastrophe. First, the “human-in-the-loop” approach could be applied to ensure that lethal decisions are always made by humans; its disadvantages are the difficulty of enforcement, governments' likely unwillingness to adopt it, and the ease with which loopholes can be found. Second, countries could impose a ban like those on chemical and biological weapons; third, regulations could be established for autonomous weaponry. But both of these solutions require global consensus – a hard thing to achieve.

If we consider this issue in the long term, it is still unclear how future wars will be fought – by robots, by humans, or by a combination of both – and what part, if any, human soldiers will play in them.

Sean Promsopeak Nuon
Lead engineer
Sean is technology-driven and passionate about working with technology that helps people. He is now an executive member of Slash, running the technology side of the operation from an entrepreneurial point of view. He has over 9 years of experience dealing with technical problems, project management and team mindset building. He splits his time between serving as Solution Architect & Lead Developer for enterprise clients and, as part of the management team, helping to build future-proof architecture and define quality standards, team culture, and hiring & training practices.