Quantum revolution or militaristic bubble?

The world's press is buzzing with news of the supposedly infinite possibilities of quantum computers, on a scale only comparable to the attention given to nuclear fusion. Celebratory articles exclaim that these computers will enable new processes for producing fertilizers, solving climate change and finding new drugs. Meanwhile, supposedly serious publications claim that scientists at Google have created nothing less than a wormhole using a few dozen qubits. Add to all this last week's news that Chinese engineers have developed a method to break one of the most widely used encryption keys, and you have the breeding ground for a media bubble chock-full of sensationalist hype. Is there any truth in all these claims? What is behind the media hype circus? And what do these computers offer to humanity?
"Quantum" hype
Chinese quantum computer.
For more than a year, newspaper articles on quantum computing have been piling up. The media amplify the exaggerations of large companies and states, and if one were to trust what they say, quantum computing:
Changes everything, since it reinvents computing from the ground up.
In the real world, this is not true. Quantum computers are not a total reinvention of computing; in fact, they do not even fit what most people understand by "computer". No one is going to own a quantum PC or CPU: these are ultra-specialized machines that are only good, potentially, at a much narrower set of tasks than a classical computer.
The leading figures of quantum computing have been complaining for years about the hype surrounding these machines, a hype they themselves consider dangerous because of how misleading it is. It has even managed to convince some scientists of the supposed immediate viability of quantum computing. As an interviewer recounted a month ago, David Deutsch put it like this:
When I recently asked David Deutsch, the visionary physicist who in 1985 laid out what quantum computing might look like, whether he was surprised at how quickly the idea became a practical technology, he replied with characteristic terseness: ‘It hasn’t.’
And Peter Shor, whose important algorithms we will mention later in the article, summarized the scientific importance of the results announced by Google:
The theory tested on the Google processor only has a very tangential relationship to any possible theories of quantum gravity in our Universe.
Scott Aaronson, a well-known professor of quantum computing, was even more blunt:
Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!)—a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.
I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.
The weariness and even anger of scientists is understandable. The quantum race has become a pure spectacle of outlandish and misleading claims. In today's academic physics world, it is considered normal, even necessary, to upload a version of an article to a public repository (usually arXiv) before submitting it to a journal, which allows some prior discussion of its content within the community. Curiously, this article, supposedly so important, skipped that step and was published by surprise. Not only that: the New York Times had been preparing a splashy story for over a month to inflate the hype.
From creating "hype" to attract capital to wielding "fake news" as a trade-war weapon
Core of Google's quantum computer. The US is falling behind in the quantum race.
No one harbors much hope for the mainstream press, but when a supposedly serious journal blurted out that the experiment had created, rather than simulated, a wormhole, and insisted that this was as important as the confirmation of the existence of the Higgs boson, everyone suspected a shameless campaign by Google to raise capital. It would not be the first time.
A second wave of consternation came when the US Department of Energy jumped on the wormhole bandwagon and began spreading the fake news of its creation at Google Labs.
The reality is that Chinese quantum computers are capable of running quantum algorithms two or three times faster than Google's Sycamore, although for some mysterious reason these machines are never mentioned in the press of the US bloc, which only has eyes for Google's and IBM's computers. The US needs something that can give it a technological edge and serve as a kind of "moonshot" to hold over China's head at all costs.
But China is hardly lagging behind when it comes to misleading publications. A few days ago, a Chinese paper was uploaded to arXiv claiming that the most common encryption keys could be broken with an algorithm running on quantum computers of current size (in number of qubits). Initially, this caused concern in cybersecurity circles, but a careful reading of the text eventually showed that the article does not demonstrate any feasibility, merely stating that it might work… somehow. In fact, the algorithm they suggest for factoring the keys is not even quantum; the quantum part is a dodgy add-on applied to that classical algorithm. Of course, the same press that showed no critical thought with respect to the Google paper (even though it was published in a journal) was quick to denounce the Chinese scientists' paper as a fraud, even though it is an unpublished preprint that has not been peer-reviewed.
But apart from the media hype, what is going on? Does the answer have something to do with the strangeness of studying cosmological phenomena using a table-top quantum experiment?
Not at all: studying black holes in particular through analogies in atomic systems is precisely one of the most interesting fields of experimental physics. That, in itself, is not the problem.
Two ways of understanding Physics
One of the main problems of current cosmology is the paradoxical existence of black holes. On the one hand, black holes grow by accumulating matter that cannot escape again; on the other, they have a temperature and emit particles of light by severely distorting the space around them, the famous Hawking radiation.
The energy of these particles must come from the mass of the black hole itself. The mass problem can still be reconciled with quantum mechanics and gravity; the real puzzle is that the properties of the light emitted by black holes are completely independent of what has fallen into them. And this breaks a universal conservation law: it is precisely as if entropy worked in reverse inside black holes. There are several problems with testing Hawking radiation, the first of which is that this light is produced by the deformation of space itself around the hole and therefore has an enormous wavelength, several times the diameter of the collapsed star. That is, it is ultraweak and cannot be observed directly.
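As a rough orientation (a standard back-of-the-envelope estimate, not a figure taken from any of the experiments discussed below), the temperature of this radiation and the wavelength it implies can be written out explicitly:

```latex
% Hawking temperature of a black hole of mass M
T_H = \frac{\hbar c^{3}}{8\pi G M k_B}
% Wien's law gives the peak wavelength of the (nearly thermal) emission
\lambda_{\mathrm{peak}} \approx \frac{b}{T_H}, \qquad b \approx 2.9\times10^{-3}\ \mathrm{m\,K}
% Compared with the Schwarzschild radius r_s = 2GM/c^{2}:
\frac{\lambda_{\mathrm{peak}}}{r_s} = \frac{4\pi b\, k_B}{\hbar c} \approx 16
```

The peak wavelength comes out at roughly sixteen Schwarzschild radii, that is, several times the diameter of the hole, and for a stellar-mass black hole the corresponding temperature is far below that of the cosmic microwave background: far too faint to detect directly.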
Here, in the most unexpected place, is where the power of table-top experiments comes to the fore. A research program (curiously, a Chinese one) cleverly uses the emergent properties of matter to create analogs of black holes and Hawking radiation inside materials.
How? The textbook picture of particle physics as a collection of particles with well-defined, essential properties is misleading; several of the properties that particles seem to exhibit are not intrinsic to them at all.
Take mass, for example. We would never find a sub-particle of mass inside an electron. In fact, we could never account for its mass if we treated the electron as an entity isolated from its environment. Even in the most absolute vacuum (which is not really empty at all) the electron is constantly interacting with the fields surrounding it, and it is these interactions that give it mass. The phenomenon should be familiar to our readers.
In other words, if we change the properties of the material within which electrons move, we can change the effective mass of these charged particles. In metals with heavy atoms, electrons can become more than a thousand times more massive than in a vacuum, while in certain materials such as graphene or black phosphorus the mass of electrons drops to zero under certain energy conditions.
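For readers who want the standard condensed-matter formulation behind this claim (our own addition, not spelled out in the research program itself): the "mass" in question is the effective mass, fixed by the curvature of the energy band the electron occupies:

```latex
% Effective mass from the curvature of the band E(k)
m^{*} = \hbar^{2}\left(\frac{d^{2}E}{dk^{2}}\right)^{-1}
% Nearly flat bands (heavy-fermion metals): tiny curvature, m^{*} up to \sim 10^{3}\, m_e
% Graphene near its Dirac points: linear dispersion E \approx \hbar v_F |k|,
% so the curvature term diverges away the mass and the carriers behave as massless
% particles moving at an effective "speed of light" v_F \approx c/300
```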
Such massless electrons are not only equivalent to the massless photons of Hawking radiation; the effective "speed of light" they obey is much lower, and by playing with the material's properties one can create analog black holes into which electrons can fall but from which they cannot escape. The effect of these black holes on the fields around them is equivalent to that of stars on the space around them, and they should emit the equivalent radiation, but in an electrically measurable and experimentally adjustable form.
Did the Google scientists do something similar? No, they did something completely different in principle. The procedure described above is what an experimental physicist would do, but the Google experiment came from the more speculative branch of physics.
Starting with Hawking himself, attempts to solve the paradox from a purely theoretical standpoint have produced a list of increasingly wild solutions: from the existence of parallel universes, through black holes made of strings, to holographic theories of the universe that try to explain how the information inside a 3D volume (the interior of the black hole) can at the same time be encoded on a 2D surface (the horizon) and causally affect the space outside the hole.
A version of the latter theory is precisely what Google is simulating in its Sycamore quantum computer. Hence Shor's blunt response about the simulation's dubious relation to the real world. The rather cold reception from most physicists was to be expected. And it did not help that, as Aaronson pointed out, using a quantum computer does not seem to have improved the accuracy of the simulation.
This tension between condensed matter physics and high-flying theoretical physics is not new, and the subordination of the former to the latter is not accidental. Hawking himself used a method originally devised to explain the complex behavior of matter inside superconductors in order to explain how the curvature of space by the black hole could create particles in the space near it. The presupposition that the idea dominates over matter, however, carries a price for knowledge.
The elegance of the mental and mathematical tricks devised by the theoretician ends up being favored over plausibility and experimental confirmation, and the accumulation of ad hoc hypotheses proposing an ever greater number of new and unprovable components (a true parody of empiricism) ends up being imposed over explanations involving emergent, complex phenomena of interaction between the whole and the parts. As the co-discoverer of one of these phenomena put it:
The fractional quantum Hall effect is fascinating for many reasons, but it is important, in my view mainly for one: it establishes experimentally that two central postulates of the standard model of elementary particles can appear spontaneously as emergent phenomena. [...] I don't know whether the properties of the universe as we know it are fundamental or emergent, but I think the mere possibility of the latter should give string theorists pause for thought.
Robert B. Laughlin, Nobel Prize lecture
And yet the Google experiment is important for reasons that have little to do with the theoretical speculations it simulates. To begin with, in its formulation of the holographic principle, the model simulates certain types of particles that are extremely important for the future of quantum computing. In 2021, the same team demonstrated the usefulness of an error-correction strategy that opens the door to efficient quantum computing by exploiting emergent phenomena. But only the wildest and most dubious experiment merited a concerted publicity campaign by the US media.
And all this is far from being just a digression: today's quantum computers have a serious problem.
The noise that cancels quantum operations
In the quantum world, when particles interact they create states in which the whole and the parts have very different relationships from what we are used to in our day-to-day lives. For instance, an entangled whole contains all the information we can know about the system, while the parts behave as if they were in a sum of several states at the same time. Quantum computers take advantage of these properties to perform operations that a classical computer cannot do efficiently, whether in time or in resources.
But these states are extremely fragile: when the parts interact with a much larger external whole (a cosmic ray strikes, they couple to the metallic parts of the device, or they are measured), the quantum state collapses and each part acquires a completely defined state.
On the one hand, this collapse is absolutely necessary to carry out the computation; on the other, it severely limits the already tiny window of time during which quantum states can be maintained and operated on. This causes a tremendous amount of noise in the computer and is the main design problem of quantum computers. No matter how many qubits a computer has, it is useless unless a way is found to reduce the noise. That is why it is misleading to measure the power of a quantum computer by its number of qubits, and why Google can do more than IBM even though the Californians' computer is six times smaller… and why Chinese researchers, in turn, can get better results than Google.
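To make the "whole versus parts" picture and the cost of collapse concrete, here is a minimal toy sketch in plain Python/NumPy (our own illustration, not the noise model of any real device): a two-qubit entangled state whose measurement outcomes are perfectly correlated, and how random interactions with an "environment" wash that correlation out.

```python
# Minimal sketch (plain NumPy, no quantum hardware): a two-qubit entangled state,
# what measurement "collapse" does to it, and how random noise degrades it.
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>) / sqrt(2): the *whole* is perfectly defined,
# while each *part*, taken alone, is a 50/50 mixture.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def measure(state, shots=10_000):
    """Sample measurement outcomes; each shot models a collapse onto one basis state."""
    probs = np.abs(state) ** 2
    outcomes = rng.choice(4, size=shots, p=probs / probs.sum())
    labels = ["00", "01", "10", "11"]
    return {labels[i]: int((outcomes == i).sum()) for i in range(4)}

print("ideal Bell state:", measure(bell))        # ~50% "00", ~50% "11": the parts always agree

# Decoherence toy model: with probability p, the environment flips the first qubit.
X1 = np.kron(np.array([[0, 1], [1, 0]]), np.eye(2))   # bit flip on qubit 1

def noisy_counts(p, shots=10_000):
    counts = {"00": 0, "01": 0, "10": 0, "11": 0}
    for _ in range(shots):
        state = X1 @ bell if rng.random() < p else bell
        probs = np.abs(state) ** 2
        i = rng.choice(4, p=probs / probs.sum())
        counts[["00", "01", "10", "11"][i]] += 1
    return counts

print("10% noise per shot:", noisy_counts(0.10))  # "01"/"10" errors appear: the correlation decays
```

Real machines fight exactly this kind of degradation, only with error rates per operation rather than per shot, which is why raw qubit counts by themselves say so little.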
But first let's organize things a bit. Behind all the hype, the fact is that quantum computers serve (at least) three main and distinct functions:
- Manipulating quantum bits to transmit encrypted information. This is already technically possible and is of interest above all from the standpoint of the military and of control over communication networks.
Read also: The quantum race, from militarism to the privatization of Internet, 27/7/2021
- Simulating quantum systems better than classical computers can, for example to help in the design of molecules and their electronic clouds. This is the original application for which the first quantum computers were theorized in the 1980s. In theory it should become possible soon.
- Using quantum computers to perform tasks faster and more efficiently than any algorithm possible on a classical computer. This is the most publicized function and by far the most distant and least clear today.
The problem with the latter two functions is that they are still not viable at current noise levels… and that is not something that can simply be covered up with more media noise. For example, in the simulation of atomic systems:
Given how the available resources have grown in just the past few years, one might expect we can do a lot more now. But a new study by Garnet Chan of the California Institute of Technology and his coworkers puts that – and Deutsch’s comment – into perspective. They have used a 53-qubit chip related to Google’s Sycamore to simulate a molecule and material of real interest. They chose their test cases without any attempt to identify problems well suited to a quantum approach. One was the eight-atom cluster of iron and sulfur in the catalytic core of the nitrogenase enzyme, which fixes atmospheric nitrogen into biologically usable forms. Understanding that process could be valuable for developing artificial nitrogen-fixing catalysts.
How did the chip perform? Frankly, rather indifferently. Chan admits he initially thought that with 53 qubits at their disposal, they would be able to simulate these systems with aplomb. But getting to grips with the problem disabused him of that idea. By mapping them onto the quantum circuit, the researchers could make a reasonable stab at calculating, say, the energy spectra of the FeS cluster and the heat capacity of α-RuCl3 – but nothing that classical methods couldn’t do at least equally well. One of the key problems is noise: current qubits are error-prone, and there are not yet ways of correcting such quantum errors.
The challenge nowadays is to silence this noise using all kinds of clever designs. How can one counteract a noise that comes from the smallest local variations? The most advanced tricks, like the one used by Google, take advantage of the locality of this noise, imposing on the system a global order that no local noise can break. How is this possible? By using ideas from a branch of mathematics that describes the relationships between the whole and the parts: Topology.
The whole and its parts
Devising a branch of mathematics that deals with the whole and its parts is not at all straightforward. Much of classical mathematics simply treats the whole as a sum of parts and local properties. When Euclid wanted to show how to construct polyhedra from their constituent polygons, he could only do so with a multitude of polygons with perfectly equal, specific faces and angles. Calculus decomposes complex figures into the sum of an infinity of extremely small parts, and algebra, in its classical version, does not seem to help much either.
At the turn of the 18th century this did not seem to worry Newton too much; he organized his Principia in a strictly geometrical way, much like Euclid. Leibniz, however, who had much broader concerns, could not accept the limitations of the mathematics of his time. He wanted a method for describing the whole capable of dealing with relationships between the parts without having to consider their exact positions and values.
There is something that hovers above local geometry. If we squeeze the side of a sphere, for example, not only does the part we squeeze collapse inward, but other parts of the sphere bulge outward, as if to counteract the collapsed parts. And indeed there is a property of the whole that is being conserved by adjusting the parts. Quantifying such properties, however, was not so evident a task.
When Euler was asked to solve one of these problems explicitly, he replied that he did not even understand what was being asked of him:
Thus you see, most noble Sir, how this type of solution bears little relationship to mathematics, and I do not understand why you expect a mathematician to produce it, rather than anyone else, for the solution is based on reason alone and its discovery does not depend on any mathematical principle.
Leonhard Euler’s letter to Carl Ehler, 1736
Ironically, it would be the same Euler who gave birth to the new branch of mathematics, Topology, without even realizing it.
The eighteenth century and a good part of the nineteenth passed without mathematicians being able to put the new branch on a unified footing. To describe the whole, one must apply a series of algebraic operations to it instead of measuring fixed values, as geometry would.
Let's use the classical example, because it is also the one used in quantum computing. Imagine 2D surfaces on 3D objects, such as the surface of a globe. We can mark an arbitrary point on the surface and draw a loop as big as we want from that point (image below). Our operation will be to contract that loop, pulling on it until it collapses onto the original point, always following the surface and without detaching the loop from it.
As you can see, on a sphere all loops can be contracted back to the initial point. But on a torus (a donut or toroid, if you prefer), the presence of the central hole means that loops winding around it in either of two directions are trapped and cannot be contracted. This reflects the difference in global properties between a sphere and a toroid, and it is literally what the error-correction strategy tested by Google uses.
The scheme works like this: first we lay qubits (let's imagine they can be 0 or 1 for now) on a board. Since the board is a toroid, the top side is connected to the bottom side (it wraps all the way around behind) and the left side is connected to the right side. That is, the board has no edges; it turns back on itself. We have two operations: the square, which flips the value of all the qubits surrounding a face (from 0 to 1, or from 1 to 0), and the cross, which flags an error when the number of marked sides converging on a vertex is odd, signaling the classical computer coupled to the quantum computer to find the shortest path that closes the loop.
With these two operations applied continuously over the surface we can
- collapse any loop to a point, and
- convert any broken line (caused by local noise) into a loop that the square operation can then contract.
But let's remember: there are two kinds of loops on a toroid that cannot be contracted no matter what we do, those stuck around the central hole in one direction or the other. It is in these loops that we store the real quantum information we want to keep in the system, instead of storing it in the individual qubits.
The number of loops in the horizontal or vertical direction will change with the continuous action of the operators, but their parity (whether the number of loops is odd or even) is maintained regardless of the continuous noise in the individual qubits of the board. If the board is large enough, no local variation can break the global order. This reign of the global over varying local parts will probably remind some readers of the very nature of the quantum phenomena we remarked on in the previous section. The apparent relationship between topology and entanglement is not coincidental.
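The bookkeeping of this scheme can be sketched with ordinary bits (a purely classical toy of our own devising, illustrating the layout described above rather than Google's actual implementation or any real quantum error correction): the "square" operation never changes the two global winding parities, while a single local error lights up the "cross" checks at exactly two neighbouring vertices.

```python
# Toy classical sketch of the toroidal layout described above: bits instead of qubits,
# bit-flips only. It shows that (1) the "square" (plaquette) operation never changes
# the two global winding parities, and (2) local noise on one edge trips the "cross"
# (vertex) checks at its two endpoints, telling a classical decoder where to look.
import numpy as np

L = 6
rng = np.random.default_rng(1)
# Bits live on the edges of an L x L grid with periodic boundaries (a torus):
# h[x, y] is the horizontal edge leaving vertex (x, y); v[x, y] is the vertical one.
h = np.zeros((L, L), dtype=int)
v = np.zeros((L, L), dtype=int)

def plaquette_flip(x, y):
    """The 'square': flip the four edges bounding face (x, y)."""
    h[x, y] ^= 1
    h[x, (y + 1) % L] ^= 1
    v[x, y] ^= 1
    v[(x + 1) % L, y] ^= 1

def vertex_check(x, y):
    """The 'cross': parity of the four edges meeting at vertex (x, y). 1 means 'error here'."""
    return (h[x, y] + h[(x - 1) % L, y] + v[x, y] + v[x, (y - 1) % L]) % 2

def winding_parities():
    """The two global quantities that store the protected information."""
    around_one_way = int(v[:, 0].sum() % 2)    # parity across a horizontal cut of the torus
    around_other_way = int(h[0, :].sum() % 2)  # parity across a vertical cut of the torus
    return around_one_way, around_other_way

print("start:", winding_parities())

# Apply many random 'square' operations: the local configuration churns,
# but the global winding parities never move.
for _ in range(500):
    plaquette_flip(rng.integers(L), rng.integers(L))
print("after 500 plaquette flips:", winding_parities())

# A single local error (noise flips one edge): exactly two adjacent 'cross' checks light up.
v[2, 3] ^= 1
lit = [(x, y) for x in range(L) for y in range(L) if vertex_check(x, y)]
print("vertex checks tripped by one flipped edge:", lit)
```

The point of the construction is visible in the printout: five hundred local "square" operations churn the board without ever touching the two global parities, while the smallest local error immediately announces itself to the decoder.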
This is the simplest version to have been verified, but much more advanced topological mechanisms have been proposed that use global properties and more complex operations to provide greater stability against errors… ironically, using the physical version of the same particles that were used to tune the equations in Hawking's theory.
But there is much more: Topology's capacity to impose global effects on a varying local situation is not only worthwhile as a thought experiment or as a way to create stabilization codes and quantum algorithms. So far, improvements in information and electrical power transmission systems have focused on reducing local defects and impurities as much as possible. From ultrapure silicon crystals to the need to cool superconducting cables and circuits down to ultra-cold temperatures to eliminate the deleterious effect of thermal fluctuations, all of it focuses on eliminating local effects that spoil the large-scale coordination of particles.
Instead of obsessing over the homogeneity of the local geometry, endowing the transmission system with global topological properties is more reasonable and much less wasteful energetically. It may sound almost unbelievable, and it seemed so, too, to the first scientists who discovered topological effects decades ago. Those discoveries did not follow from high-flying theoretical considerations; they were experimental and accidental. While working with semiconductors and insulators, researchers found that the material's response was almost impossibly perfect no matter how poor its preparation was. Right now topology-based design is being tested for data transmission, and in the future possibly for power transmission.
What is the real use of a quantum computer?
This is all well and good, but what is quantum computing really good for in the real world? After all, at the same time that quantum computers show up in the news, discoveries of new classical algorithms that negate the supposed competitive advantage of quantum computers are also being announced.
Yes, quantum computers would accomplish some tasks much faster than classical computers if they did not suffer from a relatively high error rate. And yes, for many problems classical algorithms can be found that allow a classical supercomputer to match the speed of a quantum computer even with error correction. There is no doubt about either point. However, the way this alleged quantum supremacy is quantified is somewhat misleading.
In fact, classical and quantum computers are even more different devices than we have let on so far. A classical digital computer moves abstract units of information, bits, around and processes them step by step in its processing units. In a quantum computer, the qubits (at least the physical ones) are not abstract at all; they are, in fact, the equivalent of the transistors.
And the differences do not stop there: many quantum computers do not follow algorithmic programs step by step the way a digital computer does. There are many problems, above all distribution problems, as well as problems of finding the shortest or least costly path between several points or options, that are particularly difficult for classical computers. Quantum computers, however, can take advantage of the entangled whole to find the global solution without having to work through it part by part.
How to use global properties to find the fastest paths with quantum annealing.
A quantum computer such as D-Wave's, which uses the quantum properties of entangled wholes to converge on a solution without carrying out any explicit computation, is hundreds to thousands of times more energy efficient than a huge classical supercomputer, even though the latter may, in theory, be just as fast.
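To make concrete what "converging on a solution without explicit step-by-step computation" means, here is a minimal sketch of how such a distribution problem is posed as an energy-minimization (Ising/QUBO) problem, the form an annealer accepts. We deliberately solve it with classical simulated annealing rather than assuming any vendor's API, and the load figures are invented for the example; a machine like D-Wave's would minimize the same kind of cost function physically instead of by trial moves.

```python
# Sketch of how a small distribution problem is posed for an annealer (quantum or classical).
# Classical simulated annealing is used here as a stand-in for the hardware; the weights
# below are made-up example data.
import numpy as np

rng = np.random.default_rng(42)

# Toy "distribution" problem: split delivery loads between two trucks as evenly as possible.
weights = np.array([7, 3, 9, 4, 8, 2, 6, 5])

# Ising formulation: spin s_i = +1 -> truck A, s_i = -1 -> truck B.
# Energy = (sum_i w_i * s_i)^2, the squared imbalance; the global minimum is the best split.
def energy(spins):
    return float(np.dot(weights, spins)) ** 2

def anneal(steps=20_000, t_start=50.0, t_end=0.01):
    spins = rng.choice([-1, 1], size=len(weights))
    e = energy(spins)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling schedule
        i = rng.integers(len(spins))
        spins[i] *= -1                                   # propose flipping one spin
        e_new = energy(spins)
        # Metropolis rule: always accept improvements, sometimes accept worse moves while "hot"
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new
        else:
            spins[i] *= -1                               # reject: undo the flip
    return spins, e

spins, e = anneal()
truck_a = weights[spins == 1]
truck_b = weights[spins == -1]
print("truck A:", truck_a.tolist(), "total", truck_a.sum())
print("truck B:", truck_b.tolist(), "total", truck_b.sum())
print("squared imbalance (energy):", e)
```

The design choice of encoding the objective as an energy function is the whole trick: once the problem is written that way, "running the program" just means letting the system relax toward its lowest-energy configuration.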
However, this may prove to be more of a disadvantage than an advantage in today's world. In the midst of a race among the most enormous national capitals to produce classical microprocessors with trillions of transistors, a mode of computing that does not require sinking mountains of capital is far less attractive than its military uses.
Rather than being used to make distribution problems more tractable and efficient, quantum computing is put forward in proposals to create quantum money or to privatize the Internet.
Promises to solve hunger or design pharmaceuticals are absurd. First, designing the structure of a drug is not the main difficulty in its development: a huge number of promising molecules have failed because they were not easily processed by the body or because they interacted with several unforeseen targets, something that is not easily deducible from the structure of the compounds.
Something similar occurs with the development of industrial-scale chemical processes for fertilizers, which are much more than a catalyst problem. Contrary to what a large part of the ruling class seems to believe, it is not enough to invest more and more capital to solve problems. Nor does the solution to the commercial problems of national capitals lie in magical machines, be they chip lithography machines or quantum computers.
But the main problem is that the reason the pharmaceutical industry does not produce new antibiotics or drugs against many other diseases is the same reason it charges much of the world egregious prices for insulin: producing new drugs is not profitable in most cases. It has nothing to do with computing power. The same applies to the fertilizer industry: many alternatives at smaller and less polluting scales already exist, but what capitalism requires are hyper-concentrated plants in a handful of countries. Again, the problem is not technological or computational per se, but one of social relations and the overall organization of society.
Quantum computing would have great potential in a world that valued large-scale distribution and production according to human needs; but in a system that crushes human labor, squanders resources and energy by the bucketful, and values only accumulating more than rival capitals, military uses and speculative bubbles will mark its development.