Google claims to have invented a quantum computer, but IBM begs to differ
On Oct. 23, 2019, Google published a paper in the journal Nature entitled “Quantum supremacy using a programmable superconducting processor.” The tech giant announced its achievement of a much-vaunted goal: quantum supremacy.
This perhaps ill-chosen term (coined by physicist John Preskill) is meant to convey the huge speedup that processors based on quantum-mechanical systems are predicted to exhibit, relative to even the fastest classical computers.
Google’s benchmark was achieved on a new type of quantum processor, code-named Sycamore, consisting of 54 independently addressable superconducting junction devices (of which only 53 were working for the demonstration).
Each of these devices allows the storage of one bit of quantum information. In contrast to the bits in a classical computer, which can only store one of two states (0 or 1 in the digital language of binary code), a quantum bit, or qubit, can store information in a coherent superposition state, which can be considered to contain fractional amounts of both 0 and 1.
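As a purely illustrative sketch (not a model of real superconducting hardware), a single qubit's state can be written as two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1; the amplitudes below are made-up example values:

```python
import random

# A qubit as a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = complex(3 / 5), complex(4 / 5)  # 9/25 + 16/25 = 1

def measure(a, b):
    """Simulate one projective measurement of the qubit (a, b)."""
    return 0 if random.random() < abs(a) ** 2 else 1

# Repeated measurements reveal the underlying probabilities.
samples = [measure(alpha, beta) for _ in range(10_000)]
print(sum(s == 0 for s in samples) / len(samples))  # close to 0.36
```

The "fractional amounts" of 0 and 1 only show up statistically: any single measurement returns a definite 0 or 1, and the superposition is visible only across many repeated runs.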
Sycamore uses technology developed by the superconductivity research group of physicist John Martinis at the University of California, Santa Barbara. The entire Sycamore system must be kept at cryogenic temperatures using special helium dilution refrigeration technology. Keeping such a large system near the absolute zero of temperature is an immense challenge, making Sycamore a technological tour de force.
The Google researchers demonstrated that the performance of their quantum processor in sampling the output of a pseudo-random quantum circuit was vastly better than a classical computer chip — like the kind in our laptops — could achieve. Just how vastly became a point of contention, and the story was not without intrigue.
An inadvertent leak of the Google group’s paper on the NASA Technical Reports Server (NTRS) occurred a month prior to publication, during the blackout period when Nature prohibits discussion by the authors regarding as-yet-unpublished papers. The lapse was momentary, but long enough that The Financial Times, The Verge and other outlets picked up the story.
A well-known quantum computing blog by computer scientist Scott Aaronson contained some oblique references to the leak. The reason for this obliqueness became clear when the paper was finally published online and Aaronson could at last reveal himself to be one of the reviewers.
Challenges to Google’s story
The story had a further controversial twist when the Google group’s claims were immediately countered by IBM’s quantum computing group. IBM shared a preprint posted on the ArXiv (an online repository for academic papers that have yet to go through peer review) and a blog post dated Oct. 21, 2019 (note the date!).
While the Google group had claimed that a classical (super)computer would require 10,000 years to simulate the same 53-qubit random quantum circuit sampling task that their Sycamore processor could do in 200 seconds, the IBM researchers showed a method that could reduce the classical computation time to a mere matter of days.
However, the IBM classical computation would have to be carried out on the world’s fastest supercomputer — the IBM-developed Summit OLCF-4 at Oak Ridge National Labs in Tennessee — with clever use of secondary storage to achieve this benchmark.
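A back-of-the-envelope calculation shows why secondary storage matters here. A brute-force classical simulation stores one complex amplitude per basis state, and a 53-qubit register has 2^53 of them (the sketch assumes double-precision complex numbers at 16 bytes each):

```python
# Why brute-force statevector simulation of 53 qubits is out of reach
# for ordinary classical hardware: the memory alone is staggering.
n_qubits = 53
amplitudes = 2 ** n_qubits        # one complex amplitude per basis state
bytes_needed = amplitudes * 16    # 16 bytes per double-precision complex
petabytes = bytes_needed / 1e15
print(f"{amplitudes:.2e} amplitudes, roughly {petabytes:.0f} PB of memory")
```

Roughly 144 petabytes vastly exceeds the RAM of any single machine, which is why IBM's approach leaned on Summit's enormous disk capacity rather than main memory alone.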
While of great interest to researchers like myself working on hardware technologies related to quantum information, and important in terms of establishing academic bragging rights, the IBM-versus-Google aspect of the story is probably less relevant to the general public interested in all things quantum.
For the average citizen, the mere fact that a 53-qubit device could beat the world’s fastest supercomputer (containing more than 10,000 multi-core processors) is undoubtedly impressive. Now we must try to imagine what may come next.
The reality of quantum computing today is that very impressive strides have been made on the hardware front. A wide array of credible quantum computing hardware platforms now exist, including ion traps, superconducting device arrays similar to those in Google’s Sycamore system and isolated electrons trapped in NV-centres in diamond.
These and other systems are all now in play, each with benefits and drawbacks. So far researchers and engineers have been making steady technological progress in developing these different hardware platforms for quantum computing.
What has lagged quite a bit behind are custom algorithms (computer programs) written to run on quantum computers and able to take full advantage of possible quantum speed-ups. While several notable quantum algorithms exist (Shor’s algorithm for factorization, for example, which has applications in cryptography, and Grover’s algorithm, which might prove useful in database search applications), the total set of quantum algorithms remains rather small.
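To give a flavour of what such an algorithm looks like, here is a classical simulation of Grover's search on a small register. This is only a sketch for illustration: at this scale an ordinary loop is of course faster, and the quantum advantage (about the square root of the number of items, versus checking them one by one) only pays off on real quantum hardware at large sizes.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classically simulate Grover's algorithm and return the basis
    state with the highest measurement probability."""
    N = 2 ** n_qubits
    # Start in the uniform superposition over all N basis states.
    state = np.full(N, 1 / np.sqrt(N))
    # Roughly (pi/4) * sqrt(N) iterations maximize the marked amplitude.
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                  # oracle: flip the marked sign
        state = 2 * state.mean() - state     # diffusion: reflect about mean
    return int(np.argmax(state ** 2))

print(grover_search(10, marked=777))  # amplifies item 777 out of 1024
```

The two reflections per iteration (the oracle and the "inversion about the mean") steadily rotate the state toward the marked item, which is the heart of Grover's square-root speed-up.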
Much of the early interest (and funding) in quantum computing was spurred by the possibility of quantum-enabled advances in cryptography and code-breaking. A huge number of online interactions ranging from confidential communications to financial transactions require secure and encrypted messages, and modern cryptography relies on the difficulty of factoring large numbers to achieve this encryption.
Quantum computing could be very disruptive in this space, as Shor’s algorithm could make code-breaking much faster, while quantum-based encryption methods would allow detection of any eavesdroppers.
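The threat can be made concrete with a toy example. The sketch below uses textbook RSA with deliberately tiny, made-up parameters, and plain trial division stands in for the attacker; it is not Shor's algorithm, but it shows that whoever can factor the public modulus recovers the private key:

```python
# Toy RSA: security rests entirely on the difficulty of factoring n.
# Real moduli are thousands of bits long; this tiny n falls instantly.
p, q = 211, 307                   # secret primes (toy-sized)
n, e = p * q, 11                  # public modulus and exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)               # private exponent (Python 3.8+)
cipher = pow(42, e, n)            # encrypt the message 42

# An attacker who factors n recovers the private key. Here trial
# division suffices; Shor's algorithm would do this for huge moduli.
f = next(k for k in range(2, n) if n % k == 0)
phi_cracked = (f - 1) * (n // f - 1)
d_cracked = pow(e, -1, phi_cracked)
print(pow(cipher, d_cracked, n))  # recovers the plaintext 42
```

Scaled up to realistic key sizes, trial division (and every known classical method) becomes hopeless, while Shor's algorithm on a sufficiently large quantum computer would not, which is exactly why this application draws so much attention.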
The interest various agencies have in unbreakable codes for secure military and financial communications has been a major driver of research in quantum computing. It is worth noting that all these code-making and code-breaking applications of quantum computing ignore to some extent the fact that no system is perfectly secure; there will always be a backdoor, because there will always be a non-quantum human element that can be compromised.
More appealing for the non-espionage and non-hacker communities — in other words, the rest of us — are the possible applications of quantum computation to solve very difficult problems that are effectively unsolvable using classical computers.
Ironically, many of these problems emerge when we try to use classical computers to solve quantum-mechanical problems, such as quantum chemistry problems that could be relevant for drug design and various challenges in condensed matter physics including a number related to high-temperature superconductivity.
So where are we in the wonderful and wild world of quantum computation?
In recent years, we have had many convincing demonstrations that qubits can be created, stored, manipulated and read using a number of futuristic-sounding quantum hardware platforms. But the algorithms lag. So while the prospect of quantum computing is fascinating, it will likely be a long time before we have quantum equivalents of the silicon chips that power our versatile modern computing devices.