By Chris Ferrie

The “killer app” of quantum machine learning that you might not expect


[Image: artistic depiction of a futuristic computer chip.]

Quantum computing was conceived in the 1980s. At the time, the hypothetical device's potential applications were purely speculative – and a little scary. 


Some predicted a looming “Y2Q” or “Quantum Apocalypse,” in which quantum computers would suddenly “break the internet” by easily cracking cryptographic codes that conventional computers cannot break. 


The quantum apocalypse has not arrived, and probably won’t. There is no “quantum switch” that will abruptly transition us into a post-quantum world. Instead, we will likely see incremental improvements as quantum solutions slowly earn favour for a growing class of problems.


The quantum computer seems forever five years away – even when we have quantum computers now. The challenge, as I wrote in a previous post for FirstPrinciples, is understanding how we’ll know when the quantum “killer app” has emerged. 


That killer app may be quantum machine learning – but perhaps not for the reasons you think. Is it a little scary too? That depends on how we humans make ethical decisions today. 


The machine learning part

Machine learning, the approach behind most of today’s artificial intelligence (AI), is a paradigm of computing in which the algorithm “learns” from the top down rather than being designed from the bottom up. The top-down approach is achieved by letting the algorithm try solutions, rewarding it for good behaviour, and penalizing it for bad behaviour — a primitive form of “learning.”


Consider an “artificial” neural network, meant to mimic the human brain, tasked with labeling pictures of cats and dogs. It learns by being shown “training data” — correctly labeled pictures — while its internal parameters are tuned until it gets the labels right. 


Ideally, the algorithm becomes capable of correctly identifying the content of unseen and unlabelled pictures. Because the algorithm is not designed to carry out steps known to solve the problem, it often fails to do so — and even when it succeeds, the reason it makes any particular decision is elusive, underscoring the thorny problem of AI safety (that’s a bit of foreshadowing).
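
To make that tune-until-it-gets-it-right loop concrete, here is a minimal sketch in Python. The two-number “features” and the labels are invented stand-ins for pictures, not a real image pipeline; the point is simply that the model’s internal parameters are nudged toward the correct labels.

```python
import numpy as np

# Toy "training data": each row is a two-number summary of a picture.
# The features and labels are invented purely for illustration.
X = np.array([[0.9, 0.1],   # cat-like
              [0.8, 0.2],   # cat-like
              [0.2, 0.9],   # dog-like
              [0.1, 0.8]])  # dog-like
y = np.array([0, 0, 1, 1])  # 0 = cat, 1 = dog

rng = np.random.default_rng(0)
w = rng.normal(size=2)      # the model's internal parameters ("weights")
b = 0.0

def predict(X, w, b):
    # Squash a weighted sum into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for step in range(500):
    p = predict(X, w, b)
    error = p - y           # how far each guess is from the correct label
    # Nudge the parameters to reduce the error (gradient descent)
    w -= 0.5 * X.T @ error / len(y)
    b -= 0.5 * error.mean()

# A new, unseen "picture" that looks cat-like: output should be close to 0
print(predict(np.array([[0.85, 0.15]]), w, b))
```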


Machine learning currently happens using digital technology, which is not the most efficient means to that end. Existing AI applications already consume a significant and growing share of the world's electricity, and with the demand for AI showing no signs of letting up, mitigating that cost is a pressing need.  


So random! 

Current AI is inefficient for many reasons. Some are fundamental, including the contortions a digital computer must undergo to produce “fake” random numbers.


Digital algorithms are deterministic — they output the same thing every time they are run. This is useful, for example, as I want the same letter “t” to appear on my screen every time I press the key labeled “t” on my keyboard. 

[Image: representation of a digital algorithm.]

But AI performs better when a bit of randomness is introduced. (You may also note that ChatGPT, for example, produces different outputs for the same prompt. It does so using the same kind of fake randomness.) 
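
To illustrate how a language model can answer the same prompt differently each time, here is a hedged sketch of “temperature” sampling in Python. The candidate words and scores are invented, and real systems such as ChatGPT are vastly more elaborate, but the mechanism of picking the next word at random from a weighted list is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng()  # seeded unpredictably by the operating system

# Hypothetical scores a model might assign to candidate next words
tokens = ["cat", "dog", "quokka"]
scores = np.array([2.0, 1.5, 0.2])

def sample_next(scores, temperature=1.0):
    # Higher temperature flattens the distribution, giving more varied output
    probs = np.exp(scores / temperature)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Run it a few times: the "answer" changes even though the prompt does not
for _ in range(5):
    print(tokens[sample_next(scores, temperature=0.8)])
```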


Digital computers produce fake randomness (so-called pseudorandom numbers) by running otherwise pointlessly complicated deterministic programs that consume an enormous amount of computer time and memory – and therefore enormous amounts of energy. All this computation renders the output practically unpredictable, but not truly random.
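
A quick sketch with Python’s standard library shows just how deterministic this fakery is: feed the generator the same starting “seed” and it replays exactly the same “random” sequence.

```python
import random

random.seed(1234)
first_run = [random.random() for _ in range(3)]

random.seed(1234)
second_run = [random.random() for _ in range(3)]

# The "random" numbers are identical on both runs: deterministic fakery
print(first_run == second_run)  # True
```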


Quantum computers, on the other hand, produce truly random numbers by default because they operate based on the laws of quantum mechanics, which have quantum uncertainty built right in.
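
For contrast, here is a minimal sketch of the quantum version, assuming the Qiskit and Qiskit Aer libraries are installed: a single qubit is put into an equal superposition and then measured. On real quantum hardware each measured bit is genuinely unpredictable; on the classical simulator used here the outcomes are, of course, still faked.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# One qubit, one classical bit to hold the measurement result
qc = QuantumCircuit(1, 1)
qc.h(0)           # Hadamard gate: equal superposition of 0 and 1
qc.measure(0, 0)  # measurement collapses the qubit to 0 or 1 at random

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)     # roughly {'0': 500, '1': 500}
```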


One aspect of the broad field of quantum machine learning (also known as “quantum AI”) is the direct implementation of conventional AI models on quantum computers, such as quantum classifiers or quantum neural networks. If nothing else, running these on a quantum computer could be more energy efficient.
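
To give a flavour of what such a model can look like, here is a hedged sketch of a one-qubit “quantum neural network” using the PennyLane library. The data encoding, the toy dataset, and the single trainable rotation are simplifications chosen for brevity, not the field’s standard recipe.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x, theta):
    qml.RY(x, wires=0)        # encode the input as a qubit rotation
    qml.RY(theta, wires=0)    # trainable parameter, like a neural-net weight
    return qml.expval(qml.PauliZ(0))  # expectation value acts as the score

# Toy data: inputs near 0 belong to class +1, inputs near pi to class -1
X = np.array([0.1, 0.2, 3.0, 3.1], requires_grad=False)
Y = np.array([1.0, 1.0, -1.0, -1.0], requires_grad=False)

def cost(theta):
    return sum((circuit(x, theta) - y) ** 2 for x, y in zip(X, Y)) / len(X)

opt = qml.GradientDescentOptimizer(stepsize=0.3)
theta = np.array(0.5, requires_grad=True)
for _ in range(50):
    theta = opt.step(cost, theta)

print(circuit(0.15, theta), circuit(3.05, theta))  # near +1 and -1
```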


Quantum machine learning benefits

While the details have not been fully worked out, many arguments have been made in support of quantum machine learning beyond its apparently “free” randomness.


Conventional AI models are huge, requiring vast amounts of computer memory (and thus energy). In many cases, it is suspected that these models are needlessly large.


Quantum machine learning may help in two ways. 


First, qubits — the fundamental units of quantum information — may allow for more efficient data representation, leading to smaller memory requirements. 


Second, quantum learning models might better capture complex patterns and structures in data, leading to smaller models that require less time and energy to train and use.
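
To illustrate the first point with a back-of-the-envelope count: one commonly discussed idea (though by no means the only one) is “amplitude encoding,” in which a normalized list of 2^n numbers is stored in the amplitudes of just n qubits. A sketch in Python, with invented data:

```python
import numpy as np

# A toy data vector with 8 entries (invented numbers, purely illustrative)
data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])

# Amplitude encoding stores the vector in a quantum state's amplitudes,
# so it must first be rescaled to unit length
state = data / np.linalg.norm(data)

n_qubits = int(np.log2(len(data)))
print(f"{len(data)} numbers -> {n_qubits} qubits")  # 8 numbers -> 3 qubits
print(np.isclose(np.sum(state**2), 1.0))            # a valid quantum state
```

The counting is suggestive rather than a guarantee: actually loading classical data into such a state efficiently is itself a hard, open problem.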


The benefits mentioned so far could be realized in the near term, even with pre-error-correction quantum computers. This means quantum machine learning has the potential to become the first "killer app" of quantum technology, driving widespread adoption and investment. 


However, the often-cited advantage of quantum speedup in computation, which could revolutionize AI's ability to process massive datasets or tackle computationally intensive tasks, remains a long-term goal dependent on the development of large-scale, error-corrected quantum computers.


Back to safety

Everything from your news feed to whether you get a bank loan is dictated by systems incorporating AI decisions. Understanding why these decisions are made can help ensure that AI aligns with human values.


While quantum machine learning has immense potential, it is unlikely to alleviate AI safety concerns without considerable effort.


In fact, quantum machine learning models are likely to be more opaque because of the already counterintuitive nature of quantum physics. The interpretability, or explainability, of quantum machine learning is likely to require novel tools. This work has already started in research groups including my own. 


Optimistically, unlike conventional AI where ethical damage is already being done, we have an opportunity with quantum AI to benefit from hindsight by proactively investigating and mitigating potential ethical concerns now. 


Ensuring safe and equitable access to the benefits of AI, quantum or otherwise, is paramount as we navigate this uncharted territory.


Chris Ferrie is an associate professor at the University of Technology Sydney, in the Centre for Quantum Software and Information. He is the author of the successful Baby University series, including the breakout success Quantum Physics for Babies.

