Thursday, August 5, 2010

Quantum-connected computers overturn the uncertainty principle


Introduction


Back in the mid-eighties, while completing a pure mathematics degree and using the then state-of-the-art mini-computers, the PDP-11 family of processors (we couldn't afford the services of the much brawnier nitrogen-cooled Crays to solve sparse Hadamard matrices), I contributed an article to a UK computer journal discussing the future of computers, artificial intelligence, human interfaces and visualization. I guess you can call it a naive attempt to predict how systems will, or may, evolve. Of course this was through my own lens of experience: watching processor speeds rapidly increasing and memory, disk and everything else growing almost exponentially.

Of course, the implicit question was: if all that has happened in the first 50 years of computer history, what will happen in the next 50 or so?

Moore's Law is an empirical formula describing the evolution of processors, and it is often cited to predict future progress in the field because it has proved quite accurate in the past: it states that the transistor count in a state-of-the-art processor doubles roughly every 18 to 24 months, which means that computational power grows exponentially, doubling about every two years. Yet however fast processors become, the science of computability describes, amongst other things, a class of 'NP-hard' problems, sometimes loosely described as 'intractable', 'unsustainable' or 'combinatorially exploding', whose computational cost grows exponentially with the size of the problem rather than shrinking with the speed of the hardware.
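As a rough back-of-the-envelope sketch of what that doubling means (the 2010 baseline of about a billion transistors and the fixed two-year doubling period are illustrative assumptions of mine, not industry data):

# Toy projection of Moore's Law: count(year) = base * 2 ** ((year - base_year) / doubling_period)
def transistors(year, base_year=2010, base_count=1e9, doubling_years=2.0):
    """Projected transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (2010, 2020, 2030, 2060):
    print(year, "%.1e transistors" % transistors(year))

Run forward fifty years, the same naive formula lands somewhere around ten million times today's count, which is the sort of extrapolation the question above is really asking about.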

A classic illustration of this kind of combinatorial blow-up is brute-force search through a labyrinth: it doesn't require much effort if there is only one crossing, but as the number of crossings grows the number of possible routes explodes, until the search is either impossible to complete with limited resources, or computable only in an unacceptable amount of time.
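To make that concrete, here is a small brute-force path counter (my own toy stand-in for the labyrinth, not anything from a published source): it enumerates every non-revisiting route from one corner of an open n-by-n grid to the other, and both the number of routes and the time to enumerate them blow up as the grid grows.

import time

def count_paths(n):
    """Count all simple (non-revisiting) paths from the top-left to the
    bottom-right cell of an open n x n grid, by exhaustive backtracking."""
    goal = (n - 1, n - 1)

    def explore(cell, visited):
        if cell == goal:
            return 1
        x, y = cell
        total = 0
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n and nxt not in visited:
                visited.add(nxt)
                total += explore(nxt, visited)
                visited.remove(nxt)
        return total

    return explore((0, 0), {(0, 0)})

for n in range(2, 6):          # n = 6 already takes dramatically longer: try it
    start = time.time()
    print("%dx%d grid: %d routes (%.2f s)" % (n, n, count_paths(n), time.time() - start))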

Many, if not all, of the algorithms used in Artificial Intelligence are extremely demanding in terms of computational resources, because they are either NP-hard or involve combinatorics of rapidly growing complexity.
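A textbook example of an NP-hard problem that turns up in AI-style planning is the travelling salesman problem. The brute-force solver below is again only an illustrative sketch with made-up random cities: with n cities there are (n - 1)! possible tours, so every extra city multiplies the work.

import math
import random
from itertools import permutations

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def shortest_tour(cities):
    """Exact travelling-salesman answer found by trying every ordering of the cities."""
    best = float("inf")
    for order in permutations(range(1, len(cities))):   # fix city 0 as the start
        tour = (0,) + order + (0,)
        length = sum(dist(cities[tour[i]], cities[tour[i + 1]]) for i in range(len(tour) - 1))
        best = min(best, length)
    return best

random.seed(0)
for n in (6, 8, 10):
    cities = [(random.random(), random.random()) for _ in range(n)]
    print("%2d cities: %7d tours to check, best length %.3f"
          % (n, math.factorial(n - 1), shortest_tour(cities)))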

Not all developments in processing architecture stem from a single genesis. For example, recently, IBM researchers have made huge strides in mapping the architecture of the Macaque monkey brain. They have traced long-distance connections in the brain - the "interstate highways" which transmit information between distant areas of the brain. Their maps may help researchers grasp how and where the brain sends information better than ever before, and possibly develop processors that can keep up with our brain's immense computational power and navigate its complex architecture.

Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.
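For readers who haven't met an artificial neural network, the smallest possible illustration is a single artificial neuron: a weighted sum of its inputs pushed through a squashing function. The weights below are hand-picked illustrative values, not taken from any real model, chosen so the neuron roughly behaves like a logical AND gate.

import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of the inputs, then a sigmoid squashing."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Hand-picked weights so the output approximates a logical AND of the two inputs.
for a, b in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(a, b, "->", round(neuron((a, b), (4.0, 4.0), -6.0), 3))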

Another trajectory – that of Quantum Computers

The uncertainty principle is a key underpinning of quantum mechanics: a particle's position or its velocity can be measured precisely, but not both at once. Now, according to five physicists from Germany, Switzerland and Canada, in a letter published in Nature Physics(1), quantum-computer memory could let us violate this principle.
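In its textbook form (quoted here from standard quantum mechanics rather than from the letter itself), the principle is an inequality on the spreads of position x and momentum p,

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

so the more sharply one quantity is pinned down, the more uncertain the other must become.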

Paul Dirac, who shared the 1933 Nobel Prize in Physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory", provided a concrete illustration of what the uncertainty principle means. He explained that one of the very few ways to measure a particle's position is to hit it with a photon and then chart where the photon lands on a detector. That gives you the particle's position, yes, but it also fundamentally changes its velocity, and the only way to learn the new velocity would in turn alter its position.

That's more or less been the status quo of quantum mechanics since Werner Heisenberg first published his theories in 1927, and no attempt to overturn it - including several by Albert Einstein himself - has proved successful. But now the five physicists hope to succeed where Einstein failed. If they're successful, it will be because of something that wasn't even theorized until many years after Einstein's death: quantum computers.

Key to quantum computers are qubits, the individual units of quantum memory. A particle would need to be entangled with a quantum memory large enough to hold all its possible states and degrees of freedom. The particle would then be separated from the memory and one of its features measured. If, say, its position were measured, the researcher would tell the keeper of the quantum memory to measure its velocity (a toy version of this game is sketched just below).
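Here is a deliberately tiny version of that game, with a single qubit as the "particle" and one entangled qubit as the "memory", and with the complementary Z and X measurements standing in for position and velocity. The code and numbers are my own sketch, not the authors': whichever basis the particle is measured in, measuring the memory in the same basis predicts the outcome every time.

import numpy as np

# |Phi+> = (|00> + |11>) / sqrt(2): qubit A is the "particle",
# qubit B is a one-qubit "quantum memory" entangled with it.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

I = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # rotates the Z basis into the X basis

def measure_both(state, basis, rng):
    """Measure both qubits in the same basis ('Z' or 'X') and return the two bits."""
    U = I if basis == "Z" else H
    rotated = np.kron(U, U) @ state          # express the state in the chosen basis
    probs = np.abs(rotated) ** 2             # Born rule
    outcome = rng.choice(4, p=probs / probs.sum())
    return outcome >> 1, outcome & 1         # (particle's bit, memory's bit)

rng = np.random.default_rng(0)
for basis in ("Z", "X"):
    trials = 1000
    agree = sum(a == b for a, b in (measure_both(phi_plus, basis, rng) for _ in range(trials)))
    print("basis %s: the memory predicts the particle's result in %d/%d runs" % (basis, agree, trials))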

Because the uncertainty principle wouldn't extend from the particle to the memory, it wouldn't prevent the keeper from measuring this second figure, allowing exact, or, for more subtle mathematical reasons, nearly exact, measurements of both figures.
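For readers who want the quantitative statement, the letter's central result, as I understand it, is an entropic uncertainty relation whose lower bound shrinks as the memory B becomes more entangled with the particle A:

    H(R \mid B) + H(S \mid B) \;\ge\; \log_2 \frac{1}{c} + H(A \mid B)

Here R and S are the two incompatible measurements (the position-like and velocity-like quantities above), c is the maximal overlap between their measurement bases, and H(A|B) is the conditional von Neumann entropy, which can go negative when A and B are entangled; with enough entanglement the right-hand side drops to zero and both outcomes can, in principle, be predicted.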

It would take lots of qubits - far more than the dozen or so we've so far been able to generate at any one time - to entangle all that quantum information from a particle, and the task of entangling so many qubits together would be extremely fragile and tricky. Not impossibly tricky, but still way beyond what we can do now.


(1) Mario Berta, Matthias Christandl, Roger Colbeck, Joseph M. Renes & Renato Renner, "The uncertainty principle in the presence of quantum memory", Nature Physics, published online 25 July 2010, doi:10.1038/nphys1734.
