Monday, August 9, 2010

The Rubik Cube and God's Algorithm

Jack Dikian
Sep 2006


In 1974 Erno Rubik, an admirer of geometry and 3-D forms, created the world's most perfect puzzle. More than 30 years later the Rubik's cube is still a best-selling brainteaser. By the mid-eighties almost every child had the puzzle. Did I say "child"? I certainly was playing with one when I was about 12 or 13, with little success in solving it.

So you can imagine our astonishment when a whole first-year pure mathematics class was asked to go out and purchase the puzzle from the university co-op bookshop. I think, like others, I had always suspected that there would be a mathematical approach to solving the puzzle. After all, many real-world, albeit obscure, problems such as the four-colour map(1) and the Seven Bridges of Konigsberg(2) had inspired rigorous mathematical solutions. I, of course, didn't have the mathematical repertoire – instead the brute-force method, not in the mathematical sense, was always adopted.


The Rubik's Cube has a straightforward basis. The faces of the cube are covered by nine stickers each, in six colours. The puzzle is solved when each face is one solid colour. When you start to rotate the rows and columns and see the different mixes of coloured rows and columns, you begin to appreciate the difficulty involved. There are, in fact, about 43 billion billion possible pattern combinations – but just one right one.

It's been proven that you can solve a Rubik's cube in 26 moves. Computer science professor Gene Cooperman and graduate student Dan Kunkle of Northeastern University in Boston used algebra and fast parallel computing to show that no matter how scrambled the cube is, it is possible to solve it (that is, to generate a cube with solid colours on each face, the home state) in 26 steps. There are, however, claims that it can be solved in 22 or even 20 steps (see later).

The Rubik's cube has served as a guinea pig for testing techniques to solve large-scale combinatorial problems. It has been a testing ground for problems of search and enumeration – a large research area encompassing many researchers working in different disciplines, from artificial intelligence to operations research. The Rubik's cube allows researchers from different disciplines to compare their methods on a single, well-known problem.


There are many algorithms for solving scrambled Rubik's cubes. It is not known exactly how many moves are required, in the worst case, to solve any instance of the Rubik's cube, although the latest claims put this number at 22. This number is also known as the diameter of the Cayley graph of the Rubik's Cube group. An algorithm that solves a cube in the minimum number of moves is known as 'God's algorithm'. Most mathematicians think it may take only 20 moves to solve any Rubik's cube – it's a question of proving this to be true.

Any state of the cube corresponds to a permutation of its facelets — a rule that assigns to each facelet in the home state a new location on the cube in the scrambled state. Not every permutation of facelets can be achieved by a sequence of moves — for example no move can shift the centre facelet of a face — so when trying to solve Rubik's cube you restrict your attention to a collection of legal permutations.

These form a self-contained system. If you follow one legal permutation by another, the end result is also a legal permutation. If you've done one legal permutation, you can always find another one — namely the reverse of what you've just done — that will get you back to the position you started with. Doing absolutely nothing to the cube is also a legal move and corresponds to the permutation that leaves every facelet where it is, known to mathematicians as the identity permutation.
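The three group properties above – closure, inverses, and an identity – can be sketched in a few lines of code. This is only an illustration using permutations of four labels (a real cube permutes 48 movable facelets), and the names here are mine, not from any cube library:

```python
# Permutations represented as tuples: p[i] is where label i ends up.

def compose(p, q):
    """Apply q first, then p: (p∘q)[i] = p[q[i]] — closure property."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """The permutation that undoes p."""
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

identity = (0, 1, 2, 3)   # "do nothing": leaves every facelet in place
move = (1, 2, 0, 3)       # an illustrative legal move (a 3-cycle)

# Following a move by its reverse gets you back where you started:
assert compose(inverse(move), move) == identity
# Composing two legal permutations yields another permutation:
assert sorted(compose(move, move)) == [0, 1, 2, 3]
```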

The problem of solving Rubik's cube can be visualised using what is called the group's Cayley graph — a network whose nodes are the legal states of the cube (corresponding to legal permutations) and which has two nodes linked up if you can get from one to the other by a legal move, which, incidentally, is itself a permutation. The home state of the cube corresponds to one of the nodes (and the identity permutation) and solving the cube corresponds to finding a path from one of the nodes to the home state along a sequence of linked-up nodes.

A brute-force approach to showing that you can always solve the cube in N moves would be to show that no such path involves more than N steps. This sounds good in theory, but there is a huge problem as the Cayley graph of the group of legal permutations has 43,252,003,274,489,856,000 nodes, a number that challenges even the fastest of supercomputers.
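On a toy puzzle, though, the brute-force idea is easy to try. The sketch below (illustrative only, not a cube solver) builds the Cayley graph for arrangements of three labelled pieces under two swap moves, and finds its diameter by breadth-first search from the home state:

```python
from collections import deque

# Toy "puzzle": arrangements of 3 labelled pieces. Legal moves:
# swap positions 0,1 or swap positions 1,2. BFS from the home
# state labels every node with its distance; the maximum distance
# is the diameter of the Cayley graph.

def apply_swap(state, i, j):
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

home = (0, 1, 2)
dist = {home: 0}
queue = deque([home])
while queue:
    state = queue.popleft()
    for i, j in ((0, 1), (1, 2)):
        nxt = apply_swap(state, i, j)
        if nxt not in dist:
            dist[nxt] = dist[state] + 1
            queue.append(nxt)

assert len(dist) == 6           # all 3! arrangements are reachable
assert max(dist.values()) == 3  # "God's number" for this toy puzzle
```

The same search on the real cube would have to label 43,252,003,274,489,856,000 nodes – which is exactly why the professional proofs lean on algebra and symmetry rather than raw enumeration.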

The (3 × 3 × 3) Rubik's Cube has eight corners and twelve edges. There are 8! (40,320) ways to arrange the corner cubes. Seven can be oriented independently, and the orientation of the eighth depends on the preceding seven, giving 3⁷ (2,187) possibilities.

There are 12!/2 (239,500,800) ways to arrange the edges, since an odd permutation of the corners implies an odd permutation of the edges as well. Eleven edges can be flipped independently, with the flip of the twelfth depending on the preceding ones, giving 2¹¹ (2,048) possibilities.

There are thus exactly 43,252,003,274,489,856,000 reachable permutations. The full number of arrangements of the pieces that make up the Cube is 519,024,039,293,878,272,000, or 519 quintillion, but only one in twelve of these is actually reachable by legal moves. This is because there is no sequence of moves that will swap a single pair of pieces, or rotate a single corner or edge cube, on its own.
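The arithmetic in the last three paragraphs is easy to check mechanically:

```python
from math import factorial

# Corner pieces: 8! positions, 3^7 independent orientations.
corners = factorial(8) * 3**7
# Edge pieces: 12!/2 even permutations, 2^11 independent flips.
edges = (factorial(12) // 2) * 2**11

reachable = corners * edges
assert factorial(8) == 40_320
assert 3**7 == 2_187
assert factorial(12) // 2 == 239_500_800
assert 2**11 == 2_048
assert reachable == 43_252_003_274_489_856_000

# Dropping the parity and orientation constraints multiplies the
# count by 2 (edge parity) × 3 (corner twist) × 2 (edge flip) = 12:
assert reachable * 12 == 519_024_039_293_878_272_000
```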

(1) This problem is sometimes also called Guthrie's problem after F. Guthrie, who first conjectured the theorem in 1852. The conjecture was then communicated to de Morgan and thence into the general community. In 1878, Cayley wrote the first paper on the conjecture.

(2) One of these areas is the topology of networks, first developed by
Leonhard Euler in 1735. His work in this field was inspired by the following problem: The Seven Bridges of Konigsberg.

Friday, August 6, 2010

Virtual Reality Technology and Alzheimer's Disease

The use of virtual reality technology in the assessment and study of, and possible assistance in the rehabilitation of, memory deficits associated with Alzheimer's disease.


Alzheimer's disease is the most common cause of dementia, and is particularly common in older people. Because it is the most common cause, Alzheimer's disease is often equated with the general term dementia. However, there are many other causes of dementia. Alzheimer's disease is a specific form of dementia, with very specific microscopic brain abnormalities.

Alzheimer's disease is not merely a problem of memory. Additional mental and behavioural problems often affect people who have the disease, and may influence quality of life, caregivers, and the need for institutionalization.

Depression, for example, affects 20–30% of people who have Alzheimer's, and about 20% have anxiety. Psychosis (often delusions of persecution) and agitation or aggression also frequently accompany this disease. Common symptoms and comorbidities include:

• Mental deterioration of organic or functional origin

• Loss of cognitive ability in a previously unimpaired person, beyond what might be expected from normal aging. Areas particularly affected include memory, attention, judgement, language and problem solving.

• Loss (usually gradual) of mental abilities such as thinking, remembering, and reasoning – not a disease in itself, but a group of symptoms that may accompany some diseases or conditions affecting the brain.

• Deteriorated mental state due to a disease process, resulting from any of many disorders of the nervous system.

• Cognitive deficit or memory impairment due to progressive brain injury.

Distinguishing Alzheimer's disease from other causes of dementia is not always as easy and straightforward as defining these terms. In practice, people and their disorders of behaviour, or behaviours of concern are far more complex than the simple definitions sometimes provided.

Establishing the patient's history, abilities, and the natural course of the disorder's development – involving short-term memory, speech and language, personality, decision-making and judgment, among others – is often needed in diagnosing the disease. Routine diagnostic steps therefore include a careful history, mental status screening, laboratory and imaging studies, and neuropsychologic testing.

Differential diagnosis of Alzheimer's disease

It is sometimes difficult to differentiate dementia caused by Alzheimer's from delirium. In addition, while several features distinguish dementia from depression, the two can coexist and the distinction may be uncertain.

Prominent motor signs such as gait disturbance are a characteristic feature of patients with vascular dementia. In contrast, the NINCDS-ADRDA criteria for Alzheimer's disease state that 'gait disturbance at the onset or very early in the course of the illness' makes the diagnosis of probable Alzheimer's disease uncertain or unlikely. However, clinical studies suggest that gait disturbance is not restricted to the later stages of Alzheimer's disease, and studies have identified abnormalities of gait and balance in patients with early Alzheimer's disease(1).

It had been thought that dementias without prominent motor signs included Alzheimer's disease, frontotemporal dementia, Creutzfeldt-Jakob disease, and others, and the clinical pattern of gait disturbance in patients with early Alzheimer's disease has attracted less attention to date.


Differential diagnosis between the types of dementia matters because the treatments available for Alzheimer's, while limited in their effectiveness, usually give the best patient outcomes when begun early in the course of the disease. Diagnosis and diagnostic tools include:

Taking medical history.

Physical examination including evaluations of hearing and sight, as well as blood pressure and pulse readings, etc.

Standard laboratory tests including blood and urine tests designed to help eliminate other possible conditions.

Neuropsychological testing including assessing memory, problem-solving, attention, vision-motor coordination and abstract thinking, such as performing simple calculations.

Brain imaging, or a structural brain scan such as CT or MRI, to help rule out brain tumours or blood clots as the reason for symptoms; and, more recently,

The use of virtual reality


Virtual reality can be used to simulate real-world environments and to test patients' ability to navigate them. Work has been carried out to compare previously described real-world navigation tests with a virtual reality version simulating the same navigational environment(2). The authors of this research conclude that virtual navigation testing reveals deficits in aging and Alzheimer's disease that are associated with potentially grave risks to patients and the community.

In another study, in the United Kingdom(3), researchers aimed to examine the feasibility of virtual reality technology for use by people with dementia (PWD). Data were obtained directly from six PWD regarding their experiences with a virtual environment of a large outdoor park. A user-centred method was developed to assess:

(a) presence;
(b) user inputs;
(c) display quality;
(d) simulation fidelity; and
(e) overall system usability.

The extent to which PWD could perform four functional activities in the virtual environment (e.g., mailing a letter) was also investigated. In addition, the physical and psychological well-being of PWD while interacting with the virtual environment was assessed objectively, by recording heart rate during the virtual reality sessions, and subjectively, with discrete questionnaire items and real-time prompts.

(1) O'Keeffe S.T., Kazeem H., Philpott R.M., Playfer J.R., Gosney M., Lye M. Gait disturbance in Alzheimer's disease: a clinical study. July 1996.

(2) NEUROLOGY 2008;71:888-895
Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality
Laura A. Cushman, PhD, Karen Stein and Charles J. Duffy, MD, PhD
From the Departments of Neurology, Brain and Cognitive Sciences, Neurobiology and Anatomy, Ophthalmology, and Psychiatry (L.A.C.) and Center for Visual Science,
The University of Rochester Medical Center, Rochester, NY.

(3) Flynn D, van Schaik P, Blackman T, Femcott C, Hobbs B, Calderon C.
School of Social Sciences and Law, The University of Teesside, Middlesbrough, United Kingdom

Thursday, August 5, 2010

Quantum-connected computers overturn the uncertainty principle


Back in the mid-eighties, while completing a pure mathematics degree and using the then state-of-the-art minicomputers, the PDP-11 family of processors (we couldn't afford the services of the much, much brawnier nitrogen-cooled Crays to solve sparse Hadamard matrices), I contributed an article to a UK computer journal discussing the future of computers, artificial intelligence, human interfaces and visualization. I guess you can call it a naive attempt to predict how systems will, or may, evolve. Of course this was through my own lens of experience, watching processor speeds rapidly increasing and memory, disk and everything else growing almost exponentially.

Of course the implicit question was - If all that has happened in the first 50 years of computer history, what will happen in the next 50 or so years?

Moore's Law is an empirical formula describing the evolution of processors which is often cited to predict future progress in the field, as it has proved quite accurate in the past: it states that the transistor count in an up-to-date processor will double roughly every 18 to 24 months, which means that computational capacity grows exponentially, doubling about every 2 years. Even as processors become faster, the science of computability describes, amongst other things, a class of 'NP-hard' problems – sometimes described as intractable or combinatorially exploding – whose computation time is believed to grow exponentially with the size of the input.
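As a back-of-the-envelope sketch of the doubling rule (taking the roughly 2,300 transistors of the 1971 Intel 4004 as a starting point, and a 2-year doubling period as an assumption):

```python
# Moore's law as a simple exponential: transistor count doubles
# every `doubling_years`, starting from the ~2,300 transistors
# of the 1971 Intel 4004.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

assert transistors(1971) == 2300
assert transistors(1991) == 2300 * 2**10  # a thousand-fold in 20 years
```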

An example of such a problem is finding the exit of a labyrinth: it doesn't require much effort if there is only one crossing, but it becomes far more demanding in terms of resources as the number of crossings grows, until it is either impossible to compute because of limited resources, or computable but requiring an unacceptable amount of time.

Many, if not all, artificial-intelligence-related algorithms are extremely demanding in terms of computational resources, because they are either NP-hard or involve combinatorics of growing complexity.

Not all developments in processing architecture stem from a single genesis. For example, recently, IBM researchers have made huge strides in mapping the architecture of the Macaque monkey brain. They have traced long-distance connections in the brain - the "interstate highways" which transmit information between distant areas of the brain. Their maps may help researchers grasp how and where the brain sends information better than ever before, and possibly develop processors that can keep up with our brain's immense computational power and navigate its complex architecture.

Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.

Another trajectory – that of Quantum Computers

The uncertainty principle is a key underpinning of quantum mechanics: a particle's position or its velocity can be measured precisely, but not both. Now, according to five physicists from Germany, Switzerland, and Canada, in a letter published in Nature Physics(1), quantum computer memory could let us get around this principle.

Paul Dirac, who shared the 1933 Nobel Prize in physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory", provided a concrete illustration of what the uncertainty principle means. He explained that one of the very few ways to measure a particle's position is to hit it with a photon and then chart where the photon lands on a detector. That gives you the particle's position, yes, but it has also fundamentally changed the particle's velocity, and the only way to learn the velocity would consequently alter the position.

That has more or less been the status quo of quantum mechanics since Werner Heisenberg first published his theories in 1927, and no attempt to overturn it – including several by Albert Einstein himself – has proved successful. But now the five physicists hope to succeed where Einstein failed. If they do, it will be because of something that wasn't even theorized until many years after Einstein's death: quantum computers.

Key to quantum computers are qubits, the individual units of quantum memory. A particle would be entangled with a quantum memory large enough to hold all its possible states and degrees of freedom. The particle would then be separated from the memory and one of its features measured. If, say, its position was measured, the researcher would then ask the keeper of the quantum memory to measure its velocity.

Because the uncertainty principle wouldn't extend from the particle to the memory, nothing would prevent the keeper from measuring this second quantity, allowing exact (or, for subtle mathematical reasons, almost exact) measurements of both.

It would take lots of qubits - far more than the dozen or so we've so far been able to generate at any one time - to entangle all that quantum information from a particle, and the task of entangling so many qubits together would be extremely fragile and tricky. Not impossibly tricky, but still way beyond what we can do now.

(1) Nature Physics
Published online: 25 July 2010 doi:10.1038/nphys1734
The uncertainty principle in the presence of quantum memory
Mario Berta, Matthias Christandl, Roger Colbeck, Joseph M. Renes & Renato Renner