August 26, 2015
University of Alberta paleontologists have discovered a new species of lizard, named Gueragama sulamericana, in the municipality of Cruzeiro do Oeste in Southern Brazil, in the rock outcrops of a Late Cretaceous desert dated to approximately 80 million years ago.
“The roughly 1700 species of iguanas are almost without exception restricted to the New World, primarily the Southern United States down to the tip of South America,” says Michael Caldwell, biological sciences professor from the University of Alberta and one of the study’s authors. Oddly, however, iguanas’ closest relatives, including chameleons and bearded dragons, are all Old World. Iguanians are one of the most diverse groups of extant lizards, spanning the acrodontan iguanians (whose teeth are fused to the top of the jaw), which dominate the Old World, and the non-acrodontans of the New World. The new species is the first acrodontan found in South America, suggesting that both groups of ancient iguanians achieved a worldwide distribution before the final breakup of Pangaea.
A terrestrial Noah’s Ark
“This fossil is an 80 million year old specimen of an acrodontan in the New World,” explains Caldwell. “It’s a missing link in the sense of the paleobiogeography and possibly the origins of the group, so it’s pretty good evidence to suggest that back in the lower part of the Cretaceous, the southern part of Pangaea was still a kind of single continental chunk.”
Distributions of plants and animals from the Late Cretaceous reflect the ancestry of Pangaea when it was whole. “This Gueragama sulamericana fossil indicates that the group is old, that it’s probably Southern Pangaean in its origin, and that after the breakup, the acrodontans and chameleon group dominated in the Old World, and the iguanid side arose out of this acrodontan lineage that was left alone on South America,” says Caldwell. “South America remained isolated until about 5 million years ago. That’s when it bumps into North America, and we see this exchange of organisms north and south. It was kind of like a floating Noah’s Ark for a very long time, about 100 million years. This is an Old World lizard in the New World at a time when we weren’t expecting to find it. It answers a few questions about iguanid lizards and their origin.”
The University of Alberta is a world leader in paleontology. This study was a collaboration between the University of Alberta and scientists in Brazil. Caldwell says of the collaboration, “It’s providing an opportunity for our students and research groups to expand our expertise and interests into an ever-increasing diversity of organisms within this group of animals called snakes and lizards.”
The lead author of the paper is Caldwell’s PhD student, Tiago Simoes, a Vanier scholar. “As with many other scientific findings, this one raises a number of questions we haven’t previously considered,” says Simoes. “This finding raises a number of biogeographic and faunal turnover questions of great interest to both paleontologists and herpetologists that we hope to answer in the future.”
In terms of next steps, Caldwell notes, “Each answer only rattles the questions harder. The evolution of the group is much older than has been previously thought, which means we can push an acrodontan to 80 million years in South America. We now need to focus on much older units of rock if we’re going to find the next step in the process.”
The findings, “A stem acrodontan lizard in the Cretaceous of Brazil revises early lizard evolution in Gondwana,” were published in the journal Nature Communications, one of the world’s top multidisciplinary scientific journals.
August 16, 2015
Koko the gorilla is best known as the subject of a lifelong study to teach her a silent form of communication, American Sign Language. But some of the simple sounds she has learned may change the perception that humans are the only primates with the capacity for speech.
In 2010, Marcus Perlman started research work at The Gorilla Foundation, where Koko has spent more than 40 years living immersed with humans — interacting for many hours each day with psychologist Penny Patterson and biologist Ron Cohn.
“I went there with the idea of studying Koko’s gestures, but as I got into watching videos of her, I saw her performing all these amazing vocal behaviors,” says Perlman, now a postdoctoral researcher in the lab of University of Wisconsin-Madison psychology Professor Gary Lupyan.
The vocal and breathing behaviors Koko had developed were not necessarily supposed to be possible.
“Decades ago, in the 1930s and ’40s, a couple of husband-and-wife teams of psychologists tried to raise chimpanzees as much as possible like human children and teach them to speak. Their efforts were deemed a total failure,” Perlman says. “Since then, there is an idea that apes are not able to voluntarily control their vocalizations or even their breathing.”
Instead, the thinking went, the calls apes make pop out almost reflexively in response to their environment — the appearance of a dangerous snake, for example.
And the particular vocal repertoire of each ape species was thought to be fixed. They didn’t really have the ability to learn new vocal and breathing-related behaviors.
These limits fit a theory on the evolution of language: that the human ability to speak is entirely unique, with no parallel among the nonhuman primate species still around today.
“This idea says there’s nothing that apes can do that is remotely similar to speech,” Perlman says. “And, therefore, speech essentially evolved — completely new — along the human line since our last common ancestor with chimpanzees.”
However, in a study published online in July in the journal Animal Cognition, Perlman and collaborator Nathaniel Clark of the University of California, Santa Cruz, sifted through 71 hours of video of Koko interacting with Patterson, Cohn and others, and found repeated examples of Koko performing nine distinct voluntary behaviors that required control over her vocalization and breathing. These were learned behaviors, not part of the typical gorilla repertoire.
Among other things, Perlman and Clark watched Koko blow a raspberry (or blow into her hand) when she wanted a treat, blow her nose into a tissue, play wind instruments, huff moisture onto a pair of glasses before wiping them with a cloth and mimic phone conversations by chattering wordlessly into a telephone cradled between her ear and the crook of an elbow.
“She doesn’t produce a pretty, periodic sound when she performs these behaviors, like we do when we speak,” Perlman says. “But she can control her larynx enough to produce a controlled grunting sound.”
Koko can also cough on command — not particularly groundbreaking human behavior, but impressive for a gorilla because it requires her to close off her larynx.
“The motivation for the behaviors varies,” Perlman says. “She often looks like she plays her wind instruments for her own amusement, but she tends to do the cough at the request of Penny and Ron.”
These behaviors are all learned, Perlman figures, and the result of living with humans since Koko was just six months old.
“Presumably, she is no more gifted than other gorillas,” he says. “The difference is just her environmental circumstances. You obviously don’t see things like this in wild populations.”
This suggests that some of the evolutionary groundwork for the human ability to speak was in place at least by the time of our last common ancestor with gorillas, estimated to be around 10 million years ago.
“Koko bridges a gap,” Perlman says. “She shows the potential under the right environmental conditions for apes to develop quite a bit of flexible control over their vocal tract. It’s not as fine as human control, but it is certainly control.”
Orangutans have also demonstrated some impressive vocal and breathing-related behavior, according to Perlman, indicating the whole great ape family may share the abilities Koko has learned to tap.
August 12, 2015
Using an innovative method, EPFL scientists show that the brain is not as compact as we have long thought.
To study the fine structure of the brain, including the connections between neurons known as synapses, scientists must use electron microscopes. However, the tissue must first be fixed to prepare it for this high-magnification imaging method. This process causes the brain to shrink; as a result, microscope images can be distorted, e.g. showing neurons to be much closer together than they actually are. EPFL scientists have now solved the problem by using a technique that rapidly freezes the brain, preserving its true structure. The work is published in eLife.
The shrinking brain
Recent years have seen an upsurge of brain imaging, with renewed interest in techniques like electron microscopy, which allows us to observe and study the architecture of the brain in unprecedented detail. But at the same time, these techniques have also revived old problems associated with how this delicate tissue is prepared before images can be collected.
Typically, the brain is fixed with stabilizing agents, such as aldehydes, and then encased, or embedded, in a resin. However, it has been known since the mid-sixties that this preparation process causes the brain to shrink by at least 30 percent. This in turn, distorts our understanding of the brain’s anatomy, e.g. the actual proximity of neurons, the structures of blood vessels etc.
The freezing brain
A study at EPFL led by Natalya Korogod, in the lab of Graham Knott and working with Carl Petersen, has successfully used an innovative method called “cryofixation” to prevent brain shrinkage during preparation for electron microscopy. The method, whose roots go back to 1965, uses jets of liquid nitrogen to “snap-freeze” brain tissue down to −90°C within milliseconds. The brain tissue here was mouse cerebral cortex.
The rapid freezing method is able to prevent the water in the tissue from forming crystals, as it would do in a regular freezer, by also applying very high pressures. Water crystals can severely damage the tissue by rupturing its cells. But in this high-pressure freezing method, the water turns into a kind of glass, preserving the original structures and architecture of the tissue.
The next step is to embed the frozen tissue in resin. This requires removing the glassified water, replacing it first with acetone, which is still liquid at the low temperatures of cryofixation, and then, over a period of days, with resin, allowing the glassified water to be pushed out of the brain slowly and gently.
The real brain
After the brain was cryofixed and embedded, it was observed and photographed using 3D electron microscopy. The researchers then compared the cryofixed brain images to those taken from a brain fixed with a chemical-only method.
The analysis showed that the chemically fixed brain was much smaller in volume, showing a significant loss of extracellular space – the space around neurons. In addition, supporting brain cells called “astrocytes” seemed to be less connected to neurons and even to blood vessels in the brain. And finally, the connections between neurons, the synapses, seemed significantly weaker in the chemically fixed brain than in the cryofixed one.
The researchers then compared their measurements of the brain to those calculated in functional studies – studies that measure the time it takes for a molecule to travel across that brain region. To the researchers’ surprise, the data matched, adding even more evidence that cryofixation preserves the real anatomy of the brain.
“All this shows us that high-pressure cryofixation is a very attractive method for brain imaging,” says Graham Knott. “At the same time, it challenges previous imaging efforts, which we might have to re-examine in light of new evidence.” His team is now aiming to use cryofixation on other parts of the brain and even other types of tissue.
July 13, 2015
Melbourne researchers have identified a protein responsible for preserving the antibody-producing cells that lead to long-term immunity after infection or vaccination.
Dr Kim Good-Jacobson, Professor David Tarlinton and colleagues from the Walter and Eliza Hall Institute discovered that a protein called Myb was essential for antibody-producing plasma cells to migrate into bone marrow, preserving them for many years or even decades. Their findings were published in the Journal of Experimental Medicine.
Dr Good-Jacobson said plasma cells were created when the immune system was exposed to pathogens such as viruses or bacteria. “When our immune system encounters a new pathogen, it can create plasma cells that secrete antibodies to specifically prevent future infections, generating immunity,” she said.
“Our bone marrow is like a long-term storage facility for plasma cells, allowing them to continue producing antibodies to protect against future infections. Until now, it was not known why some plasma cells moved into the bone marrow, while others remained in the blood stream and perished after a few days.”
The research team discovered that when the gene that produces the protein Myb was removed, plasma cells were no longer able to move into the bone marrow to provide long-term immunity. “Myb is a type of protein called a transcription factor, which binds to DNA and, in effect, switches genes ‘on’ or ‘off’,” Dr Good-Jacobson said.
“We found that if a plasma cell produced Myb at some stage during an immune response, then those plasma cells had the ability to migrate into the bone marrow. If we can understand how to flip the molecular switch in plasma cells and activate Myb production, we might be able to encourage the immune system to create long-term immunity for a range of infections.”
Plasma cells are created during an immune response in temporary structures called germinal centres, Dr Good-Jacobson said. “Germinal centres act as a rapid proto-typing facility, improving the design of antibodies to better recognise invading pathogens in the future,” she said. “The Myb protein marks the plasma cells that produce high-quality antibodies for preservation.”
Professor Tarlinton said the discovery would mean researchers could now search for the trigger of Myb production and find out what genes Myb controls. “Now that we know Myb is critical in creating long-term immunity, we can begin dissecting the pathways it uses to mark plasma cells for storage and the genes involved in migrating to the bone marrow,” he said.
“Some pathogens, such as malaria, typically trigger the creation of short-lived plasma cells. If we don’t create long-lived plasma cells, we don’t develop lasting immunity to the disease. If we can trigger the expression of Myb in plasma cells responding to pathogens – either by infection or by immunisation – we might be able to convince the immune system to store these plasma cells in the bone marrow to offer protection against future infections.”
June 24, 2015
Moving closer to the possibility of “materials that compute” and wearing your computer on your sleeve, researchers at the University of Pittsburgh Swanson School of Engineering have designed a responsive hybrid material that is fueled by an oscillatory chemical reaction and can perform computations based on changes in the environment or movement, and potentially even respond to human vital signs. The material system is sufficiently small and flexible that it could ultimately be integrated into a fabric or introduced as an inset into a shoe.
Anna C. Balazs, Ph.D., distinguished professor of chemical and petroleum engineering, and Steven P. Levitan, Ph.D., John A. Jurenko professor of electrical and computer engineering, integrated models for self-oscillating polymer gels and piezoelectric micro-electro-mechanical systems to devise a new reactive material system capable of performing computations without external energy inputs, amplification or computer mediation.
Their research, “Achieving synchronization with active hybrid materials: Coupling self-oscillating gels and piezoelectric (PZ) films,” appeared June 24th in the journal Scientific Reports, published by Nature (DOI: 10.1038/srep11577). The studies combine Balazs’ research in Belousov-Zhabotinsky (BZ) gels, a substance that oscillates in the absence of external stimuli, and Levitan’s expertise in computational modeling and oscillator-based computing systems. By working with Dr. Victor V. Yashin, research assistant professor of chemical and petroleum engineering and lead author on the paper, the researchers developed design rules for creating a hybrid “BZ-PZ” material.
“The BZ reaction drives the periodic oxidation and reduction of a metal catalyst that is anchored to the gel; this, in turn, makes the gel swell and shrink. We put a thin piezoelectric (PZ) cantilever over the gel so that when the PZ is bent by the oscillating gel, it generates an electric potential (voltage). Conversely, an electric potential applied to the PZ cantilever causes it to bend,” said Balazs. “So, when a single BZ-PZ unit is wired to another such unit, the expansion of the oscillating BZ gel in the first unit deflects the piezoelectric cantilever, which produces an electrical voltage. The generated voltage in turn causes a deflection of the cantilever in the second unit; this deflection imposes a force on the underlying BZ gel that modifies its oscillations. The resulting ‘see-saw-like’ oscillation permits communication and an exchange of information between the units.”
Multiple BZ-PZ units can be connected in series or in parallel, allowing more complicated patterns of oscillation to be generated and stored in the system. In effect, these different oscillatory patterns form a type of “memory,” allowing the material to be used for computation. Levitan adds, however, that the computations would not be general-purpose, but rather specific to pattern matching and recognition, or other non-Boolean operations.
“Imagine a group of organ pipes, and each is a different chord. When you introduce a new chord, one resonates with that particular pattern,” Levitan said. “Similarly, let’s say you have an array of oscillators and they each have an oscillating pattern. Each set of oscillators would reflect a particular pattern. Then you introduce a new external input pattern, say from a touch or a heartbeat. The materials themselves recognize the pattern and respond accordingly, thereby performing the actual computing.”
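Levitan’s organ-pipe analogy can be sketched in software. This is purely an illustration of the idea – the real device would do the matching in the coupled gel–cantilever dynamics, not in code – and the patterns, labels and similarity measure below are all invented:

```python
# Abstract sketch of oscillator-based pattern matching: each stored pattern is
# a vector describing an oscillation, and the input "resonates" with (here:
# is closest to) the stored pattern it best matches. Illustrative only.

def closest_pattern(stored, observed):
    """Return the label of the stored pattern nearest the observed one."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(stored, key=lambda label: distance(stored[label], observed))

stored = {
    "resting heartbeat": [0.0, 0.5, 1.0, 0.5],
    "walking gait":      [0.0, 1.0, 0.0, 1.0],
}
observed = [0.1, 0.9, 0.1, 1.0]  # noisy input, e.g. from a sensor in a shoe

print(closest_pattern(stored, observed))  # walking gait
```

The material analogue of `distance` is how strongly an input pattern drives each stored oscillator toward synchronization; no Boolean logic is involved.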
Developing so-called “materials that compute” addresses limitations inherent to the systems currently used by researchers to perform either chemical computing or oscillator-based computing. Chemical computing systems are limited by both the lack of an internal power system and the rate of diffusion as the chemical waves spread throughout the system, enabling only local coupling. Further, oscillator-based computing has not been translated into a potentially wearable material. The hybrid BZ-PZ model, which has never been proposed previously, solves these problems and points to the potential of designing synthetic material systems that are self-powered.
Balazs and Levitan note that the current BZ-PZ gel model oscillates with periods of tens of seconds, which would allow for simple non-Boolean operations or recognition of patterns such as human movement. The next step for Drs. Balazs and Levitan is to add an input layer for the pattern recognition, something that has been accomplished in other technologies but will be applied to self-oscillating gels and piezoelectric films for the first time.
June 15, 2015
Humans are unlikely to be the only animal capable of self-awareness, a new study has shown.
Conducted by University of Warwick researchers, the study found that humans and other animals capable of mentally simulating environments require at least a primitive sense of self. The finding suggests that any animal that can simulate environments must have a form of self-awareness.
Self-awareness is often viewed as one of man’s defining characteristics, but the study strongly suggests that it is not unique to mankind and is instead likely to be common among animals.
The researchers, from the University of Warwick’s Departments of Psychology and Philosophy, used thought experiments to discover which capabilities animals must have in order to mentally simulate their environment.
Commenting on the research Professor Thomas Hills, study co-author from Warwick’s Department of Psychology, said: “The study’s key insight is that those animals capable of simulating their future actions must be able to distinguish between their imagined actions and those that are actually experienced.”
The researchers were inspired by work conducted in the 1950s on maze navigation in rats. It was observed that rats, at points in the maze that required them to make decisions on what they would do next, often stopped and appeared to deliberate over their future actions.
Recent neuroscience research found that at these ‘choice points’ rats and other vertebrates activate regions of their hippocampus that appear to simulate choices and their potential outcomes.
Professor Hills and Professor Stephen Butterfill, from Warwick’s Department of Philosophy, created different descriptive models to explain the process behind the rat’s deliberation at the ‘choice points’.
One model, the Naive Model, assumed that animals inhibit action during simulation. However, this model created false memories, because the animal would be unable to tell the difference between real and imagined actions.
A second, the Self-actuating Model, was able to solve this problem by ‘tagging’ real versus imagined experience. Hills and Butterfill called this tagging the ‘primal self’.
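The contrast between the two models can be caricatured in a few lines of code. The representation is entirely invented (the paper works with descriptive models, not software); it only illustrates why tagging matters:

```python
# Caricature of the two models. A naive agent stores every simulated action in
# memory; a self-actuating agent tags episodes as real or imagined, so only
# real experience enters memory. All names and structures are illustrative.

def simulate(agent, action):
    """Deliberate at a choice point: imagine taking an action."""
    episode = {"action": action, "imagined": True}
    if agent["tags_experience"]:       # the "primal self" marks this as imagined
        return episode                 # ...so it never enters memory
    agent["memory"].append(episode)    # naive model: a false memory is created
    return episode

def act(agent, action):
    """Actually take an action and remember it."""
    agent["memory"].append({"action": action, "imagined": False})

naive = {"tags_experience": False, "memory": []}
tagged = {"tags_experience": True, "memory": []}

for agent in (naive, tagged):
    simulate(agent, "turn left")   # deliberation at the choice point
    act(agent, "turn right")       # the action actually taken

print(len(naive["memory"]), len(tagged["memory"]))  # 2 1
```

The naive agent ends up "remembering" a turn it never made; the tagged agent's memory contains only real experience, which is the problem the primal self solves.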
Commenting on the finding, Professor Hills said: “The study answers a very old question: do animals have a sense of self? Our first aim was to understand the recent neural evidence that animals can project themselves into the future. What we wound up understanding is that, in order to do so, they must have a primal sense of self.”
“As such, humans must not be the only animal capable of self-awareness. Indeed, the answer we are led to is that anything, even robots, that can adaptively imagine themselves doing what they have not yet done, must be able to separate the knower from the known.”
The study, “From foraging to autonoetic consciousness: The primal self as a consequence of embodied prospective foraging,” is published by Current Zoology.
June 3, 2015
An international team of scientists led by Cardiff University researchers has provided the strongest evidence yet of what causes schizophrenia – a condition that affects around 1% of the global population.
Published in the journal Neuron, their work presents strong evidence that disruption of a delicate chemical balance in the brain is heavily implicated in the disorder.
In the largest ever study of its kind, the team found that disease-linked mutations disrupt specific sets of genes contributing to excitatory and inhibitory signalling, the balance of which plays a crucial role in healthy brain development and function.
The breakthrough builds on two landmark studies led by members of the Cardiff University team, published last year in the journal Nature.
“We’re finally starting to understand what goes wrong in schizophrenia,” says lead author Dr Andrew Pocklington from Cardiff University’s MRC Centre for Neuropsychiatric Genetics and Genomics.
“Our study marks a significant step towards understanding the biology underpinning schizophrenia, which is an incredibly complex condition and has up until very recently kept scientists largely mystified as to its origins.
“We now have what we hope is a pretty sizeable piece of the jigsaw puzzle that will help us develop a coherent model of the disease, while helping us to rule out some of the alternatives.
“A reliable model of disease is urgently needed to direct future efforts in developing new treatments, which haven’t really improved a great deal since the 1970s.”
Professor Hugh Perry, who chairs the Medical Research Council Neuroscience and Mental Health Board said: “This work builds on our understanding of the genetic causes of schizophrenia – unravelling how a combination of genetic faults can disrupt the chemical balance of the brain.
“Scientists in the UK, as part of an international consortium, are uncovering the genetic causes of a range of mental health issues, such as schizophrenia.
“In the future, this work could lead to new ways of predicting an individual’s risk of developing schizophrenia and form the basis of new targeted treatments that are based on an individual’s genetic makeup.”
A healthy brain is able to function properly thanks to a precise balance between chemical signals that excite and inhibit nerve cell activity. Researchers studying psychiatric disorders have previously suspected that disruption of this balance contributes to schizophrenia.
The first evidence that schizophrenia mutations interfere with excitatory signalling was uncovered in 2011 by the same team, based at Cardiff University’s MRC Centre for Neuropsychiatric Genetics and Genomics.
This paper not only confirms their previous findings, but also provides the first strong genetic evidence that disruption of inhibitory signalling contributes to the disorder.
To reach their conclusions, the scientists compared the genetic data of 11,355 patients with schizophrenia against a control group of 16,416 people without the condition.
They looked for types of mutation known as copy number variants (CNVs), mutations in which large stretches of DNA are either deleted or duplicated.
Comparing the CNVs found in people with schizophrenia to those found in unaffected people, the team was able to show that the mutations in individuals with the disorder tended to disrupt genes involved in specific aspects of brain function.
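The core of such a case-control comparison is asking whether CNVs that hit a given gene set occur more often in patients than in controls. A minimal sketch of that arithmetic, with entirely invented counts (they are not from the study, and the actual analysis would also involve significance testing and covariate correction):

```python
# Sketch of a CNV case-control enrichment comparison. The counts below are
# hypothetical, chosen only to illustrate the odds-ratio calculation.

def odds_ratio(cases_hit, cases_total, controls_hit, controls_total):
    """Odds ratio for carrying a qualifying CNV, cases vs. controls."""
    case_odds = cases_hit / (cases_total - cases_hit)
    control_odds = controls_hit / (controls_total - controls_hit)
    return case_odds / control_odds

# Hypothetical: 150 of 11,355 patients vs. 80 of 16,416 controls carry a CNV
# disrupting the gene set of interest.
result = odds_ratio(150, 11355, 80, 16416)
print(f"odds ratio = {result:.2f}")  # > 1 indicates enrichment in patients
```

An odds ratio well above 1 across many gene sets tied to excitatory and inhibitory signalling is the kind of signal that would implicate those pathways.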
The disease-causing effects of CNVs are also suspected to be involved in other neurodevelopmental disorders such as intellectual disability, Autism Spectrum Disorder and ADHD.
Around 635,000 people in the UK will at some stage in their lives be affected by schizophrenia.
The estimated cost of schizophrenia and psychosis to society is around £11.8 billion a year.
The symptoms of schizophrenia can be extremely disruptive, and have a large impact on a person’s ability to carry out everyday tasks, such as going to work, maintaining relationships and caring for themselves or others.
April 22, 2015
How is lightning initiated in thunderclouds? This is difficult to answer – how do you measure electric fields inside large, dangerously charged clouds? It was discovered, more or less by coincidence, that cosmic rays provide suitable probes to measure electric fields within thunderclouds. This surprising finding is published in Physical Review Letters on April 24th. The measurements were performed with the LOFAR radio telescope located in the Netherlands.
‘We used to throw away LOFAR measurements taken during thunderstorms. They were too messy,’ says astronomer Pim Schellart. ‘Well, we didn’t actually throw them away of course, we just didn’t analyze them.’ Schellart, who completed his PhD in March this year at Radboud University in Nijmegen under the supervision of Prof. Heino Falcke, is interested in cosmic rays. These high-energy particles, originating from exploding stars and other astrophysical sources, continuously bombard Earth from space.
High in the atmosphere these particles strike atmospheric molecules and create ‘showers’ of elementary particles. These showers can also be measured from the radio emission that is generated when their constituent particles are deflected by the magnetic field of the Earth. The radio emission also gives information about the original particles. These measurements are routinely conducted with LOFAR at ASTRON in Dwingeloo, but not during thunderstorms.
That changed when the data were examined in a collaborative effort with astrophysicist Gia Trinh, Prof. Olaf Scholten from the University of Groningen and lightning expert Ute Ebert from the Centrum Wiskunde & Informatica in Amsterdam.
‘We modeled how the electric field in thunderstorms can explain the different measurements. This worked very well. How the radio emission changes gives us a lot of information about the electric fields in thunderstorms. We could even determine the strength of the electric field at a certain height in the cloud.’ says Schellart.
This field can be as strong as 50 kV/m. This translates into a voltage of hundreds of millions of volts over a distance of multiple kilometers: a thundercloud contains enormous amounts of energy.
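The quoted numbers can be sanity-checked with one line of arithmetic: in a uniform-field approximation, voltage is field strength times distance, so 50 kV/m sustained over a few kilometers does indeed integrate to hundreds of megavolts (the 4 km depth used here is illustrative, not a figure from the study):

```python
# Rough sanity check: voltage = field strength x distance (uniform field).
field_v_per_m = 50e3   # 50 kV/m, the peak field reported in the cloud
distance_m = 4e3       # illustrative cloud depth of 4 km

voltage = field_v_per_m * distance_m
print(f"{voltage / 1e6:.0f} MV")  # 200 MV -- hundreds of millions of volts
```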
Lightning is a highly unpredictable natural phenomenon that inflicts damage on infrastructure and claims victims around the world. This new method of measuring electric fields in thunderclouds will contribute to a better understanding and ultimately better prediction of lightning activity. Current measurement methods using planes, balloons or small rockets are dangerous and too localized. Most importantly, the presence of the measurement equipment influences the measurements. Cosmic rays probe the thunderclouds from top to bottom. Moving at almost the speed of light, they provide a near-instantaneous ‘picture’ of the electric fields in the cloud. Moreover, they are created by nature and are freely available.
‘This research is an exemplary form of interdisciplinary collaboration between astronomers, particle physicists and geophysicists’, says Heino Falcke. ‘We hope to develop the model further to ultimately answer the question: how is lightning initiated within thunderclouds?’
February 14, 2015
We’re so used to murder mysteries that we don’t even notice how mystery authors play with time. Typically the murder occurs well before the midpoint of the book, but there is an information blackout at that point, and the reader learns what actually happened only on the last page.
If the last page were ripped out of the book, physicist Kater Murch, PhD, said, would the reader be better off guessing what happened by reading only up to the fatal incident or by reading the entire book?
The answer, so obvious in the case of the murder mystery, is less so in the world of quantum mechanics, where indeterminacy is fundamental rather than contrived for our reading pleasure.
Even if you know everything quantum mechanics can tell you about a quantum particle, said Murch, an assistant professor of physics in Arts & Sciences at Washington University in St. Louis, you cannot predict with certainty the outcome of a simple experiment to measure its state. All quantum mechanics can offer are statistical probabilities for the possible results.
The orthodox view is that this indeterminacy is not a defect of the theory, but rather a fact of nature. The particle’s state is not merely unknown, but truly undefined before it is measured. The act of measurement itself forces the particle to collapse to a definite state.
In the Feb. 13 issue of Physical Review Letters, Kater Murch describes a way to narrow the odds. By combining information about a quantum system’s evolution after a target time with information about its evolution up to that time, his lab was able to narrow the odds of correctly guessing the state of the two-state system from 50-50 to 90-10.
It’s as if what we did today changed what we did yesterday. And as this analogy suggests, the experimental results have spooky implications for time and causality – at least in the microscopic world to which quantum mechanics applies.
Measuring a phantom
Until recently physicists could explore the quantum mechanical properties of single particles only through thought experiments, because any attempt to observe them directly caused them to shed their mysterious quantum properties.
But in the 1980s and 1990s physicists invented devices that allowed them to measure these fragile quantum systems so gently that they don’t immediately collapse to a definite state.
The device Murch uses to explore quantum space is a simple superconducting circuit that enters quantum space when it is cooled to near absolute zero. Murch’s team uses the bottom two energy levels of this qubit, the ground state and an excited state, as their model quantum system. Between these two states, there are an infinite number of quantum states that are superpositions, or combinations, of the ground and excited states.
The quantum state of the circuit is detected by putting it inside a microwave box. A few microwave photons are sent into the box, where their quantum fields interact with the superconducting circuit. When the photons exit the box, they bear information about the quantum system.
Crucially, these “weak,” off-resonance measurements do not disturb the qubit, unlike “strong” measurements with photons that are resonant with the energy difference between the two states, which knock the circuit into one or the other state.
A quantum guessing game
In Physical Review Letters, Murch describes a quantum guessing game played with the qubit.
“We start each run by putting the qubit in a superposition of the two states,” he said. “Then we do a strong measurement but hide the result, continuing to follow the system with weak measurements.”
They then try to guess the hidden result, like readers trying to reconstruct the missing page of a murder mystery.
“Calculating forward, using the Born equation that expresses the probability of finding the system in a particular state, your odds of guessing right are only 50-50,” Murch said. “But you can also calculate backward using something called an effect matrix. Just take all the equations and flip them around. They still work and you can just run the trajectory backward.
“So there’s a backward-going trajectory and a forward-going trajectory, and if we look at them both together and weight the information in both equally, we get something we call a hindsight prediction, or ‘retrodiction.’”
The striking thing about the retrodiction is that it is 90 percent accurate. When the physicists check it against the stored measurement of the system’s earlier state, it is right nine times out of 10.
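The forward-plus-backward weighting Murch describes has a compact form in what theorists call the past-quantum-state formalism, where the probability of a hidden outcome a is proportional to Tr(Π_a ρ Π_a E): ρ carries the information gathered before the hidden measurement, and the effect matrix E carries the information gathered after it. The toy sketch below illustrates how this combination can sharpen a 50-50 forward prediction into a 90-10 retrodiction; all matrices and numbers are invented for illustration and are not the lab’s actual data or analysis.

```python
import numpy as np

# Projectors onto the two qubit states (ground |0>, excited |1>).
ground = np.array([[1.0, 0.0], [0.0, 0.0]])
excited = np.array([[0.0, 0.0], [0.0, 1.0]])

# Forward state rho: an equal superposition, so the Born rule alone
# predicts the hidden strong measurement's outcome only at 50-50.
rho = np.array([[0.5, 0.5], [0.5, 0.5]])

# Hypothetical effect matrix E: the weak measurements made AFTER the
# hidden readout happen to point strongly toward the excited state.
E = np.array([[0.1, 0.0], [0.0, 0.9]])

def born_probability(projector, rho):
    """Forward-only Born prediction: P(a) = Tr(Pi_a rho)."""
    return float(np.trace(projector @ rho).real)

def retrodiction(rho, E):
    """Weight forward (rho) and backward (E) information equally:
    P(a) proportional to Tr(Pi_a rho Pi_a E), then normalize."""
    w_ground = np.trace(ground @ rho @ ground @ E).real
    w_excited = np.trace(excited @ rho @ excited @ E).real
    total = w_ground + w_excited
    return w_ground / total, w_excited / total

print(born_probability(excited, rho))  # 0.5 -> a coin-flip guess
print(retrodiction(rho, E))            # (0.1, 0.9) -> a 90-10 guess
```

With only the forward state, the guess is a coin flip; folding in the backward-propagated information shifts the odds to 90-10, mirroring the improvement reported in the experiment.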
Down the rabbit hole
The quantum guessing game suggests ways to make both quantum computing and the quantum control of open systems, such as chemical reactions, more robust. But it also has implications for much deeper problems in physics.
For one thing, it suggests that in the quantum world time runs both backward and forward, whereas in the classical world it runs only forward.
“I always thought the measurement would resolve the time symmetry in quantum mechanics,” Murch said. “If we measure a particle in a superposition of states and it collapses into one of two states, well, that sounds like a process that goes forward in time.”
But in the quantum guessing experiment, time symmetry has returned. The improved odds imply the measured quantum state somehow incorporates information from the future as well as the past. And that implies that time, notoriously an arrow in the classical world, is a double-headed arrow in the quantum world.
“It’s not clear why in the real world, the world made up of many particles, time only goes forward and entropy always increases,” Murch said. “But many people are working on that problem, and I expect it will be solved in a few years.”
In a world where time is symmetric, however, is there such a thing as cause and effect? To find out, Murch proposes to run a qubit experiment that would set up feedback loops (which are chains of cause and effect) and try to run them both forward and backward.
“It takes 20 or 30 minutes to run one of these experiments,” Murch said, “several weeks to process it, and a year to scratch our heads to see if we’re crazy or not.”
“At the end of the day,” he said, “I take solace in the fact that we have a real experiment and real data that we plot on real curves.”
January 23, 2015
Recent developments in science are beginning to suggest that the universe naturally produces complexity. The emergence of life in general, and perhaps even of rational life with its associated technological culture, may be extremely common, argues Clemson researcher Kelly Smith in a paper recently published in the journal Space Policy.
What’s more, he suggests, this universal tendency has distinctly religious overtones and may even establish a truly universal basis for morality.
Smith, a philosopher and evolutionary biologist, applies recent theoretical developments in biology and complex systems theory to attempt new answers to the kinds of enduring questions about human purpose and obligation that have long been considered the sole province of the humanities.
He points out that scientists are increasingly beginning to discuss how the basic structure of the universe seems to favor the creation of complexity. The large scale history of the universe strongly suggests a trend of increasing complexity: disordered energy states produce atoms and molecules, which combine to form suns and associated planets, on which life evolves. Life then seems to exhibit its own pattern of increasing complexity, with simple organisms getting more complex over evolutionary time until they eventually develop rationality and complex culture.
And recent theoretical developments in biology and complex systems theory suggest this trend may be real, arising from the basic structure of the universe in a predictable fashion.
“If this is right,” says Smith, “you can look at the universe as a kind of ‘complexity machine’, which raises all sorts of questions about what this means in a broader sense. For example, does believing the universe is structured to produce complexity in general, and rational creatures in particular, constitute a religious belief? It need not imply that the universe was created by a God, but on the other hand, it does suggest that the kind of rationality we hold dear is not an accident.”
Smith sees another similarity to religion in the idea’s potential moral implications. If evolution tends to favor the development of sociality, reason, and culture as a kind of “package deal,” then it’s a good bet that any smart extraterrestrials we encounter will have similar evolved attitudes about their basic moral commitments.
In particular, they will likely agree with us that there is something morally special about rational, social creatures. And such universal agreement, argues Smith, could be the foundation for a truly universal system of ethics.
Smith will soon take a sabbatical to lay the groundwork for a book exploring these issues in more detail.