Discovery of new subatomic particle sheds light on fundamental force of nature

October 9, 2014

The discovery of a new particle will “transform our understanding” of the fundamental force of nature that binds the nuclei of atoms, researchers argue.

Led by scientists from the University of Warwick, the discovery of the new particle will help provide greater understanding of the strong interaction, the fundamental force of nature found within the protons of an atom’s nucleus.

Named Ds3*(2860)⁻, the particle, a new type of meson, was discovered by analysing data collected with the LHCb detector at CERN’s Large Hadron Collider (LHC).

The new particle is bound together in a similar way to protons. Due to this similarity, the Warwick researchers argue that scientists will now be able to study the particle to further understand strong interactions.

Along with gravity, the electromagnetic interaction and the weak nuclear force, the strong interaction is one of the four fundamental forces of nature. Lead scientist Professor Tim Gershon, from the University of Warwick’s Department of Physics, explains:

“Gravity describes the universe on a large scale from galaxies to Newton’s falling apple, whilst the electromagnetic interaction is responsible for binding molecules together and also for holding electrons in orbit around an atom’s nucleus.

“The strong interaction is the force that binds together quarks, the subatomic particles that form protons within atoms. It is so strong that the binding energy of the proton gives a much larger contribution to its mass, through Einstein’s equation E = mc², than the quarks themselves.”
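
As a rough back-of-envelope illustration of that point (using standard quark mass values, which are not figures from the release): the up and down quarks in a proton account for only about 1% of its mass, with the rest arising from strong-interaction binding energy via E = mc²,

\[
m_{u} + m_{u} + m_{d} \approx 2(2.2) + 4.7 \approx 9~\mathrm{MeV}/c^2 \quad \text{versus} \quad m_{p} \approx 938~\mathrm{MeV}/c^2 .
\]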

Due in part to the forces’ relative simplicity, scientists have previously been able to solve the equations behind gravity and electromagnetic interactions, but the strength of the strong interaction makes it impossible to solve the equations in the same way.

“Calculations of strong interactions are done with a computationally intensive technique called Lattice QCD,” says Professor Gershon. “In order to validate these calculations it is essential to be able to compare predictions to experiments. The new particle is ideal for this purpose because it is the first known particle that both contains a charm quark and has spin 3.”

There are six quarks known to physicists: up, down, strange, charm, beauty and top. Protons and neutrons are composed of up and down quarks, but particles produced in accelerators such as the LHC can contain the unstable heavier quarks. In addition, some of these particles have higher spin values than the naturally occurring stable particles.

“Because the Ds3*(2860)⁻ particle contains a heavy charm quark it is easier for theorists to calculate its properties. And because it has spin 3, there can be no ambiguity about what the particle is,” adds Professor Gershon. “Therefore it provides a benchmark for future theoretical calculations. Improvements in these calculations will transform our understanding of how nuclei are bound together.”

Spin is one of the labels used by physicists to distinguish between particles. It is a concept that arises in quantum mechanics and can be thought of as being similar to angular momentum: in this sense, higher spin corresponds to the quarks orbiting each other faster than they do in particles with lower spin.

Warwick Ph.D. student Daniel Craik, who worked on the study, adds: “Perhaps the most exciting part of this new result is that it could be the first of many similar discoveries with LHC data. Whether we can use the same technique employed in our research into Ds3*(2860)⁻ to also improve our understanding of the weak interaction is a key question raised by this discovery. If so, this could help to answer one of the biggest mysteries in physics: why there is more matter than antimatter in the Universe.”

The results are detailed in two papers that will be published in the next editions of the journals Physical Review Letters and Physical Review D. Both papers have been given the accolade of being selected as Editors’ Suggestions.

 

###

Contact: Tom Frew, International Press Officer.
E: a.t.frew@warwick.ac.uk
P: +44 (0)2476575910

 

Notes for Editors:

The results are detailed in papers titled:

- “Observation of overlapping spin-1 and spin-3 D̄0K⁻ resonances at mass 2.86 GeV/c²”, to be published in Physical Review Letters

http://arxiv.org/pdf/1407.7574.pdf

- “Dalitz plot analysis of Bs0→D̄0K⁻π+ decays”, to be published in Physical Review D

http://arxiv.org/pdf/1407.7712.pdf

- The Ds3*(2860)⁻ particle is a meson that contains a charm anti-quark and a strange quark. The subscript 3 denotes that it has spin 3, while the number 2860 in parentheses is the mass of the particle in the units of MeV/c² that are favoured by particle physicists. The value of 2860 MeV/c² corresponds to approximately 3 times the mass of the proton.
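
(For scale: taking the standard proton mass of about 938 MeV/c², a value not quoted in the release, 2860/938 ≈ 3.0, consistent with the comparison above.)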

- The particle was discovered in the decay chain Bs0→D̄0K⁻π+, where the Bs0, D̄0, K⁻ and π+ mesons contain respectively a bottom anti-quark and a strange quark, a charm anti-quark and an up quark, an up anti-quark and a strange quark, and a down anti-quark and an up quark. The Ds3*(2860)⁻ particle is observed as a peak in the mass of combinations of the D̄0 and K⁻ mesons. The distributions of the angles between the D̄0, K⁻ and π+ particles allow the spin of the Ds3*(2860)⁻ meson to be unambiguously determined.

- Quarks are bound by the strong interaction into one of two types of particles: baryons, such as the proton, are composed of three quarks; mesons are composed of one quark and one anti-quark, where an anti-quark is the antimatter version of a quark.

- CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer Status.

- The LHCb experiment is one of the four main experiments at the CERN Large Hadron Collider, and is set up to explore what happened after the Big Bang that allowed matter to survive and build the Universe we inhabit today. The LHCb collaboration comprises about 700 physicists from 67 institutes in 17 countries.

a.t.frew@warwick.ac.uk
44-024-767-75910
University of Warwick
@warwickuni

Do we live in a 2-D hologram?

August 26, 2014

A unique experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory called the Holometer has started collecting data that will answer some mind-bending questions about our universe – including whether we live in a hologram.

Much like characters on a television show would not know that their seemingly 3-D world exists only on a 2-D screen, we could be clueless that our 3-D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions.

Get close enough to your TV screen and you’ll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe’s information may be contained in the same way, and that the natural “pixel size” of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.
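
A rough sense of scale for that comparison (using the textbook Planck length of about 1.6 × 10⁻³⁵ m and a typical atomic radius of about 10⁻¹⁰ m, values not quoted in the release):

\[
\frac{10^{-10}~\mathrm{m}}{1.6\times10^{-35}~\mathrm{m}} \approx 6\times10^{24},
\]

which is indeed of the order of ten trillion trillion.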

“We want to find out whether spacetime is a quantum system just like matter is,” said Craig Hogan, director of Fermilab’s Center for Particle Astrophysics and the developer of the holographic noise theory. “If we see something, it will completely change ideas about space we’ve used for thousands of years.”

Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2-D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. Just as matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.
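
The uncertainty principle invoked here is usually written, in standard textbook notation rather than anything quoted from the experiment, as

\[
\Delta x \, \Delta p \ \ge \ \frac{\hbar}{2},
\]

where Δx and Δp are the uncertainties in a particle’s position and momentum and ħ is the reduced Planck constant; the holographic-noise idea extends an analogous irreducible fuzziness to positions in space itself.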

Essentially, the experiment probes the limits of the universe’s ability to store information. If there are a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location – even in principle. The instrument testing these limits is Fermilab’s Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.

Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way – being carried along on a jitter of space itself.
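
The brightness fluctuations mentioned above come from interference between the two recombined beams. For an idealised Michelson-type interferometer (a textbook sketch, not the Holometer’s actual calibration), the output intensity depends on the difference ΔL between the arm lengths as

\[
I_{\mathrm{out}} = I_0 \cos^2\!\left(\frac{2\pi\,\Delta L}{\lambda}\right),
\]

where λ is the laser wavelength, so even a minute jitter in ΔL shows up as a measurable change in the detected brightness.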

“Holographic noise” is expected to be present at all frequencies, but the scientists’ challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high – millions of cycles per second – that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.

“If we find a noise we can’t get rid of, we might be detecting something fundamental about nature – a noise that is intrinsic to spacetime,” said Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. “It’s an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works.”

###

The Holometer experiment, funded by the U.S. Department of Energy Office of Science and other sources, is expected to gather data over the coming year.

The Holometer team comprises 21 scientists and students from Fermilab, Massachusetts Institute of Technology, University of Chicago, and University of Michigan. For more information about the experiment, visit http://holometer.fnal.gov/.

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at http://www.fnal.gov and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

media@fnal.gov
630-840-3351
DOE/Fermi National Accelerator Laboratory

Researchers discover boron ‘buckyball’

July 14, 2014

The discovery 30 years ago of soccer-ball-shaped carbon molecules called buckyballs helped to spur an explosion of nanotechnology research. Now, there appears to be a new ball on the pitch.

Researchers from Brown University, Shanxi University and Tsinghua University in China have shown that a cluster of 40 boron atoms forms a hollow molecular cage similar to a carbon buckyball. It’s the first experimental evidence that a boron cage structure – previously only a matter of speculation – does indeed exist.

“This is the first time that a boron cage has been observed experimentally,” said Lai-Sheng Wang, a professor of chemistry at Brown who led the team that made the discovery. “As a chemist, finding new molecules and structures is always exciting. The fact that boron has the capacity to form this kind of structure is very interesting.”

Wang and his colleagues describe the molecule, which they’ve dubbed borospherene, in the journal Nature Chemistry.

Carbon buckyballs are made of 60 carbon atoms arranged in pentagons and hexagons to form a sphere – like a soccer ball. Their discovery in 1985 was soon followed by discoveries of other hollow carbon structures including carbon nanotubes. Another famous carbon nanomaterial – a one-atom-thick sheet called graphene – followed shortly after.

After buckyballs, scientists wondered if other elements might form these odd hollow structures. One candidate was boron, carbon’s neighbor on the periodic table. But because boron has one less electron than carbon, it can’t form the same 60-atom structure found in the buckyball. The missing electrons would cause the cluster to collapse on itself. If a boron cage existed, it would have to have a different number of atoms.

Wang and his research group have been studying boron chemistry for years. In a paper published earlier this year, Wang and his colleagues showed that clusters of 36 boron atoms form one-atom-thick disks, which might be stitched together to form an analog to graphene, dubbed borophene. Wang’s preliminary work suggested that there was also something special about boron clusters with 40 atoms. They seemed to be abnormally stable compared to other boron clusters. Figuring out what that 40-atom cluster actually looks like required a combination of experimental work and modeling using high-powered supercomputers.

On the computer, Wang’s colleagues modeled over 10,000 possible arrangements of 40 boron atoms bonded to each other. The simulations estimate not only the shapes of the structures but also the electron binding energy for each structure – a measure of how tightly a molecule holds its electrons. The spectrum of binding energies serves as a unique fingerprint of each potential structure.
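
The comparison step can be pictured with a short, purely illustrative sketch; the structure names, spectra and scoring function below are hypothetical stand-ins, not the group’s actual analysis code. Each candidate structure’s simulated binding-energy spectrum is scored against the measured one, and the closest match is selected.

```python
import numpy as np

def spectrum_distance(measured, simulated):
    """Sum of squared differences between two binding-energy spectra
    sampled on the same energy grid (a simple stand-in for a real
    comparison, which would also account for peak widths and errors)."""
    return float(np.sum((measured - simulated) ** 2))

def best_matching_structure(measured, candidates):
    """Return the candidate whose simulated spectrum is closest to the data."""
    return min(candidates, key=lambda name: spectrum_distance(measured, candidates[name]))

# Hypothetical example: simulated spectra (in arbitrary energy units) for
# three candidate 40-atom boron structures, plus a measured spectrum.
candidates = {
    "quasi-planar sheet": np.array([3.1, 3.4, 3.9, 4.2]),
    "hollow cage": np.array([3.0, 3.6, 4.0, 4.4]),
    "ribbon": np.array([2.8, 3.2, 3.7, 4.1]),
}
measured = np.array([3.05, 3.55, 3.95, 4.35])

print(best_matching_structure(measured, candidates))  # -> "hollow cage"
```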

The next step is to test the actual binding energies of boron clusters in the lab to see if they match any of the theoretical structures generated by the computer. To do that, Wang and his colleagues used a technique called photoelectron spectroscopy.

Chunks of bulk boron are zapped with a laser to create a vapor of boron atoms. A jet of helium then freezes the vapor into tiny clusters of atoms. The clusters of 40 atoms are isolated by weight and then zapped with a second laser, which knocks an electron out of the cluster. The ejected electron flies down a long tube Wang calls his “electron racetrack.” The speed at which the electrons fly down the racetrack is used to determine the cluster’s electron binding energy spectrum – its structural fingerprint.
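
In outline (standard photoelectron-spectroscopy relations, not the experiment’s actual calibration), the flight time t of an ejected electron along a racetrack of length L gives its kinetic energy, and subtracting that from the known photon energy hν of the second laser gives the binding energy:

\[
E_{\mathrm{kin}} = \tfrac{1}{2} m_e \left(\frac{L}{t}\right)^2, \qquad E_{\mathrm{binding}} = h\nu - E_{\mathrm{kin}}.
\]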

The experiments showed that 40-atom clusters form two structures with distinct binding spectra. Those spectra turned out to be a dead-on match with the spectra for two structures generated by the computer models. One was a semi-flat molecule and the other was the buckyball-like spherical cage.

“The experimental sighting of a binding spectrum that matched our models was of paramount importance,” Wang said. “The experiment gives us these very specific signatures, and those signatures fit our models.”

The borospherene molecule isn’t quite as spherical as its carbon cousin. Rather than a series of five- and six-membered rings formed by carbon, borospherene consists of 48 triangles, four seven-sided rings and two six-membered rings. Several atoms stick out a bit from the others, making the surface of borospherene somewhat less smooth than a buckyball.

As for possible uses for borospherene, it’s a little too early to tell, Wang says. One possibility, he points out, could be hydrogen storage. Because of the electron deficiency of boron, borospherene would likely bond well with hydrogen. So tiny boron cages could serve as safe houses for hydrogen molecules.

But for now, Wang is enjoying the discovery.

“For us, just to be the first to have observed this, that’s a pretty big deal,” Wang said. “Of course if it turns out to be useful that would be great, but we don’t know yet. Hopefully this initial finding will stimulate further interest in boron clusters and new ideas to synthesize them in bulk quantities.”

###

The theoretical modeling was done with a group led by Prof. Si-Dian Li from Shanxi University and a group led by Prof. Jun Li from Tsinghua University. The work was supported by the U.S. National Science Foundation (CHE-1263745) and the National Natural Science Foundation of China.

kevin_stacey@brown.edu
401-863-3766
Brown University

Understanding consciousness

July 11, 2014

Why does a relentless stream of subjective experiences normally fill your mind? Maybe that’s just one of those mysteries that will always elude us.

Yet, research from Northwestern University suggests that consciousness lies well within the realm of scientific inquiry — as impossible as that may currently seem. Although scientists have yet to agree on an objective measure to index consciousness, progress has been made with this agenda in several labs around the world.

“The debate about the neural basis of consciousness rages because there is no widely accepted theory about what happens in the brain to make consciousness possible,” said Ken Paller, professor of psychology in the Weinberg College of Arts and Sciences and director of the Cognitive Neuroscience Program at Northwestern.

“Scientists and others acknowledge that damage to the brain can lead to systematic changes in consciousness. Yet, we don’t know exactly what differentiates brain activity associated with conscious experience from brain activity that is instead associated with mental activity that remains unconscious,” he said.

In a new article, Paller and Satoru Suzuki, also professor of psychology at Northwestern, point out flawed assumptions about consciousness to suggest that a wide range of scientific perspectives can offer useful clues about consciousness.

“It’s normal to think that if you attentively inspect something you must be aware of it and that analyzing it to a high level would necessitate consciousness,” Suzuki noted. “Results from experiments on perception belie these assumptions.

“Likewise, it feels like we can freely decide at a precise moment, when actually the process of deciding begins earlier, via neurocognitive processing that does not enter awareness,” he said.

The authors write that unconscious processing can influence our conscious decisions in ways we never suspect.

If these and other similar assumptions are incorrect, the researchers state in their article, then mistaken reasoning might be behind arguments for taking the science of consciousness off the table.

“Neuroscientists sometimes argue that we must focus on understanding other aspects of brain function, because consciousness is never going to be understood,” Paller said. “On the other hand, many neuroscientists are actively engaged in probing the neural basis of consciousness, and, in many ways, this is less of a taboo area of research than it used to be.”

Experimental evidence has supported some theories about consciousness that appeal to specific types of neural communication, which can be described in neural terms or more abstractly in computational terms. Further theoretical advances can be expected if specific measures of neural activity can be brought to bear on these ideas.

Paller and Suzuki both conduct research that touches on consciousness. Suzuki studies perception, and Paller studies memory. They said it was important for them to write the article to counter the view that it is hopeless to ever make progress through scientific research on this topic.

They outlined recent advances that provide reason to be optimistic about future scientific inquiries into consciousness and about the benefits that this knowledge could bring for society.

“For example, continuing research on the brain basis of consciousness could inform our concerns about human rights, help us explain and treat diseases that impinge on consciousness, and help us perpetuate environments and technologies that optimally contribute to the well being of individuals and of our society,” the authors wrote.

They conclude that research on human consciousness belongs within the purview of science, despite philosophical or religious arguments to the contrary.

###

Their paper, “The Source of Consciousness,” has been published online in the journal Trends in Cognitive Sciences.

h-anyaso@northwestern.edu
847-491-4887
Northwestern University

A new genome editing method brings the possibility of gene therapies closer to reality

July 11, 2014

Researchers from the Salk Institute for Biological Studies, BGI and other institutes have for the first time evaluated the safety and reliability of existing targeted gene-correction technologies and developed a new method, TALEN-HDAdV, which significantly increases gene-correction efficiency in human induced pluripotent stem cells (hiPSCs). The study, published online in Cell Stem Cell, provides an important theoretical foundation for stem cell-based gene therapy.

The combination of stem cells and targeted genome-editing technology provides a powerful tool to model human diseases and to develop potential cell replacement therapies. Although the utility of genome editing has been extensively documented, the impact of these technologies on mutational load at the whole-genome level remains unclear.

In the study, researchers performed whole-genome sequencing to evaluate the mutational load at single-base resolution in individual gene-corrected hiPSC clones in three different disease models, including Hutchinson-Gilford progeria syndrome (HGPS), sickle cell disease (SCD), and Parkinson’s disease (PD).

They evaluated the efficiency of gene targeting and gene correction at the haemoglobin (HBB) gene locus with TALEN, HDAdV and CRISPR/Cas9 nucleases, and found that TALEN-, HDAdV- and CRISPR/Cas9-mediated gene correction have similar efficiency at this locus. In addition, deep whole-genome sequencing indicated that TALEN- and HDAdV-mediated correction preserved the integrity of the patient’s genome to the greatest extent, demonstrating the safety and reliability of these methods.

By integrating the advantages of TALEN- and HDAdV-mediated genome editing, the researchers developed a new TALEN-HDAdV hybrid vector (talHDAdV), which significantly increases gene-correction efficiency in hiPSCs. Almost all of the genetic mutations at the HBB locus can be detected with talHDAdV, which means the newly developed technology could be applied to gene repair in a range of haemoglobin disorders such as SCD and thalassemia.

liujia@genomics.cn
BGI Shenzhen

Research shows compassion and euthanasia don’t always jibe

June 6, 2014

New research from Case Western Reserve University found that compassion can produce counterintuitive results, challenging prevailing views of empathy’s effects on moral judgment.

To understand how humans make moral choices, researchers asked subjects to respond to a variety of moral dilemmas, for instance: Whether to stay and defend a mortally wounded soldier until he dies or shoot him to protect him from enemy torture and enable you and five other soldiers to escape unharmed.

Leading research has said people make choices based on a struggle within their brains between thoughtful reason and automatic passion.

“But this simple reason versus passion model fails to capture that there’s a refined way of thinking with emotions, closely related to empathy and compassion,” said Anthony Jack, Director of Research at the Inamori International Center for Ethics and Excellence, associate professor of cognitive science, psychology and philosophy at Case Western Reserve and lead author of the new research.

Co-authors are Philip Robbins, of the department of philosophy at the University of Missouri, Jared P. Friedman, who just graduated with a BA in cognitive science and philosophy from Case Western Reserve, and Chris D. Meyers, of the department of philosophy at the University of Southern Mississippi. Their study is published in the journal Advances in Experimental Philosophy of Mind at: http://www.bloomsbury.com/us/advances-in-experimental-philosophy-of-mind-9781472507334/.

The researchers agree that there are two networks in the brain that fight to guide our moral decisions, but say that leading work, by Joshua Greene at Harvard University, mischaracterizes the networks involved and how they operate.

A new model

“There’s a tension between cold hard reasoning – what’s called analytic reasoning – and another type of reasoning important to emotions, self-regulation and social insight,” Jack explained. “The second type of reasoning isn’t characterized by being caught up in reflexive and primitive emotions, as Greene suggests. It’s critically important to understanding and appreciating the experiential point of view of others.”

Using functional magnetic resonance imaging (fMRI), Jack has found that the human brain has an analytic network and an empathetic network that tend to suppress one another.

For example, in a healthy brain, physics problems activate the analytic network and deactivate the empathetic. Meanwhile, videos or stories that put a subject in the shoes of another activate the empathetic network and deactivate the analytic.

In these studies, students from Case Western Reserve and groups of adults recruited through Amazon Mechanical Turk responded to a series of questions about themselves and their views. They were then asked to make choices about a series of moral conundrums.

Among the conundrums were questions involving euthanasia. The respondents clearly made different choices between actions taken for a suffering dog versus a suffering person.

Counterintuitive

“For humans, we privilege their autonomy or life spirit over their basic emotions, such as how much pain they’re in. In contrast, our view of non-human animals tends to be more reductive – we see them as little more than their emotions,” Jack said.

“Even though people talk about euthanasia with animals as the humane thing to do, people who are more empathetic have the greatest opposition to euthanasia involving a human,” he said.

Subjects were presented scenarios that included passive euthanasia, such as halting medical intervention, and active euthanasia, such as assisting in the subject’s death.

“More compassionate people didn’t think euthanasia was appropriate for humans, even when we told them the person would be in pain for the rest of his or her life,” Jack said. “That is surprising, because the way we measure compassion is to assess how much people are concerned by the suffering of others.”

Here again, the researchers argue, Greene’s model falls short. According to Greene, those who oppose utilitarian thinking (e.g., euthanasia) should have higher levels of reflexive, primitive, raw emotion. Instead, the researchers found that those who were more susceptible to personal distress were actually more likely to support euthanasia.

Opposition to utilitarian thinking was predicted specifically by compassion, not by measures of primitive or reflexive emotion. “Our culture often paints empathy as weakness,” Jack said. “Greene’s model plays into that view, suggesting that those who don’t like utilitarian thinking are intellectually weak and ruled by primitive passions. But these views are fundamentally misleading. Compassion is actually linked to stronger emotion regulation abilities. Decades of research shows that we have to overcome our reflexive feelings of aversion and distress to be ready and willing to help others.”

The researchers found that people judged to be more compassionate and empathetic by their peers – for instance better listeners – tended to oppose utilitarian choices such as sacrificing one to save the many or euthanasia.

The findings suggest that more compassionate people have more of a sense of the sanctity of human life. “The idea that life is sacred may be hard for the reductive, analytic mind to grasp, but it is hardly a primitive or reflexive sentiment,” Jack said.

That’s not to say that, given more information, the compassionate will continue to oppose euthanasia. The conundrums were limited in an important way: the test subjects knew nothing about the wishes of the person suffering.

The researchers are continuing their studies. They expect to see a different relationship between compassion and moral judgments about euthanasia when more is understood about the person who is suffering, in particular when continued suffering undermines that person’s life narrative.

kevin.mayhood@case.edu
216-368-4442
Case Western Reserve University

Spontaneous thoughts are perceived to reveal meaningful self-insight

May 28, 2014

Spontaneous thoughts, intuitions, dreams and quick impressions. We all have these seemingly random thoughts popping into our minds on a daily basis. The question is what do we make of these unplanned, spur-of-the-moment thoughts? Do we view them as coincidental wanderings of a restless mind, or as revealing meaningful insight into ourselves?

A research team from Carnegie Mellon University and Harvard Business School set out to determine how people perceive their own spontaneous thoughts and if those thoughts or intuitions have any influence over judgment. Published in the “Journal of Experimental Psychology: General,” their research found that spontaneous thoughts are perceived to provide potent self-insight and can influence judgment and decisions more than similar, more deliberate kinds of thinking – even on important topics such as commitment to current romantic partners.

“We are aware of the output of spontaneous thoughts, but lack insight into the reasons why and processes by which they occurred. Rather than dismiss these seemingly random thoughts as meaningless, our research found that people believe, precisely because they are not controlled, that spontaneous thoughts reveal more meaningful insight into their own mind – their beliefs, attitudes, and preferences – than similar deliberate thoughts. As a consequence, spontaneous thoughts can have a more potent influence on judgment,” said Carey K. Morewedge, lead author and associate professor of marketing in the Tepper School of Business with an additional appointment in the Dietrich College’s Department of Social and Decision Sciences. “People often believe their intuitions, dreams or random thoughts reveal more insight than the result of more effortful thinking and reasoning. This research helps to explain these curious beliefs.”

For the study, Morewedge, CMU’s Colleen E. Giblin and Harvard University’s Michael I. Norton ran five studies. The first three were designed to test the hypothesis that the more spontaneous a thought is, the more it is believed to provide meaningful self-insight. Participants rated the extent to which different thought categories are spontaneous or controlled and the extent to which each provides self-insight; they recalled either a pleasant or unpleasant childhood event and evaluated the degree that the recollection would provide meaningful self-insight if it happened spontaneously or deliberately; and they generated thoughts about four strangers through a deliberative or spontaneous process and rated how much those thoughts provided them with valuable self-insight.

The results suggest that when people evaluate a particular thought, they not only consider its content, they are also influenced by their more general beliefs about different thought processes. Thoughts with the same content are judged to be more meaningful if they occurred through a spontaneous, uncontrolled process rather than a deliberate, controlled process. The effect was found across various kinds of thought and thought content, including thoughts about other people. This means that the content of spontaneous thought need not be entirely about the self in order for people to feel like they’ve gleaned meaningful self-insight.

The last two experiments extended the investigation to determine if the greater insight attributed to spontaneous thoughts leads them to have a greater impact on judgment. The researchers tested this first by having participants think about a love interest other than their present or most recent significant other spontaneously or deliberately, report the self-insight that the thought provided and then indicate their attraction toward that person. They found that those who spontaneously generated a thought of a love interest believed that thought revealed more self-insight and perceived their attraction to be stronger than the participants who identified a love interest with deliberate thinking.

Finally, to determine whether this greater influence would extend to both positive and negative spontaneous thoughts, participants recalled a positive or negative experience related to their current or most recent romantic relationship. Participants reported the extent to which the spontaneous and deliberate recollection of that memory would provide them with meaningful self-insight and increase or decrease the likelihood that they would end the relationship. The results showed that participants believed the recollection of a positive or negative experience with their current romantic partner would reveal more self-insight and have a greater influence on their commitment to that relationship if it was recalled spontaneously rather than deliberately.

“The perception that a thought popped into mind out of nowhere can lead people to overvalue their own insights. When considering a thought that came to mind spontaneously, it may be useful to ask yourself the following question: had the same thought come to mind after careful deliberation, would it seem just as meaningful? If you realize that your interpretation of a particular thought depends on whether it came to mind spontaneously, that’s an indication that your beliefs about these different kinds of thoughts might be affecting your judgment,” said Giblin, a doctoral student in CMU’s Tepper School of Business.

###

Carnegie Mellon University and Harvard Business School funded this research.

shilo@cmu.edu
412-268-6094
Carnegie Mellon University

The next ‘Big One’ for the Bay Area may be a cluster of major quakes

May 20, 2014

A cluster of closely timed earthquakes over 100 years in the 17th and 18th centuries released as much accumulated stress on San Francisco Bay Area’s major faults as the Great 1906 San Francisco earthquake, suggesting two possible scenarios for the next “Big One” for the region, according to new research published by the Bulletin of the Seismological Society of America (BSSA).

“The plates are moving,” said David Schwartz, a geologist with the U.S. Geological Survey and co-author of the study. “The stress is re-accumulating, and all of these faults have to catch up. How are they going to catch up?”

The San Francisco Bay Region (SFBR) lies within the boundary zone between the Pacific and North American plates. Energy released during its earthquake cycle occurs along the region’s principal faults: the San Andreas, San Gregorio, Calaveras, Hayward-Rodgers Creek, Greenville, and Concord-Green Valley faults.

“The 1906 quake happened when there were fewer people, and the area was much less developed,” said Schwartz. “The earthquake had the beneficial effect of releasing the plate boundary stress and relaxing the crust, ushering in a period of low level earthquake activity.”

The earthquake cycle reflects the accumulation of stress, its release as slip on a fault or a set of faults, and its re-accumulation and re-release. The San Francisco Bay Area has not experienced a full earthquake cycle since it has been occupied by people who have reported earthquake activity, either through written records or instrumentation. Founded in 1776, the Mission Dolores and the Presidio in San Francisco kept records of felt earthquakes and earthquake damage, marking the starting point for the historic earthquake record for the region.

“We are looking back at the past to get a more reasonable view of what’s going to happen decades down the road,” said Schwartz. “The only way to get a long history is to do these paleoseismic studies, which can help construct the rupture histories of the faults and the region. We are trying to see what went on and understand the uncertainties for the Bay Area.”

Schwartz and colleagues excavated trenches across faults, observing past surface ruptures from the most recent earthquakes on the major faults in the area. Radiocarbon dating of detrital charcoal and the presence of non-native pollen established the dates of paleoearthquakes, extending the record of large events back to 1600.

The trenching studies suggest that between 1690 and the founding of the Mission Dolores and Presidio in 1776, a cluster of earthquakes ranging from magnitude 6.6 to 7.8 occurred on the Hayward fault (north and south segments), San Andreas fault (North Coast and San Juan Bautista segments), northern Calaveras fault, Rodgers Creek fault, and San Gregorio fault. There are no paleoearthquake data for the Greenville fault or northern extension of the Concord-Green Valley fault during this time interval.

“What the cluster of earthquakes did in our calculations was to release an amount of energy somewhat comparable to the amount released in the crust by the 1906 quake,” said Schwartz.
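
A back-of-envelope version of that comparison can be made with the standard Gutenberg-Richter magnitude-energy relation, log10(E) = 1.5M + 4.8 with E in joules. The cluster magnitudes below are illustrative values within the 6.6-7.8 range quoted above, not the events reconstructed in the study.

```python
def radiated_energy_joules(magnitude):
    """Gutenberg-Richter magnitude-energy relation: log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * magnitude + 4.8)

# Hypothetical cluster of large quakes within the 6.6-7.8 range (illustrative only).
cluster = [6.6, 7.0, 7.2, 7.5, 7.8]
cluster_energy = sum(radiated_energy_joules(m) for m in cluster)

# The 1906 San Francisco earthquake is commonly quoted near magnitude 7.8.
single_1906 = radiated_energy_joules(7.8)

print(f"cluster: {cluster_energy:.2e} J  vs  single 1906-style event: {single_1906:.2e} J")
# The two totals come out within roughly a factor of two of each other.
```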

As stress on the region accumulates, the authors see at least two modes of energy release – one is a great earthquake and the other is a cluster of large earthquakes. The probability for how the system will rupture is spread out over all faults in the region, making a cluster of large earthquakes more likely than a single great earthquake.

“Everybody is still thinking about a repeat of the 1906 quake,” said Schwartz. “It’s one thing to have a 1906-like earthquake where seismic activity is shut off, and we slide through the next 110 years in relative quiet. But what happens if every five years we get a magnitude 6.8 or 7.2? That’s not outside the realm of possibility.”

###

The paper, “The Earthquake Cycle in the San Francisco Bay Region: AD 1600-2012,” will be published online May 20, 2014 by BSSA and will appear in the June print issue. BSSA is published by the Seismological Society of America, which is an international scientific society devoted to the advancement of seismology and the understanding of earthquakes for the benefit of society.

“The Earthquake Cycle in the San Francisco Bay Region: AD 1600-2012,” published by BSSA. Authors: David Schwartz, USGS; James J. Lienkaemper, USGS; Suzanne Hecker, USGS; Keith I. Kelson, URS Corporation; Thomas E. Fumal, USGS; John N. Baldwin, Lettis Consultants International, Inc.; Gordon G. Seitz, California Geological Survey; Tina M. Niemi, University of Missouri-Kansas City.

press@seismosoc.org
408-431-9885
Seismological Society of America

Scientists discover how to turn light into matter after 80-year quest

May 19, 2014

Imperial College London physicists have discovered how to create matter from light – a feat thought impossible when the idea was first theorised 80 years ago.

In just one day over several cups of coffee in a tiny office in Imperial’s Blackett Physics Laboratory, three physicists worked out a relatively simple way to physically prove a theory first devised by scientists Breit and Wheeler in 1934.

Breit and Wheeler suggested that it should be possible to turn light into matter by smashing together only two particles of light (photons), to create an electron and a positron – the simplest method of turning light into matter ever predicted. The calculation was found to be theoretically sound but Breit and Wheeler said that they never expected anybody to physically demonstrate their prediction. It has never been observed in the laboratory and past experiments to test it have required the addition of massive high-energy particles.

The new research, published in Nature Photonics, shows for the first time how Breit and Wheeler’s theory could be proven in practice. This ‘photon-photon collider’, which would convert light directly into matter using technology that is already available, would be a new type of high-energy physics experiment. This experiment would recreate a process that was important in the first 100 seconds of the universe and that is also seen in gamma ray bursts, which are the biggest explosions in the universe and one of physics’ greatest unsolved mysteries.

The scientists had been investigating unrelated problems in fusion energy when they realised what they were working on could be applied to the Breit-Wheeler theory. The breakthrough was achieved in collaboration with a fellow theoretical physicist from the Max Planck Institute for Nuclear Physics, who happened to be visiting Imperial.

Demonstrating the Breit-Wheeler theory would provide the final jigsaw piece of a physics puzzle which describes the simplest ways in which light and matter interact (see image in notes to editors). The six other pieces in that puzzle, including Dirac’s 1930 theory on the annihilation of electrons and positrons and Einstein’s 1905 theory on the photoelectric effect, are all associated with Nobel Prize-winning research (see image).

Professor Steve Rose from the Department of Physics at Imperial College London said: “Despite all physicists accepting the theory to be true, when Breit and Wheeler first proposed the theory, they said that they never expected it to be shown in the laboratory. Today, nearly 80 years later, we prove them wrong. What was so surprising to us was the discovery of how we can create matter directly from light using the technology that we have today in the UK. As we are theorists we are now talking to others who can use our ideas to undertake this landmark experiment.”

The collider experiment that the scientists have proposed involves two key steps. First, the scientists would use an extremely powerful high-intensity laser to speed up electrons to just below the speed of light. They would then fire these electrons into a slab of gold to create a beam of photons a billion times more energetic than visible light.

The next stage of the experiment involves a tiny gold can called a hohlraum (German for ‘empty room’). Scientists would fire a high-energy laser at the inner surface of this gold can, to create a thermal radiation field, generating light similar to the light emitted by stars.

They would then direct the photon beam from the first stage of the experiment through the centre of the can, causing the photons from the two sources to collide and form electrons and positrons. It would then be possible to detect the formation of the electrons and positrons when they exited the can.
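
The kinematics of that final step can be sketched with the standard two-photon pair-production threshold (textbook relations, not figures from the paper; the 2 GeV value below simply translates “a billion times more energetic than visible light” using a ~2 eV visible photon, and is an assumption). For a head-on collision of photons with energies E1 and E2, electron-positron pairs can be created only if

\[
E_1 E_2 \ \gtrsim\ (m_e c^2)^2 = (0.511~\mathrm{MeV})^2 ,
\]

so with E1 of roughly 2 GeV from the first stage, the photons from the hohlraum need energies of only about (0.511 MeV)² / 2 GeV ≈ 130 eV, an energy scale characteristic of the thermal X-ray field described above.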

Lead researcher Oliver Pike who is currently completing his PhD in plasma physics, said: “Although the theory is conceptually simple, it has been very difficult to verify experimentally. We were able to develop the idea for the collider very quickly, but the experimental design we propose can be carried out with relative ease and with existing technology. Within a few hours of looking for applications of hohlraums outside their traditional role in fusion energy research, we were astonished to find they provided the perfect conditions for creating a photon collider. The race to carry out and complete the experiment is on!”

###

The research was funded by the Engineering and Physical Sciences Research Council (EPSRC), the John Adams Institute for Accelerator Science, and the Atomic Weapons Establishment (AWE), and was carried out in collaboration with Max-Planck-Institut für Kernphysik.

gail.wilson@imperial.ac.uk
44-020-759-46702
Imperial College London

Just keep your promises: Going above and beyond does not pay off

May 9, 2014

If you are sending Mother’s Day flowers to your mom this weekend, chances are you opted for guaranteed delivery: the promise that they will arrive by a certain time. Should the flowers not arrive in time, you will likely feel betrayed by the sender for breaking their promise. But if they arrive earlier, you likely will be no happier than if they arrive on time, according to new research. The new work suggests that we place such a high premium on keeping a promise that exceeding it confers little or no additional benefit.

Whether we make them with a person or company, promises are social contracts, says Ayelet Gneezy of the University of California, San Diego. While researchers have explored the negative consequences of breaking promises, until now, they have not explored what happens when someone exceeds a promise.

Gneezy became interested in the topic when thinking about how consumers respond to promises made by firms. “My first memory in that respect is that of Amazon’s tendency to exceed its promise with respect to delivery time – that is, packages always arrived earlier than promised – and my lack of appreciation of the ‘gesture’/fact,” Gneezy says.

Gneezy, with colleague Nicholas Epley of the University of Chicago Booth School of Business, set out to explore “promise exceeding” in a series of experiments that tested imagined, recalled, and actual promise-making. In one of the experiments, for example, researchers asked participants to recall three promises: one broken, one kept, and one exceeded. They then asked them to rate how happy they were with the promise-maker’s behavior.

While participants valued keeping a promise much more highly than breaking one, exceeding the promise conferred virtually no additional happiness with the promise-maker, as published today in Social Psychological and Personality Science. Additionally, in a follow-up experiment, participants said that exceeding a promise did not require expending significantly more effort.

In another experiment, researchers paired participants, making one the promise-maker and one the promise-receiver. The promise-receiver needed to solve 40 puzzles, being paid for each puzzle solved. The promise-maker promised to help in solving 10 puzzles. The experimenter then instructed the promise-makers to solve either the 10 puzzles (as promised), only 5, or 15.

Although exceeding the promise by solving 15 puzzles clearly required more effort, the promise-receivers did not value that extra work any more than just keeping to the 10 puzzles promised: They valued promise keeping and exceeding equally.

“I was surprised that exceeding a promise produced so little meaningful increase in gratitude or appreciation. I had anticipated a modest positive effect,” Epley says, but “what we actually found was almost no gain from exceeding a promise whatsoever.”

And the trend held true across the experiments: “Being able to demonstrate our effect so reliably across so many very different methods gives us great confidence in the robustness of these effects,” Epley says.

The data suggest that the reason for these effects lies in how we value promises as a society. “Keeping a promise is valued so highly, above and beyond its ‘objective’ value,” Epley says. “When you keep a promise, not only have you done something nice for someone but you’ve also fulfilled a social contract and shown that you’re a reliable and trustworthy person.”

The bottom line, Epley says, is that exceeding a promise may not be worth the effort you put in. “Invest efforts into keeping promises, not in exceeding them,” he says. And this advice also holds true for businesses, which should prioritize resources to make sure they do not break promises, rather than trying to go above and beyond.

To test this idea further, Epley and Gneezy asked participants in a follow-up study to imagine they had bought concert tickets for row 10 and then either received worse tickets than promised (row 11, 13, or 15), better tickets than promised (row 9, 7, or 5), or exactly what was promised. Participants were more negative about receiving worse tickets but were no more positive – nor more likely to recommend the company – when they received better tickets than promised.

Epley and colleague Nadav Klein are currently working on related work about how people evaluate selfless compared to selfish behavior and they are finding similar results. “Behaving fairly toward others is the critical point,” Epley explains. “Beyond being fair, generosity does not seem to be valued as much as one would expect.”

So, Epley says: “Don’t be upset when your friends, family members, clients, or students fail to appreciate the extra effort you put into going above and beyond your promise. They do not appear to be uniquely ungrateful, just human.”

spps.media@gmail.com
571-354-0754
SAGE Publications
