June 13, 2013
The sounds of different languages may have been shaped by the geography of the places where they are spoken, according to research published June 5 in the open access journal PLOS ONE by Caleb Everett from the University of Miami.
Everett compared the sounds used in about 600 languages across the world with the regions in which they were commonly spoken, and found a strong correlation between high altitude and spoken languages that include consonant sounds produced with an intense burst of air, called ejective consonants. Ejectives are absent from English, but were found in languages spoken on, or near, five of the six major high-altitude regions where people live. The artificial Na’vi language spoken in the movie Avatar also uses ejective consonants. The relationship is difficult to explain by other factors, says the author.
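The kind of association Everett tested can be illustrated with a simple 2x2 contingency analysis: languages cross-classified by altitude and by whether they use ejectives. The counts below are invented for demonstration only; they are not the figures from the ~600-language survey in the paper.

```python
# Illustrative 2x2 association between high altitude and ejective consonants.
# The counts are hypothetical, NOT data from Everett (2013).

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts under independence: row_total * col_total / n
    expected = [
        [(a + b) * (a + c) / n, (a + b) * (b + d) / n],
        [(c + d) * (a + c) / n, (c + d) * (b + d) / n],
    ]
    observed = [[a, b], [c, d]]
    return sum(
        (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
        for i in range(2) for j in range(2)
    )

# Rows: high-altitude / low-altitude languages.
# Columns: has ejectives / lacks ejectives.
table = [[50, 42], [37, 471]]
stat = chi_square_2x2(table)
print(f"chi-square = {stat:.1f}")  # a large value signals a strong association
```

With counts this lopsided the statistic is enormous relative to the critical value of about 3.84 for one degree of freedom, which is the statistical sense in which such a geography–phonology correlation would be called "strong."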
“This is evidence that geography does influence phonology – the sound system of languages,” explains Everett. “Ejectives are produced by creating a pocket of air in the pharynx then compressing it. Since air pressure decreases with altitude and it takes less effort to compress less dense air, I speculate that it’s easier to produce these sounds at high altitude. The results of the study suggest that ecological factors may shape the structure of language in ways that have gone unrecognized.”
Citation: Everett C (2013) Evidence for Direct Geographic Influences on Linguistic Sounds: The Case of Ejectives. PLOS ONE 8(6): e65275. doi:10.1371/journal.pone.0065275
Financial Disclosure: This research was funded by salary from the University of Miami. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interest Statement: The author has declared that no competing interests exist.
PLEASE LINK TO THE SCIENTIFIC ARTICLE IN ONLINE VERSIONS OF YOUR REPORT (URL goes live after the embargo ends): http://dx.plos.org/10.1371/journal.pone.0065275
Disclaimer: This press release refers to upcoming articles in PLOS ONE. The releases have been provided by the article authors and/or journal staff. Any opinions expressed in these are the personal views of the contributors, and do not necessarily represent the views or policies of PLOS. PLOS expressly disclaims any and all warranties and liability in connection with the information found in the release and article and your use of such information.
About PLOS ONE: PLOS ONE is the first journal of primary research from all areas of science to employ a combination of peer review and post-publication rating and commenting, to maximize the impact of every report it publishes. PLOS ONE is published by the Public Library of Science (PLOS), the open-access publisher whose goal is to make the world’s scientific and medical literature a public resource.
June 3, 2013
Through innovations to a printing process, researchers have made major improvements to organic electronics – a technology in demand for lightweight, low-cost solar cells, flexible electronic displays and tiny sensors. The printing method is fast and works with a variety of organic materials to produce semiconductors of strikingly higher quality than what has so far been achieved with similar methods.
Organic electronics have great promise for a variety of applications, but even the highest-quality films available today fall short in how well they conduct electrical current. The team from the U.S. Department of Energy’s (DOE) SLAC National Accelerator Laboratory and Stanford University has developed a printing process they call FLUENCE – fluid-enhanced crystal engineering – that for some materials results in thin films capable of conducting electricity 10 times more efficiently than those created using conventional methods.
“Even better, most of the concepts behind FLUENCE can scale up to meet industry requirements,” said Ying Diao, a SLAC/Stanford postdoctoral researcher and lead author of the study, which appeared today in Nature Materials.
Stefan Mannsfeld, a SLAC materials physicist and one of the principal investigators of the experiment, said the key was to focus on the physics of the printing process rather than the chemical makeup of the semiconductor. Diao engineered the process to produce strips of big, neatly aligned crystals that electrical charge can flow through easily, while preserving the benefits of the “strained lattice” structure and “solution shearing” printing technique previously developed in the lab of Mannsfeld’s co-principal investigator, Professor Zhenan Bao of the Stanford Institute for Materials and Energy Sciences, a joint SLAC-Stanford institute.
To make the advance, Diao focused on controlling the flow of the liquid in which the organic material is dissolved. “It’s a vital piece of the puzzle,” she said. If the ink flow does not distribute evenly, as is often the case during fast printing, the semiconducting crystals will be riddled with defects. “But in this field there’s been little research done on controlling fluid flow.”
Diao designed a printing blade with tiny pillars embedded in it that mix the ink so it forms a uniform film. She also engineered a way around another problem: the tendency of crystals to randomly form across the substrate. A series of cleverly designed chemical patterns on the substrate suppress the formation of unruly crystals that would otherwise grow out of alignment with the printing direction. The result is a film of large, well-aligned crystals.
X-ray studies of the group’s organic semiconductors at the Stanford Synchrotron Radiation Lightsource (SSRL) allowed them to inspect their progress and continue to make improvements, eventually showing neatly arranged crystals at least 10 times longer than crystals created with other solution-based techniques, and of much greater structural perfection.
The group also repeated the experiment using a second organic semiconductor material with a significantly different molecular structure, and again they saw a notable improvement in the quality of the film. They believe this is a sign the techniques will work across a variety of materials.
Principal investigators Bao and Mannsfeld say the next step for the group is pinning down the underlying relationship between the material and the process that enabled such a stellar result. Such a discovery could provide an unprecedented degree of control over the electronic properties of printed films, optimizing them for the devices that will use them.
“That could lead to a revolutionary advance in organic electronics,” Bao said. “We’ve been making excellent progress, but I think we’re only just scratching the surface.”
Other study co-authors included researchers from Stanford University’s departments of chemistry and chemical and electrical engineering, and from Nanjing University. The research was supported by SLAC’s Laboratory Directed Research and Development program. SSRL is a national user facility operated by Stanford University on behalf of the DOE’s Office of Science.
SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. To learn more, please visit http://www.slac.stanford.edu.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Citation: Y. Diao et al., Nature Materials, 02 June 2013 (10.1038/NMAT3650)
Press Office Contact:
Zhenan Bao, SLAC / Stanford University
Stefan Mannsfeld, SLAC
Also on the Web:
The Bao Group at Stanford
The Stanford Synchrotron Radiation Lightsource (SSRL)
June 2, 2013
With so much on the line for job seekers in this difficult economic climate, many new hires may be wondering how – or whether – to negotiate salary when offered a new position. A recently published study on the art of negotiation by two professors at Columbia Business School could help these new hires – and all negotiators – seal a stronger deal.
Research conducted by Professors Malia Mason and Daniel Ames and doctoral students Alice Lee and Elizabeth Wiley finds that asking for a specific and precise dollar amount versus a rounded-off dollar amount can give you the upper hand during any negotiation over a quantity.
“What we discovered is there is a big difference between what most people think is a good strategy when negotiating and what research shows is a good strategy,” said Professor Mason. “Negotiators should remember that in this case, zeros really do add nothing to the bargaining table.”
The research, forthcoming in the Journal of Experimental Social Psychology, looks at the two-way flow of communication between 1,254 fictitious negotiators.
The negotiators were placed in everyday scenarios such as buying jewelry or negotiating the sale of a used car. Some people were asked to make an opening offer using a rounded-off dollar amount, while others were asked to use a precise dollar amount: for example, $5,000 vs. $5,015.
The results showed that overall, people making an offer using a precise dollar amount such as $5,015 versus a rounded-off dollar amount such as $5,000 were perceived to be more informed about the true value of the offer being negotiated. This perception, in turn, led precise-offer recipients to concede more value to their counterpart.
In their negotiation scenarios, the professors concluded the person making a precise offer is successfully giving the illusion they have done their homework. When perceived as better informed, the person on the opposite end believes there is less room to negotiate.
To determine whether people make round offers more often than not, the researchers looked at the real estate market. Research done on Zillow, the online real estate marketplace, showed the overwhelming majority of displayed prices were rounded numbers, and that only two percent of people listed their homes with precise dollar amounts.
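The round-versus-precise distinction the researchers checked on Zillow can be sketched in a few lines: classify each listing price by whether it is a multiple of some round base, then count the precise ones. The prices and the $500 threshold below are made up for illustration, not Zillow data or the paper's definition.

```python
# Minimal sketch of classifying listing prices as "round" vs. "precise".
# Prices and the rounding base are hypothetical, for illustration only.

def is_round(price, base=1000):
    """Treat a price as 'round' if it is an exact multiple of `base` dollars."""
    return price % base == 0

listings = [350000, 349500, 425000, 431275, 299000, 300000]
precise = [p for p in listings if not is_round(p, base=500)]
share = len(precise) / len(listings)
print(f"{share:.0%} of these listings use precise figures")
```

Applied to a real listings feed, a tally like this is what would surface the paper's observation that precise asking prices are rare, which is part of why they stand out as a signal of being well informed.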
“The practical application of these findings – signaling that you are informed and using a precise number – can be used in any negotiation situation to imply you’ve done your homework,” Mason concluded.
The study, “Precise Offers Are Potent Anchors: Conciliatory Counteroffers and Attributions of Knowledge in Negotiations,” was authored by Malia Mason, the Gantcher Associate Professor of Business; doctoral students Alice Lee and Elizabeth Wiley; and Professor Daniel Ames.
To learn more about cutting-edge research being performed by Columbia Business School faculty members, please visit http://www.gsb.columbia.edu.
About Columbia Business School
Led by Dean Glenn Hubbard, the Russell L. Carson Professor of Finance and Economics, Columbia Business School is at the forefront of management education. The School’s cutting-edge curriculum bridges academic theory and practice, equipping students with an entrepreneurial mindset to recognize, capture, and create opportunity in a competitive business environment. Beyond academic rigor and teaching excellence, the School offers programs that are designed to give students practical experience making decisions in real-world environments. The school offers MBA, Masters, and PhD degrees, as well as non-degree Executive Education programs.
May 31, 2013
Researchers at the Stanford University School of Medicine have found that a naturally occurring protein secreted only in discrete areas of the mammalian brain may act as a Valium-like brake on certain types of epileptic seizures.
The protein is known as diazepam binding inhibitor, or DBI. It calms the rhythms of a key brain circuit and so could prove valuable in developing novel, less side-effect-prone therapies not only for epilepsy but possibly for anxiety and sleep disorders, too. The researchers’ discoveries will be published May 30 in Neuron.
“This is one of the most exciting findings we have had in many years,” said John Huguenard, PhD, professor of neurology and neurological sciences and the study’s senior author. “Our results show for the first time that a nucleus deep in the middle of the brain generates a small protein product, or peptide, that acts just like benzodiazepines.” This drug class includes not only the anti-anxiety compound Valium (generic name diazepam), first marketed in 1965, but its predecessor Librium, discovered in 1955, and the more recently developed sleep aid Halcion.
Valium, which is notoriously addictive, prone to abuse and dangerous at high doses, was an early drug treatment for epilepsy, but it has fallen out of use for this purpose because its efficacy quickly wears off and because newer, better anti-epileptic drugs have come along.
For decades, DBI has also been known to researchers under a different name: ACBP. In fact, it is found in every cell of the body, where it is an intracellular transporter of a metabolite called acyl-CoA. “But in a very specific and very important brain circuit that we’ve been studying for many years, DBI not only leaves the cells that made it but is – or undergoes further processing to become – a natural anti-epileptic compound,” Huguenard said. “In this circuit, DBI or one of its peptide fragments acts just like Valium biochemically and produces the same neurological effect.”
Other endogenous (internally produced) substances have been shown to cause effects similar to psychoactive drugs. In 1974, endogenous proteins called endorphins, with biochemical activity and painkilling properties similar to that of opiates, were isolated. A more recently identified set of substances, the endocannabinoids, mimic the memory-, appetite- and analgesia-regulating actions of the psychoactive components of cannabis, or marijuana.
DBI binds to receptors that sit on nerve-cell surfaces and are responsive to a tiny but important chemical messenger, or neurotransmitter, called GABA. The roughly one-fifth of all nerve cells in the brain that are inhibitory mainly do their job by secreting GABA, which binds to receptors on nearby nerve cells, rendering those cells temporarily unable to fire any electrical signals of their own.
Benzodiazepine drugs enhance GABA-induced inhibition by binding to a different site on GABA receptors from the one GABA binds to. That changes the receptor’s shape, making it hyper-responsive to GABA. These receptors come in many different types and subtypes, not all of which are responsive to benzodiazepines. DBI binds to the same spot to which benzodiazepines bind on benzodiazepine-responsive GABA receptors. But until now, exactly what this means has remained unclear.
Huguenard, along with postdoctoral scholar and lead author Catherine Christian, PhD, and several Stanford colleagues, zeroed in on DBI’s function in the thalamus, a deep-brain structure that serves as a relay station for sensory information and which previous studies in the Huguenard lab have implicated in the initiation of seizures. The researchers used single-nerve-cell-recording techniques to show that within a GABA-secreting nerve-cell cluster called the thalamic reticular nucleus, DBI has the same inhibition-boosting effect on benzodiazepine-responsive GABA receptors as benzodiazepines do. In bioengineered mice in which those receptors’ benzodiazepine-binding site was defective, DBI lost its effect, a loss that Huguenard and Christian suggested makes these mice seizure-prone.
In another seizure-prone mouse strain in which that site is intact but the gene for DBI is missing, the scientists saw diminished inhibitory activity on the part of benzodiazepine-responsive GABA receptors. Re-introducing the DBI gene to the brains of these mice via a sophisticated laboratory technique restored the strength of the GABA-induced inhibition. In normal mice, a compound known to block the benzodiazepine-binding site weakened these same receptors’ inhibitory activity in the thalamic reticular nucleus, even in the absence of any administered benzodiazepines. This suggested that some naturally occurring benzodiazepine-like substance was being displaced from the benzodiazepine-binding site by the drug. In DBI-gene-lacking mice, the blocking agent had no effect at all.
Huguenard’s team also showed that DBI has the same inhibition-enhancing effect on nerve cells in an adjacent thalamic region – but also that, importantly, no DBI is naturally generated in or near this region; in the corticothalamic circuit, at least, DBI appears to be released only in the thalamic reticular nucleus. So, the actions of DBI on GABA receptors appear to be tightly controlled to occur only in specific brain areas.
Huguenard doesn’t know yet whether it is DBI per se, or one of its peptide fragments (and if so which one), that is exerting the active inhibitory role. But, he said, by finding out exactly which cells are releasing DBI under what biochemical circumstances, it may someday be possible to develop agents that could jump-start and boost its activity in epileptic patients at the very onset of seizures, effectively nipping them in the bud.
The study received funding from the National Institute of Neurological Disorders and Stroke (grants NS034774, NS006477 and T32NS007280), the Epilepsy Foundation and a Katharine McCormick Advanced Postdoctoral Fellowship at the School of Medicine. Other Stanford co-authors were postdoctoral scholar Susanne Pangratz-Fuehrer, DVM; neurology resident Rebecca Holt; and research assistants Anne Herbert, MD, Kathy Peng and Kyla Sherwood.
May 29, 2013
The widespread disappearance of stromatolites, the earliest visible manifestation of life on Earth, may have been driven by single-celled organisms called foraminifera.
The findings, by scientists at Woods Hole Oceanographic Institution (WHOI); Massachusetts Institute of Technology; the University of Connecticut; Harvard Medical School; and Beth Israel Deaconess Medical Center, Boston, were published online the week of May 27 in the Proceedings of the National Academy of Sciences.
Stromatolites (“layered rocks”) are structures made of calcium carbonate and shaped by the actions of photosynthetic cyanobacteria and other microbes that trapped and bound grains of coastal sediment into fine layers. They showed up in great abundance along shorelines all over the world about 3.5 billion years ago.
“Stromatolites were one of the earliest examples of the intimate connection between biology – living things – and geology – the structure of the Earth itself,” said WHOI geobiologist Joan Bernhard, lead author of the study.
The growing bacterial community secreted sticky compounds that bound the sediment grains around themselves, creating a mineral “microfabric” that accumulated to become massive formations. Stromatolites dominated the scene for more than two billion years, until late in the Proterozoic Eon.
“Then, around 1 billion years ago, their diversity and their fossil abundance begin to take a nosedive,” said Bernhard. All over the globe, over a period of millions of years, the layered formations that had been so abundant and diverse began to disappear. To paleontologists, their loss was almost as dramatic as the extinction of the dinosaurs millions of years later, although not as complete: Living stromatolites can still be found today, in limited and widely scattered locales, as if a few velociraptors still roamed in remote valleys.
While the extinction of the dinosaurs has largely been explained by the impact of a large meteorite, the crash of the stromatolites remains unsolved. “It’s one of the major questions in Earth history,” said WHOI microbial ecologist Virginia Edgcomb, a co-author on the paper.
Just as puzzling is the sudden appearance in the fossil record of different formations called thrombolites (“clotted stones”). Like stromatolites, thrombolites are produced through the action of microbes on sediment and minerals. Unlike stromatolites, they are clumpy, rather than finely layered.
It’s not known whether stromatolites became thrombolites, or whether thrombolites arose independently of the decline in stromatolites. Hypotheses proposed to explain both include changes in ocean chemistry and the appearance of multicellular life forms that might have preyed on the microbes responsible for their structure.
Bernhard and Edgcomb thought foraminifera might have played a role. Foraminifera (or “forams,” for short) are protists, the kingdom that includes amoeba, ciliates, and other groups formerly referred to as “protozoa.” They are abundant in modern-day oceanic sediments, where they use numerous slender projections called pseudopods to engulf prey, to move, and to continually explore their immediate environment. Despite their known ability to disturb modern sediments, their possible role in the loss of stromatolites and appearance of thrombolites had never been considered.
The researchers examined modern stromatolites and thrombolites from Highborne Cay in the Bahamas for the presence of foraminifera. Using microscopic and rRNA sequencing techniques, they found forams in both kinds of structures. Thrombolites were home to a greater diversity of foraminifera and were especially rich in forams that secrete an organic sheath around themselves. These “thecate” foraminifera were probably the first kinds of forams to evolve, not long (in geologic terms) before stromatolites began to decline.
“The timing of their appearance corresponds with the decline of layered stromatolites and the appearance of thrombolites in the fossil record,” said Edgcomb. “That lends support to the idea that it could have been forams that drove their evolution.”
Next, Bernhard, Edgcomb, and postdoctoral investigator Anna McIntyre-Wressnig created an experimental scenario that mimicked what might have happened a billion years ago.
“No one will ever be able to re-create the Proterozoic exactly, because life has evolved since then, but you do the best you can,” Edgcomb said.
They started with chunks of modern-day stromatolites collected at Highborne Cay, and seeded them with foraminifera found in modern-day thrombolites. Then they waited to see what effect, if any, the added forams had on the stromatolites.
After about six months, the finely layered arrangement characteristic of stromatolites had changed to a jumbled arrangement more like that of thrombolites. Even their fine structure, as revealed by CAT scans, resembled that of thrombolites collected from the wild. “The forams obliterated the microfabric,” said Bernhard.
That result was intriguing, but it did not prove that the changes in the structure were due to the activities of the foraminifera. Just being brought into the lab might have caused the changes. But the researchers included a control in their experiment: They seeded foraminifera onto freshly-collected stromatolites as before, but also treated them with colchicine, a drug that prevented them from sending out pseudopods. “They’re held hostage,” said Bernhard. “They’re in there, but they can’t eat, they can’t move.”
After about six months, the foraminifera were still present and alive – but the rock’s structure had not become more clotted like a thrombolite. It was still layered.
The researchers concluded that active foraminifera can reshape the fabric of stromatolites and could have instigated the loss of those formations and the appearance of thrombolites.
The Woods Hole Oceanographic Institution is a private, non-profit organization on Cape Cod, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the oceans and their interaction with the Earth as a whole, and to communicate a basic understanding of the oceans’ role in the changing global environment. For more information, please visit http://www.whoi.edu.
May 28, 2013
The Antarctic continental ice cap came into existence during the Oligocene epoch, some 33.6 million years ago, according to data from an international expedition led by the Andalusian Institute of Earth Sciences (IACT) – a Spanish National Research Council-University of Granada joint centre. These findings, based on information contained in ice sediments from different depths, have recently been published in the journal Science.
Before the ice covered Antarctica, the Earth was a warm place with a tropical climate. In this region, plankton diversity was high until glaciation reduced the populations leaving only those capable of surviving in the new climate.
The Integrated Ocean Drilling Program international expedition has obtained this information from the paleoclimatic history preserved in sediment strata in the Antarctic depths. IACT researcher Carlota Escutia, who led the expedition, explains that “the fossil record of dinoflagellate cyst communities reflects the substantial reduction and specialization of these species that took place when the ice cap became established and, with it, marked seasonal ice-pack formation and melting began”.
The appearance of the Antarctic polar ice cap marks the beginning of plankton communities that are still functioning today. The ice cap is associated with the ice pack, the frozen part that disappears and reappears as a function of seasonal climate changes.
The article reports that when the ice pack melts as the Antarctic summer approaches, the primary productivity of endemic plankton communities increases: the melting ice frees the nutrients it has accumulated, and these are used by the plankton. Dr Escutia says “this phenomenon influences the dynamics of global primary productivity”.
Since ice first expanded across Antarctica and caused the dinoflagellate communities to specialize, these species have been undergoing constant change and evolution. However, the IACT researcher thinks “the great change came when the species simplified their form and found they were forced to adapt to the new climatic conditions”.
Pre-glaciation sediment contained highly varied dinoflagellate communities, with star-shaped morphologies. When the ice appeared 33.6 million years ago, this diversity was limited and their activity subjected to the new seasonal climate.
Alexander J. P. Houben, Peter K. Bijl, Jörg Pross, Steven M. Bohaty, Sandra Passchier, Catherine E. Stickley, Ursula Röhl, Saiko Sugisaki, Lisa Tauxe, Tina van de Flierdt, Matthew Olney, Francesca Sangiorgi, Appy Sluijs, Carlota Escutia, Henk Brinkhuis and the Expedition 318 Scientists.
Reorganization of Southern Ocean Plankton Ecosystem at the Onset of Antarctic Glaciation.
Science. DOI: 10.1126/science.1223646
May 21, 2013
Turns out, that old “practice makes perfect” adage may be overblown.
New research led by Michigan State University’s Zach Hambrick finds that a copious amount of practice is not enough to explain why people differ in level of skill in two widely studied activities, chess and music.
In other words, it takes more than hard work to become an expert. Hambrick, writing in the research journal Intelligence, said natural talent and other factors likely play a role in mastering a complicated activity.
“Practice is indeed important to reach an elite level of performance, but this paper makes an overwhelming case that it isn’t enough,” said Hambrick, associate professor of psychology.
The debate over why and how people become experts has existed for more than a century. Many theorists argue that thousands of hours of focused, deliberate practice are sufficient to achieve elite status.
“The evidence is quite clear,” he writes, “that some people do reach an elite level of performance without copious practice, while other people fail to do so despite copious practice.”
Hambrick and colleagues analyzed 14 studies of chess players and musicians, looking specifically at how practice was related to differences in performance. Practice, they found, accounted for only about one-third of the differences in skill in both music and chess.
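In statistical terms, "accounted for about one-third of the differences" is a statement about variance explained, which is the square of the practice–performance correlation. The correlation value below is chosen only to match the article's one-third figure; it is not a number taken from the 14 underlying studies.

```python
# How a correlation translates into "share of differences explained".
# r = 0.58 is a hypothetical value chosen so that r**2 lands near
# one-third, matching the article's summary; it is not from the studies.

r = 0.58
variance_explained = r ** 2  # coefficient of determination, r-squared
print(f"practice explains about {variance_explained:.0%} of skill differences")
```

Squaring is why even a fairly strong correlation leaves most of the variation in skill unexplained, which is the gap the researchers attribute to factors such as innate ability and starting age.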
So what made up the rest of the difference?
Based on existing research, Hambrick said it could be explained by factors such as intelligence or innate ability, and the age at which people start the particular activity. A previous study of Hambrick’s suggested that working memory capacity – which is closely related to general intelligence – may sometimes be the deciding factor between being good and great.
While the conclusion that practice may not make perfect runs counter to the popular view that just about anyone can achieve greatness if they work hard enough, Hambrick said there is a “silver lining” to the research.
“If people are given an accurate assessment of their abilities and the likelihood of achieving certain goals given those abilities,” he said, “they may gravitate toward domains in which they have a realistic chance of becoming an expert through deliberate practice.”
Hambrick’s co-authors are Erik Altmann from MSU; Frederick Oswald from Rice University; Elizabeth Meinz from Southern Illinois University; Fernand Gobet from Brunel University in the United Kingdom; and Guillermo Campitelli from Edith Cowan University in Australia.
May 14, 2013
Your brain often works on autopilot when it comes to grammar. That theory has been around for years, but University of Oregon neuroscientists have captured elusive hard evidence that people indeed detect and process grammatical errors with no awareness of doing so.
Participants in the study, native English speakers ages 18-30, had their brain activity recorded using electroencephalography, from which researchers focused on a signal known as the event-related potential (ERP). This non-invasive technique allows for the capture of changes in the brain’s electrical activity during an event. In this case, the events were short sentences presented visually one word at a time.
Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as “We drank Lisa’s brandy by the fire in the lobby” versus “We drank Lisa’s by brandy the fire in the lobby.” A 50-millisecond audio tone was also played at some point in each sentence, appearing either before or after a grammatical faux pas. The same auditory distraction also appeared in grammatically correct sentences.
This approach, said lead author Laura Batterink, a postdoctoral researcher, provided a signature of whether awareness was at work during processing of the errors. “Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high,” she said. “The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn’t.”
When tones appeared after grammatical errors, subjects detected 89 percent of the errors. In cases where subjects correctly declared errors in sentences, the researchers found a P600 effect, an ERP response in which the error is recognized and corrected on the fly to make sense of the sentence.
When the tones appeared before the grammatical errors, subjects detected only 51 percent of them. The tone before the event, said co-author Helen J. Neville, who holds the UO’s Robert and Beverly Lewis Endowed Chair in psychology, created a blink in their attention. The key to conscious awareness, she said, is whether or not a person can declare an error, and the tones disrupted participants’ ability to declare the errors. But even when the participants did not notice these errors, their brains responded to them, generating an early negative ERP response. These undetected errors also delayed participants’ reaction times to the tones.
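The size of the drop from 89 percent to 51 percent detection can be illustrated with a standard two-proportion z-test. The per-condition trial counts below are assumed purely for illustration; the paper's actual trial numbers differ.

```python
# Two-proportion z-test sketch for the detection rates above
# (89% with the tone after the error vs. 51% with the tone before).
# The sample sizes of 200 trials per condition are hypothetical.

import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.89, 200, 0.51, 200)
print(f"z = {z:.1f}")  # far beyond the ~1.96 threshold for p < .05
```

Under these assumed sample sizes the gap is many standard errors wide, consistent with the article's framing of the pre-error tone as a genuine disruption of conscious detection rather than noise.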
“Even when you don’t pick up on a syntactic error your brain is still picking up on it,” Batterink said. “There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly.”
The study was published in the May 8 issue of the Journal of Neuroscience.
The brain processes syntactic information implicitly, in the absence of awareness, the authors concluded. “While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”
It may be time to reconsider some teaching strategies, especially how adults are taught a second language, said Neville, a member of the UO’s Institute of Neuroscience and director of the UO’s Brain Development Lab.
Children, she noted, often pick up grammar rules implicitly through routine daily interactions with parents or peers, simply hearing and processing new words and their usage before any formal instruction. She likened such learning to “Jabberwocky,” the nonsense poem introduced by writer Lewis Carroll in 1871 in “Through the Looking Glass,” where Alice discovers a book in an unrecognizable language that turns out to be printed in mirror writing, readable only when held up to a mirror.
For a second language, she said, “Teach grammatical rules implicitly, without any semantics at all, like with Jabberwocky. Get them to listen to Jabberwocky, like a child does.”
The National Institute on Deafness and Other Communication Disorders of the National Institutes of Health supported the research (grant 5R01DC000128).
About the University of Oregon
The University of Oregon is among the 108 institutions chosen from 4,633 U.S. universities for top-tier designation of “Very High Research Activity” in the 2010 Carnegie Classification of Institutions of Higher Education. The UO also is one of two Pacific Northwest members of the Association of American Universities.
May 10, 2013
Water inside the Moon’s mantle came from primitive meteorites, new research finds, the same source thought to have supplied most of the water on Earth. The findings raise new questions about the process that formed the Moon.
The Moon is thought to have formed from a disc of debris left when a giant object hit the Earth 4.5 billion years ago, very early in Earth’s history. Scientists have long assumed that the heat from an impact of that size would cause hydrogen and other volatile elements to boil off into space, meaning the Moon must have started off completely dry. But recently, NASA spacecraft and new research on samples from the Apollo missions have shown that the Moon actually has water, both on its surface and beneath.
By showing that water on the Moon and on Earth came from the same source, this new study offers yet more evidence that the Moon’s water has been there all along.
“The simplest explanation for what we found is that there was water on the proto-Earth at the time of the giant impact,” said Alberto Saal, associate professor of Geological Sciences at Brown University and the study’s lead author. “Some of that water survived the impact, and that’s what we see in the Moon.”
The research was co-authored by Erik Hauri of the Carnegie Institution of Washington, James Van Orman of Case Western Reserve University, and Malcolm Rutherford from Brown and published online in Science Express.
To find the origin of the Moon’s water, Saal and his colleagues looked at melt inclusions found in samples brought back from the Apollo missions. Melt inclusions are tiny dots of volcanic glass trapped within crystals called olivine. The crystals prevent water from escaping during an eruption and enable researchers to get an idea of what the inside of the Moon is like.
Research from 2011 led by Hauri found that the melt inclusions have plenty of water – as much water in fact as lavas forming on the Earth’s ocean floor. This study aimed to find the origin of that water. To do that, Saal and his colleagues looked at the isotopic composition of the hydrogen trapped in the inclusions. “In order to understand the origin of the hydrogen, we needed a fingerprint,” Saal said. “What is used as a fingerprint is the isotopic composition.”
Using a Cameca NanoSIMS 50L multicollector ion microprobe at Carnegie, the researchers measured the amount of deuterium in the samples compared to the amount of regular hydrogen. Deuterium is an isotope of hydrogen with an extra neutron. Water molecules originating from different places in the solar system have different amounts of deuterium. In general, things formed closer to the sun have less deuterium than things formed farther out.
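In geochemistry, hydrogen isotope measurements of this kind are conventionally reported in delta notation relative to a reference standard; the press release does not quote the formula, but the standard convention (assumed here for illustration) is:

\[
\delta D = \left( \frac{(D/H)_{\text{sample}}}{(D/H)_{\text{VSMOW}}} - 1 \right) \times 1000\ \text{\textperthousand}
\]

where VSMOW is Vienna Standard Mean Ocean Water, the terrestrial reference. A negative or near-zero δD means the sample’s deuterium content is close to that of Earth’s oceans; strongly positive values indicate deuterium enrichment, as is typical of material formed far from the sun.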
Saal and his colleagues found that the deuterium/hydrogen ratio in the melt inclusions was relatively low and matched the ratio found in carbonaceous chondrites, meteorites originating in the asteroid belt near Jupiter and thought to be among the oldest objects in the solar system. That means the source of the water on the Moon is primitive meteorites, not comets as some scientists thought.
Comets, like meteorites, are known to carry water and other volatiles, but most comets formed in the far reaches of the solar system in a formation called the Oort Cloud. Because they formed so far from the sun, they tend to have high deuterium/hydrogen ratios – much higher ratios than in the Moon’s interior, where the samples in this study came from.
“The measurements themselves were very difficult,” Hauri said, “but the new data provide the best evidence yet that the carbon-bearing chondrites were a common source for the volatiles in the Earth and Moon, and perhaps the entire inner solar system.”
Recent research, Saal said, has found that as much as 98 percent of the water on Earth also comes from primitive meteorites, suggesting a common source for water on Earth and water on the Moon. The easiest way to explain that, Saal says, is that the water was already present on the early Earth and was transferred to the Moon.
The finding is not necessarily inconsistent with the idea that the Moon was formed by a giant impact with the early Earth, but presents a problem. If the Moon is made from material that came from the Earth, it makes sense that the water in both would share a common source. However, there’s still the question of how that water was able to survive such a violent collision.
“The impact somehow didn’t cause all the water to be lost,” Saal said. “But we don’t know what that process would be.”
It suggests, the researchers say, that there are some important processes we don’t yet understand about how planets and satellites are formed.
“Our work suggests that even highly volatile elements may not be lost completely during a giant impact,” said Van Orman. “We need to go back to the drawing board and discover more about what giant impacts do, and we also need a better handle on volatile inventories in the Moon.”
Funding for the research came from NASA’s Cosmochemistry and LASER programs and the NASA Lunar Science Institute.
Editors: Brown University has a fiber link television studio available for domestic and international live and taped interviews, and maintains an ISDN line for radio interviews. For more information, call (401) 863-2476.
May 9, 2013
An international team of physicists has found the first direct evidence of pear-shaped nuclei in exotic atoms.
The findings could advance the search for a new fundamental force in nature that could explain why the Big Bang created more matter than antimatter—a pivotal imbalance in the history of everything.
“If equal amounts of matter and antimatter were created at the Big Bang, everything would have annihilated, and there would be no galaxies, stars, planets or people,” said Tim Chupp, a University of Michigan professor of physics and biomedical engineering and co-author of a paper on the work published in the May 9 issue of Nature.
Antimatter particles have the same mass but opposite charge from their matter counterparts. Antimatter is rare in the known universe, flitting briefly in and out of existence in cosmic rays, solar flares and particle accelerators like CERN’s Large Hadron Collider. When matter and antimatter particles find each other, they mutually destruct, or annihilate.
What caused the matter/antimatter imbalance is one of physics’ great mysteries. It’s not predicted by the Standard Model—the overarching theory that describes the laws of nature and the nature of matter.
The Standard Model describes four fundamental forces or interactions that govern how matter behaves: Gravity attracts massive bodies to one another. The electromagnetic interaction gives rise to forces on electrically charged bodies. And the strong and weak forces operate in the cores of atoms, binding together neutrons and protons or causing those particles to decay.
Physicists have been searching for signs of a new force or interaction that might explain the matter-antimatter discrepancy. Evidence of its existence would be revealed by measuring how the charge axis of the nuclei of the radioactive elements radon and radium lines up with their spin.
The researchers confirmed that the cores of these atoms are shaped like pears, rather than the more typical spherical orange or elliptical watermelon profiles. The pear shape makes the effects of the new interaction much stronger and easier to detect.
“The pear shape is special,” Chupp said. “It means the neutrons and protons, which compose the nucleus, are in slightly different places along an internal axis.”
The pear-shaped nuclei are lopsided because positive protons are pushed away from the center of the nucleus by nuclear forces, which are fundamentally different from spherically symmetric forces like gravity.
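In nuclear physics, these shapes are conventionally described by expanding the nuclear surface in spherical harmonics; the standard parametrization (assumed here for illustration, not quoted from the paper) is:

\[
R(\theta) = R_0 \left( 1 + \beta_2 Y_{20}(\theta) + \beta_3 Y_{30}(\theta) + \cdots \right)
\]

where \(R_0\) is the radius of the equivalent sphere, the quadrupole deformation \(\beta_2\) produces the familiar elliptical (watermelon) shape, and a nonzero octupole term \(\beta_3\) breaks reflection symmetry along the axis, yielding the pear shape.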
“The new interaction, whose effects we are studying, does two things,” Chupp said. “It produces the matter/antimatter asymmetry in the early universe and it aligns the direction of the spin and the charge axis in these pear-shaped nuclei.”
To determine the shape of the nuclei, the researchers produced beams of exotic, short-lived radium and radon atoms at CERN’s Isotope Separator facility ISOLDE. The atom beams were accelerated and smashed into targets of nickel, cadmium and tin, but due to the repulsive force between the positively charged nuclei, nuclear reactions were not possible. Instead, the nuclei were excited to higher energy levels, producing gamma rays that flew out in a specific pattern that revealed the pear shape of the nucleus.
“In the very biggest picture, we’re trying to understand everything we’ve observed directly and also indirectly, and how it is that we happen to be here,” Chupp said.
The research was led by University of Liverpool Physics Professor Peter Butler.
“Our findings contradict some nuclear theories and will help refine others,” he said.
The measurements also will help direct the searches for atomic EDMs (electric dipole moments) currently being carried out in North America and Europe, where new techniques are being developed to exploit the special properties of radon and radium isotopes.
“Our expectation is that the data from our nuclear physics experiments can be combined with the results from atomic trapping experiments measuring EDMs to make the most stringent tests of the Standard Model, the best theory we have for understanding the nature of the building blocks of the universe,” Butler said.
The paper is titled “Studies of nuclear pear-shapes using accelerated radioactive beams.”
Tim Chupp: http://research.physics.lsa.umich.edu/chupp/