January 23, 2015
Recent developments in science are beginning to suggest that the universe naturally produces complexity. The emergence of life in general and perhaps even rational life, with its associated technological culture, may be extremely common, argues Clemson researcher Kelly Smith in a recently published paper in the journal Space Policy.
What’s more, he suggests, this universal tendency has distinctly religious overtones and may even establish a truly universal basis for morality.
Smith, a philosopher and evolutionary biologist, applies recent theoretical developments in biology and complex systems theory to attempt new answers to the kinds of enduring questions about human purpose and obligation that have long been considered the sole province of the humanities.
He points out that scientists are increasingly beginning to discuss how the basic structure of the universe seems to favor the creation of complexity. The large scale history of the universe strongly suggests a trend of increasing complexity: disordered energy states produce atoms and molecules, which combine to form suns and associated planets, on which life evolves. Life then seems to exhibit its own pattern of increasing complexity, with simple organisms getting more complex over evolutionary time until they eventually develop rationality and complex culture.
And recent theoretical developments in biology and complex systems theory suggest this trend may be real, arising from the basic structure of the universe in a predictable fashion.
“If this is right,” says Smith, “you can look at the universe as a kind of ‘complexity machine’, which raises all sorts of questions about what this means in a broader sense. For example, does believing the universe is structured to produce complexity in general, and rational creatures in particular, constitute a religious belief? It need not imply that the universe was created by a God, but on the other hand, it does suggest that the kind of rationality we hold dear is not an accident.”
And Smith sees another similarity to religion in the idea’s potential moral implications. If evolution tends to favor the development of sociality, reason, and culture as a kind of “package deal”, then it’s a good bet that any smart extraterrestrials we encounter will have similar evolved attitudes about their basic moral commitments.
In particular, they will likely agree with us that there is something morally special about rational, social creatures. And such universal agreement, argues Smith, could be the foundation for a truly universal system of ethics.
Smith will soon take sabbatical to lay the groundwork for a book exploring these issues in more detail.
January 8, 2015
Researchers from the University of Cambridge and the University of Plymouth have shown that follow-through – such as when swinging a golf club or tennis racket – can help us to learn two different skills at once, or to learn a single skill faster. The research provides new insight into the way tasks are learned, and could have implications for rehabilitation, such as re-learning motor skills following a stroke.
The researchers found that the particular motor memory which is active and modifiable in the brain at any given time depends on both lead-in and follow-through movement, and that skills which may otherwise interfere can be learned at the same time if their follow-through motions are unique. The research is published today (8 January) in the journal Current Biology.
While follow-through in sports such as tennis or golf cannot affect the movement of the ball after it has been hit, it does serve two important purposes: it helps maximise velocity or force at the point of impact, and it helps prevent injuries by allowing a gradual slowdown of the movement.
Now, researchers have found a third important role for follow-through: it allows distinct motor memories to be learned. In other words, by practising the same action with different follow-throughs, different motor memories can be learned for a single movement.
If a new task, whether that is serving a tennis ball or learning a passage on a musical instrument, is repeated enough times, a motor memory of that task is developed. The brain is able to store, protect and reactivate this memory, quickly instructing the muscles to perform the task so that it can be performed seemingly without thinking.
The problem with learning similar but distinct tasks is that they can ‘interfere’ with each other in the brain. For example, tennis and racquetball are both racket sports. However, the strokes for the two sports are slightly different, as topspin is great for a tennis player, but not for a racquetball player. Despite this, in theory it should be possible to learn both sports independently. However, many people find it difficult to perform at a high level in both sports, due to interference between the two strokes.
In order to determine whether we learn a separate motor memory for each task or a single motor memory for both, the researchers examined the presence or absence of interference by having participants learn a ‘reaching’ task in the presence of two opposing force-fields.
Participants grasped the handle of a robotic interface and made a reaching movement through an opposing force-field to a central target, followed immediately by a second unopposed follow-through movement to one of two possible final targets. The direction of the force-field was changed, representing different tasks, and the researchers were able to examine whether the tasks are learned separately, in which case there would be no interference, or whether we learn the mean of the two opposing force-fields, in which case there would be complete interference.
The researchers found that the specific motor memory which is active at any given moment depends on the movement that will be made in the near future. When a follow-through movement was made that anticipated the force-field direction, there was a substantial reduction in interference. This suggests that different follow-throughs may activate distinct motor memories, allowing us to learn two different skills without them interfering, even when the rest of the movement is identical. However, while practising a variable follow-through can activate multiple motor memories, practising a consistent follow-through allowed for tasks to be learned much faster.
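The interference effect described above can be illustrated with a toy simulation. The sketch below uses a standard single-rate state-space model of motor adaptation as an illustration only; it is not the authors’ actual model, and the retention and learning-rate constants are invented. With one shared memory, two opposing force-fields cancel each other out; with context-specific memories keyed to the follow-through, each field is learned almost fully.

```python
# Toy single-rate state-space model of motor adaptation (illustrative
# assumption, not the authors' model). A is retention, B is learning
# rate; both values are invented but typical of such models.
A, B = 0.99, 0.1

def adapt(n_trials, separate_memories):
    """Alternate between force-fields +1 and -1 on successive trials.

    Returns the final adaptation state for the +1 field (separate
    memories) or the single shared state (one memory).
    """
    memory = {+1: 0.0, -1: 0.0}  # one state per follow-through context
    shared = 0.0                 # one state shared by both tasks
    for trial in range(n_trials):
        field = +1 if trial % 2 == 0 else -1
        if separate_memories:
            x = memory[field]
            memory[field] = A * x + B * (field - x)  # learn from error
        else:
            shared = A * shared + B * (field - shared)
    return memory[+1] if separate_memories else shared

shared_result = adapt(500, separate_memories=False)    # hovers near 0
distinct_result = adapt(500, separate_memories=True)   # approaches 0.9
```

Here the follow-through plays the role of a context cue that selects which memory gets updated, which is one way to read the reduction in interference reported in the study.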
“There is always noise in our movements, which arises in the sensory information we receive, the planning we undertake, and the output of our motor system,” said Dr David Franklin of Cambridge’s Department of Engineering, a senior author on the research. “Because of this, every movement we make is slightly different from the last one even if we try really hard to make it exactly the same – there will always be variability within our movements and therefore within our follow-through as well.”
When practising a new skill such as a tennis stroke, we may think that we do not need to care as much about controlling the variability after we hit the ball, as it can’t actually affect the movement of the ball itself. “However, this research suggests that this variability has another very important effect – it reduces the speed of learning of the skill that is being practised,” said Franklin.
The research may also have implications for rehabilitation, such as re-learning skills after a stroke. When trying to re-learn skills after a stroke, many patients actually exhibit a great deal of variability in their movements. “Since we have shown that learning occurs faster with consistent movements, it may therefore be important to consider methods to reduce this variability in order to improve the speed of rehabilitation,” said Dr Ian Howard of the University of Plymouth, the paper’s lead author.
The work was supported by the Wellcome Trust, Human Frontier Science Program, Plymouth University and the Royal Society.
November 5, 2014
The physics community has spent three decades searching for and finding no evidence that dark matter is made of tiny exotic particles. Case Western Reserve University theoretical physicists suggest researchers consider looking for candidates more in the ordinary realm and, well, more massive.
Dark matter is unseen matter that, combined with normal matter, could create the gravity that, among other things, prevents spinning galaxies from flying apart. Physicists calculate that dark matter makes up 27 percent of the universe; normal matter, just 5 percent.
Instead of WIMPs, weakly interacting massive particles, or axions, which are weakly interacting low-mass particles, dark matter may be made of macroscopic objects – anywhere from a few ounces to the size of a good-sized asteroid, and probably as dense as a neutron star or the nucleus of an atom – the researchers suggest.
Physics professor Glenn Starkman and David Jacobs, who received his PhD in Physics from CWRU in May and is now a fellow at the University of Cape Town, say published observations provide guidance, limiting where to look. They lay out the possibilities in a paper at http://arxiv.org/pdf/1410.2236.pdf.
The Macros, as Starkman and Jacobs call them, would not only dwarf WIMPs and axions but also differ in an important way: they could potentially be assembled out of particles in the Standard Model of particle physics, rather than requiring new physics to explain their existence.
“We’ve been looking for WIMPs for a long time and haven’t seen them,” Starkman said. “We expected to make WIMPs in the Large Hadron Collider, and we haven’t.”
WIMPs and axions remain possible candidates for dark matter, but there’s reason to search elsewhere, the theorists argue.
“The community had kind of turned away from the idea that dark matter could be made of normal-ish stuff in the late ’80s,” Starkman said. “We ask, was that completely correct and how do we know dark matter isn’t more ordinary stuff— stuff that could be made from quarks and electrons?”
After eliminating most ordinary matter as possible candidates – including failed Jupiters, white dwarfs, neutron stars, stellar black holes, the black holes at the centers of galaxies and neutrinos with a lot of mass – physicists turned their focus to exotics.
Matter that was somewhere in between ordinary and exotic—relatives of neutron stars or large nuclei—was left on the table, Starkman said. “We say relatives because they probably have a considerable admixture of strange quarks, which are made in accelerators and ordinarily have extremely short lives,” he said.
Although strange quarks are highly unstable, Starkman points out that so are free neutrons. Bound with stable protons in a helium nucleus, however, neutrons remain stable.
“That opens the possibility that stable strange nuclear matter was made in the early universe and dark matter is nothing more than chunks of strange nuclear matter or other bound states of quarks, or of baryons, which are themselves made of quarks,” he said. Such dark matter would fit the Standard Model.
The Macros would have to be assembled from ordinary and strange quarks or baryons before the strange quarks or baryons decay, and at a temperature above 3.5 trillion degrees Celsius, comparable to the temperature in the center of a massive supernova, Starkman and Jacobs calculated. The quarks would have to be assembled with 90 percent efficiency, leaving just 10 percent to form the protons and neutrons found in the universe today.
The limits of the possible dark matter are as follows:
- A minimum of 55 grams. If dark matter were lighter, it would have been seen in detectors flown on Skylab or in tracks found in sheets of mica.
- A maximum of 10^24 (a million billion billion) grams. Above this, the Macros would be so massive they would bend starlight, which has not been seen.
- The range of 10^17 to 10^20 grams per square centimeter should also be eliminated from the search, the theorists say. Dark matter in that range would be massive enough for gravitational lensing to affect individual photons from gamma-ray bursts in ways that have not been seen.
If dark matter is within this allowed range, there are reasons it hasn’t been seen.
- At the mass of 10^18 grams, dark matter Macros would hit the Earth about once every billion years.
- At lower masses, they would strike the Earth more frequently but might not leave a recognizable record or observable mark.
- In the range of 10^9 to 10^18 grams, dark matter would collide with the Earth at most about once annually, providing nothing for the underground dark matter detectors in place to detect.
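The collision rates above can be reproduced with a back-of-envelope estimate. The sketch below assumes standard values for the local dark matter density (~0.4 GeV/cm³) and a typical galactic speed (~250 km/s); neither number comes from the paper, and Earth’s geometric cross-section is used for the target area.

```python
import math

# Back-of-envelope Macro impact rate on Earth: rate = rho * v * area / mass.
# All input values below are assumed standard figures, not from the paper.
RHO = 7e-25                # local dark matter density, g/cm^3 (~0.4 GeV/cm^3)
V = 2.5e7                  # typical galactic speed, cm/s (~250 km/s)
R_EARTH = 6.4e8            # Earth radius, cm
SECONDS_PER_YEAR = 3.15e7

AREA = math.pi * R_EARTH ** 2   # Earth's geometric cross-section, cm^2

def impacts_per_year(mass_grams):
    """Expected number of Macro impacts on Earth per year."""
    return RHO * V * AREA / mass_grams * SECONDS_PER_YEAR

rate_small = impacts_per_year(1e9)    # roughly one impact per year
rate_large = impacts_per_year(1e18)   # roughly one impact per billion years
```

With these assumptions, a 10^9-gram Macro strikes Earth about once a year, while a 10^18-gram Macro strikes about once per billion years, in line with the figures quoted above.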
October 27, 2014
Making mistakes while learning can benefit memory and lead to the correct answer, but only if the guesses are close-but-no-cigar, according to new research findings from Baycrest Health Sciences.
“Making random guesses does not appear to benefit later memory for the right answer, but near-miss guesses act as stepping stones for retrieval of the correct information – and this benefit is seen in younger and older adults,” says lead investigator Andrée-Ann Cyr, a graduate student with Baycrest’s Rotman Research Institute and the Department of Psychology at the University of Toronto.
Cyr’s paper is posted online today in the Journal of Experimental Psychology: Learning, Memory, and Cognition (ahead of print publication). The study expands upon a previous paper she published in Psychology and Aging in 2012 that found that learning information the hard way by making mistakes (as opposed to just being told the correct answer) may be the best boot camp for older brains.
That paper raised eyebrows since the scientific literature has traditionally recommended that older adults avoid making mistakes – unlike their younger peers who actually benefit from them. But recent evidence from Cyr and other researchers is challenging this perspective and prompting professional educators and cognitive rehabilitation clinicians to take note.
Cyr’s latest research provides evidence that trial-and-error learning can benefit memory in both young and old when errors are meaningfully related to the right answer, and can actually harm memory when they are not.
In their latest study, 65 healthy younger adults (average age 22) and 64 healthy older adults (average age 72) learned target words (e.g., rose) based either on the semantic category each word belongs to (e.g., a flower) or on its word stem (e.g., a word that begins with the letters ‘ro’). For half of the words, participants were given the answer right away (e.g., “the answer is rose”) and for the other half, they were asked to guess it before seeing the answer (e.g., a flower: “Is it tulip?” or ro___: “Is it rope?”).
On a later memory test, participants were shown the categories or word stems and had to come up with the right answer. The researchers wanted to know if participants would be better at remembering rose if they had made wrong guesses prior to studying it rather than seeing it right away. They found that this was only true if participants learned based on the categories (e.g., a flower). Guessing actually made memory worse when words were learned based on word stems (e.g., ro___). This was the case for both younger and older adults. Cyr and her colleagues suggest this is because our memory organizes information based on how it is conceptually rather than lexically related to other information. For example, when you think of the word pear, your mind is more likely to jump to another fruit, such as apple, than to a word that looks similar, such as peer. Wrong guesses only add value when they have something meaningful in common with right answers. The guess tulip may be wrong, but it is still conceptually close to the right answer rose (both are flowers).
By guessing first as opposed to just reading the answer, one is thinking harder about the information and making useful connections that can help memory. Indeed, younger and older participants were more likely to remember the answer if they also remembered their wrong guesses, suggesting that these acted as stepping stones. By contrast, when guesses only have letters in common with answers, they clutter memory because one cannot link them meaningfully. The word rope is nowhere close to rose in our memory. In these situations, where your guesses are likely to be out in left field, it is best to bypass mistakes altogether.
“The fact that this pattern was found for older adults as well shows that aging does not influence how we learn from mistakes,” says Cyr.
“These results have profound clinical and practical implications. They turn traditional views of best practices in memory rehabilitation for healthy seniors on their head by demonstrating that making the right kind of errors can be beneficial. They also provide great hope for lifelong learning and guidance for how seniors should study,” says Dr. Nicole Anderson, senior scientist with Baycrest’s Rotman Research Institute and senior author on the study.
The study was funded by the Canadian Institutes of Health Research.
October 9, 2014
The discovery of a new particle will “transform our understanding” of the fundamental force of nature that binds the nuclei of atoms, researchers argue.
Led by scientists from the University of Warwick, the discovery of the new particle will help provide greater understanding of the strong interaction, the fundamental force of nature found within the protons of an atom’s nucleus.
Named Ds3*(2860)ˉ, the particle, a new type of meson, was discovered by analysing data collected with the LHCb detector at CERN’s Large Hadron Collider (LHC).
The new particle is bound together in a similar way to protons. Due to this similarity, the Warwick researchers argue that scientists will now be able to study the particle to further understand strong interactions.
Along with gravity, the electromagnetic interaction and the weak nuclear force, the strong interaction is one of the four fundamental forces of nature. Lead scientist Professor Tim Gershon, from the University of Warwick’s Department of Physics, explains:
“Gravity describes the universe on a large scale from galaxies to Newton’s falling apple, whilst the electromagnetic interaction is responsible for binding molecules together and also for holding electrons in orbit around an atom’s nucleus.
“The strong interaction is the force that binds quarks, the subatomic particles that form protons within atoms, together. It is so strong that the binding energy of the proton gives a much larger contribution to its mass, through Einstein’s equation E = mc², than the quarks themselves.”
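The point in Professor Gershon’s quote can be checked numerically. The sketch below uses approximate Particle Data Group current-quark masses and the proton mass; these standard values are assumed here and do not appear in the article.

```python
# How much of the proton's mass comes from its quarks? Approximate
# current-quark masses in MeV/c^2 (standard PDG values, assumed here):
m_up, m_down = 2.2, 4.7
m_proton = 938.3                 # proton mass, MeV/c^2

quark_mass = 2 * m_up + m_down   # the proton is two up quarks and one down
fraction = quark_mass / m_proton
# fraction is roughly 0.01: about 99 percent of the proton's mass is
# strong-interaction binding energy rather than quark mass.
```

The three quarks account for only about 1 percent of the proton’s mass; essentially all the rest is binding energy via E = mc², which is what makes precise calculations of the strong interaction so important.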
Due in part to those forces’ relative simplicity, scientists have previously been able to solve the equations governing gravity and electromagnetic interactions, but the strength of the strong interaction makes its equations impossible to solve in the same way.
“Calculations of strong interactions are done with a computationally intensive technique called Lattice QCD,” says Professor Gershon. “In order to validate these calculations it is essential to be able to compare predictions to experiments. The new particle is ideal for this purpose because it is the first known that both contains a charm quark and has spin 3.”
There are six quarks known to physicists: up, down, strange, charm, beauty and top. Protons and neutrons are composed of up and down quarks, but particles produced in accelerators such as the LHC can contain the unstable heavier quarks. In addition, some of these particles have higher spin values than the naturally occurring stable particles.
“Because the Ds3*(2860)ˉ particle contains a heavy charm quark it is easier for theorists to calculate its properties. And because it has spin 3, there can be no ambiguity about what the particle is,” adds Professor Gershon. “Therefore it provides a benchmark for future theoretical calculations. Improvements in these calculations will transform our understanding of how nuclei are bound together.”
Spin is one of the labels physicists use to distinguish between particles. It is a concept that arises in quantum mechanics and can be thought of as similar to angular momentum: in this sense, higher spin corresponds to the quarks orbiting each other faster than in particles with lower spin.
Warwick Ph.D. student Daniel Craik, who worked on the study, adds “Perhaps the most exciting part of this new result is that it could be the first of many similar discoveries with LHC data. Whether we can use the same technique, as employed with our research into Ds3*(2860)ˉ, to also improve our understanding of the weak interaction is a key question raised by this discovery. If so, this could help to answer one of the biggest mysteries in physics: why there is more matter than antimatter in the Universe.”
The results are detailed in two papers that will be published in the next editions of the journals Physical Review Letters and Physical Review D. Both papers have been given the accolade of being selected as Editors’ Suggestions.
Contact: Tom Frew, International Press Officer.
P: +44 (0)2476575910
Notes for Editors:
The results are detailed in papers titled:
- “Observation of overlapping spin-1 and spin-3 D0K- resonances at mass 2.86 GeV/c²”, to be published in Physical Review Letters
- “Dalitz plot analysis of Bs0→D0K-π+ decays”, to be published in Physical Review D
- The Ds3*(2860)ˉ particle is a meson that contains a charm anti-quark and a strange quark. The subscript 3 denotes that it has spin 3, while the number 2860 in parentheses is the mass of the particle in the units of MeV/c² that are favoured by particle physicists. The value of 2860 MeV/c² corresponds to approximately 3 times the mass of the proton.
- The particle was discovered in the decay chain Bs0→D0K–π+ , where the Bs0, D0, K– and π+ mesons contain respectively a bottom anti-quark and a strange quark, a charm anti-quark and an up quark, an up anti-quark and a strange quark, and a down anti-quark and an up quark. The Ds3*(2860)ˉ particle is observed as a peak in the mass of combinations of the D0 and K– mesons. The distributions of the angles between the D0, K– and π+ particles allow the spin of the Ds3*(2860)ˉ meson to be unambiguously determined.
- Quarks are bound by the strong interaction into one of two types of particles: baryons, such as the proton, are composed of three quarks; mesons are composed of one quark and one anti-quark, where an anti-quark is the antimatter version of a quark.
- CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. India, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer Status.
- The LHCb experiment is one of the four main experiments at the CERN Large Hadron Collider, and is set up to explore what happened after the Big Bang that allowed matter to survive and build the Universe we inhabit today. The LHCb collaboration comprises about 700 physicists from 67 institutes in 17 countries.
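As a quick check of the mass quoted in the notes above, the ratio to the proton mass can be computed directly; the proton mass of 938.3 MeV/c² is a standard value assumed here, not taken from the text.

```python
m_ds3 = 2860.0       # Ds3*(2860) meson mass, MeV/c^2 (from the text)
m_proton = 938.3     # proton mass, MeV/c^2 (standard value, assumed)

ratio = m_ds3 / m_proton
# ratio is about 3.05, i.e. roughly three proton masses, consistent
# with the "approximately 3 times the mass of the proton" statement.
```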
August 26, 2014
A unique experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory called the Holometer has started collecting data that will answer some mind-bending questions about our universe – including whether we live in a hologram.
Much like characters on a television show would not know that their seemingly 3-D world exists only on a 2-D screen, we could be clueless that our 3-D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions.
Get close enough to your TV screen and you’ll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe’s information may be contained in the same way, and that the natural “pixel size” of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.
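The “10 trillion trillion” comparison can be reproduced from standard numbers: a Planck length of about 1.6×10⁻³⁵ meters and an atomic diameter of about 10⁻¹⁰ meters. Both values are assumed here; neither appears in the article.

```python
PLANCK_LENGTH = 1.6e-35   # meters (standard value, assumed)
ATOM_SIZE = 1e-10         # meters, rough diameter of an atom (assumed)

ratio = ATOM_SIZE / PLANCK_LENGTH
# ratio is about 6e24, the same order of magnitude as the
# "10 trillion trillion" (1e25) quoted above.
```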
“We want to find out whether spacetime is a quantum system just like matter is,” said Craig Hogan, director of Fermilab’s Center for Particle Astrophysics and the developer of the holographic noise theory. “If we see something, it will completely change ideas about space we’ve used for thousands of years.”
Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2-D bits with limited information about the precise location of objects, then space itself would fall under the same uncertainty principle. Just as matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.
Essentially, the experiment probes the limits of the universe’s ability to store information. If there are a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location – even in principle. The instrument testing these limits is Fermilab’s Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.
Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way – being carried along on a jitter of space itself.
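The brightness-fluctuation mechanism can be sketched with the textbook formula for an idealized Michelson interferometer: the recombined output intensity depends on the difference in arm lengths. The wavelength below is an assumed placeholder, not necessarily the Holometer’s actual laser wavelength.

```python
import math

# Idealized Michelson interferometer: output brightness as a function of
# the arm-length difference. The 1064 nm wavelength is an assumed
# placeholder value, not taken from the article.
WAVELENGTH = 1064e-9   # meters

def output_intensity(delta_l, i_in=1.0):
    # Light traverses each arm twice, so a length difference delta_l
    # produces a phase difference of 4*pi*delta_l / wavelength.
    phase = 4 * math.pi * delta_l / WAVELENGTH
    return i_in * math.cos(phase / 2) ** 2

# A tiny jitter in delta_l modulates the output brightness, which is
# the signal the Holometer's detectors record and analyze.
```

Equal arms give full brightness, while a quarter-wavelength difference gives darkness; any motion of the beam splitter shows up as a fluctuation between these extremes.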
“Holographic noise” is expected to be present at all frequencies, but the scientists’ challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high – millions of cycles per second – that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.
“If we find a noise we can’t get rid of, we might be detecting something fundamental about nature – a noise that is intrinsic to spacetime,” said Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. “It’s an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works.”
The Holometer experiment, funded by the U.S. Department of Energy Office of Science and other sources, is expected to gather data over the coming year.
The Holometer team comprises 21 scientists and students from Fermilab, the Massachusetts Institute of Technology, the University of Chicago, and the University of Michigan. For more information about the experiment, visit http://holometer.fnal.gov/.
Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at http://www.fnal.gov and follow us on Twitter at @FermilabToday.
The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov .
July 14, 2014
The discovery 30 years ago of soccer-ball-shaped carbon molecules called buckyballs helped to spur an explosion of nanotechnology research. Now, there appears to be a new ball on the pitch.
Researchers from Brown University, Shanxi University and Tsinghua University in China have shown that a cluster of 40 boron atoms forms a hollow molecular cage similar to a carbon buckyball. It’s the first experimental evidence that a boron cage structure – previously only a matter of speculation – does indeed exist.
“This is the first time that a boron cage has been observed experimentally,” said Lai-Sheng Wang, a professor of chemistry at Brown who led the team that made the discovery. “As a chemist, finding new molecules and structures is always exciting. The fact that boron has the capacity to form this kind of structure is very interesting.”
Wang and his colleagues describe the molecule, which they’ve dubbed borospherene, in the journal Nature Chemistry.
Carbon buckyballs are made of 60 carbon atoms arranged in pentagons and hexagons to form a sphere – like a soccer ball. Their discovery in 1985 was soon followed by discoveries of other hollow carbon structures including carbon nanotubes. Another famous carbon nanomaterial – a one-atom-thick sheet called graphene – followed shortly after.
After buckyballs, scientists wondered if other elements might form these odd hollow structures. One candidate was boron, carbon’s neighbor on the periodic table. But because boron has one less electron than carbon, it can’t form the same 60-atom structure found in the buckyball. The missing electrons would cause the cluster to collapse on itself. If a boron cage existed, it would have to have a different number of atoms.
Wang and his research group have been studying boron chemistry for years. In a paper published earlier this year, Wang and his colleagues showed that clusters of 36 boron atoms form one-atom-thick disks, which might be stitched together to form an analog to graphene, dubbed borophene. Wang’s preliminary work suggested that there was also something special about boron clusters with 40 atoms. They seemed to be abnormally stable compared to other boron clusters. Figuring out what that 40-atom cluster actually looks like required a combination of experimental work and modeling using high-powered supercomputers.
On the computer, Wang’s colleagues modeled more than 10,000 possible arrangements of 40 boron atoms bonded to each other. The simulations estimate not only the shapes of the structures but also the electron binding energy for each one – a measure of how tightly a molecule holds its electrons. The spectrum of binding energies serves as a unique fingerprint of each potential structure.
The next step was to measure the actual binding energies of boron clusters in the lab to see if they matched any of the theoretical structures generated by the computer. To do that, Wang and his colleagues used a technique called photoelectron spectroscopy.
Chunks of bulk boron are zapped with a laser to create vapor of boron atoms. A jet of helium then freezes the vapor into tiny clusters of atoms. The clusters of 40 atoms were isolated by weight then zapped with a second laser, which knocks an electron out of the cluster. The ejected electron flies down a long tube Wang calls his “electron racetrack.” The speed at which the electrons fly down the racetrack is used to determine the cluster’s electron binding energy spectrum – its structural fingerprint.
The experiments showed that 40-atom clusters form two structures with distinct binding spectra. Those spectra turned out to be a dead-on match with the spectra for two structures generated by the computer models. One was a semi-flat molecule and the other was the buckyball-like spherical cage.
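The matching step can be illustrated with a minimal sketch: compare a measured binding-energy spectrum against each candidate structure’s simulated spectrum and pick the closest by root-mean-square difference. All the numbers and structure names below are invented for illustration; they are not the actual boron-40 data or method.

```python
def rms_difference(spectrum_a, spectrum_b):
    """Root-mean-square difference between two equal-length spectra."""
    n = len(spectrum_a)
    return (sum((a - b) ** 2 for a, b in zip(spectrum_a, spectrum_b)) / n) ** 0.5

# Hypothetical candidate structures and their simulated binding-energy
# peaks in eV (all values invented for illustration only).
candidates = {
    "quasi-planar": [2.6, 3.1, 3.4, 3.9],
    "cage":         [3.0, 3.3, 3.8, 4.2],
}
measured = [3.0, 3.3, 3.7, 4.2]   # invented "photoelectron" peaks

best = min(candidates, key=lambda name: rms_difference(candidates[name], measured))
# best names the candidate whose fingerprint lies closest to the data.
```

In this toy example the measured peaks sit closest to the hypothetical “cage” fingerprint, mirroring how the real spectra singled out the borospherene structure.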
“The experimental sighting of a binding spectrum that matched our models was of paramount importance,” Wang said. “The experiment gives us these very specific signatures, and those signatures fit our models.”
The borospherene molecule isn’t quite as spherical as its carbon cousin. Rather than a series of five- and six-membered rings formed by carbon, borospherene consists of 48 triangles, four seven-sided rings and two six-membered rings. Several atoms stick out a bit from the others, making the surface of borospherene somewhat less smooth than a buckyball.
As for possible uses for borospherene, it’s a little too early to tell, Wang says. One possibility, he points out, could be hydrogen storage. Because of the electron deficiency of boron, borospherene would likely bond well with hydrogen. So tiny boron cages could serve as safe houses for hydrogen molecules.
But for now, Wang is enjoying the discovery.
“For us, just to be the first to have observed this, that’s a pretty big deal,” Wang said. “Of course if it turns out to be useful that would be great, but we don’t know yet. Hopefully this initial finding will stimulate further interest in boron clusters and new ideas to synthesize them in bulk quantities.”
The theoretical modeling was done with a group led by Prof. Si-Dian Li from Shanxi University and a group led by Prof. Jun Li from Tsinghua University. The work was supported by the U.S. National Science Foundation (CHE-1263745) and the National Natural Science Foundation of China.
July 11, 2014
Why does a relentless stream of subjective experiences normally fill your mind? Maybe that’s just one of those mysteries that will always elude us.
Yet, research from Northwestern University suggests that consciousness lies well within the realm of scientific inquiry — as impossible as that may currently seem. Although scientists have yet to agree on an objective measure to index consciousness, progress has been made with this agenda in several labs around the world.
“The debate about the neural basis of consciousness rages because there is no widely accepted theory about what happens in the brain to make consciousness possible,” said Ken Paller, professor of psychology in the Weinberg College of Arts and Sciences and director of the Cognitive Neuroscience Program at Northwestern.
“Scientists and others acknowledge that damage to the brain can lead to systematic changes in consciousness. Yet, we don’t know exactly what differentiates brain activity associated with conscious experience from brain activity that is instead associated with mental activity that remains unconscious,” he said.
In a new article, Paller and Satoru Suzuki, also professor of psychology at Northwestern, point out flawed assumptions about consciousness to suggest that a wide range of scientific perspectives can offer useful clues about consciousness.
“It’s normal to think that if you attentively inspect something you must be aware of it and that analyzing it to a high level would necessitate consciousness,” Suzuki noted. “Results from experiments on perception belie these assumptions.
“Likewise, it feels like we can freely decide at a precise moment, when actually the process of deciding begins earlier, via neurocognitive processing that does not enter awareness,” he said.
The authors write that unconscious processing can influence our conscious decisions in ways we never suspect.
If these and other similar assumptions are incorrect, the researchers state in their article, then mistaken reasoning might be behind arguments for taking the science of consciousness off the table.
“Neuroscientists sometimes argue that we must focus on understanding other aspects of brain function, because consciousness is never going to be understood,” Paller said. “On the other hand, many neuroscientists are actively engaged in probing the neural basis of consciousness, and, in many ways, this is less of a taboo area of research than it used to be.”
Experimental evidence has supported some theories about consciousness that appeal to specific types of neural communication, which can be described in neural terms or more abstractly in computational terms. Further theoretical advances can be expected if specific measures of neural activity can be brought to bear on these ideas.
Paller and Suzuki both conduct research that touches on consciousness. Suzuki studies perception, and Paller studies memory. They said it was important for them to write the article to counter the view that it is hopeless to ever make progress through scientific research on this topic.
They outlined recent advances that provide reason to be optimistic about future scientific inquiries into consciousness and about the benefits that this knowledge could bring for society.
“For example, continuing research on the brain basis of consciousness could inform our concerns about human rights, help us explain and treat diseases that impinge on consciousness, and help us perpetuate environments and technologies that optimally contribute to the well-being of individuals and of our society,” the authors wrote.
They conclude that research on human consciousness belongs within the purview of science, despite philosophical or religious arguments to the contrary.
Their paper, “The Source of Consciousness,” has been published online in the journal Trends in Cognitive Sciences.
July 11, 2014
Researchers from the Salk Institute for Biological Studies, BGI, and other institutes have for the first time evaluated the safety and reliability of existing targeted gene-correction technologies and developed a new method, TALEN-HDAdV, which significantly increases gene-correction efficiency in human induced pluripotent stem cells (hiPSCs). The study, published online in Cell Stem Cell, provides an important theoretical foundation for stem cell-based gene therapy.
The combination of stem cells and targeted genome-editing technology provides a powerful tool to model human diseases and develop potential cell replacement therapies. Although the utility of genome editing has been extensively documented, the impact of these technologies on mutational load at the whole-genome level remains unclear.
In the study, researchers performed whole-genome sequencing to evaluate the mutational load at single-base resolution in individual gene-corrected hiPSC clones in three different disease models, including Hutchinson-Gilford progeria syndrome (HGPS), sickle cell disease (SCD), and Parkinson’s disease (PD).
They evaluated the efficiencies of gene targeting and gene correction at the haemoglobin (HBB) locus using TALEN, HDAdV, and CRISPR/Cas9 nucleases, and found that the three methods mediate gene correction with similar efficiency. In addition, deep whole-genome sequencing indicated that TALEN- and HDAdV-mediated editing preserved the integrity of the patient’s genome to the greatest extent, supporting the safety and reliability of these methods.
By integrating the advantages of TALEN- and HDAdV-mediated genome editing, the researchers developed a new TALEN-HDAdV hybrid vector, which significantly increases gene-correction efficiency in hiPSCs. Nearly all mutations at the HBB locus can be corrected with the hybrid vector, meaning the newly developed technology could be applied to the repair of a range of hemoglobin disorders, such as SCD and thalassemia.
June 6, 2014
New research from Case Western Reserve University found that compassion can produce counterintuitive results, challenging prevailing views of empathy’s effects on moral judgment.
To understand how humans make moral choices, researchers asked subjects to respond to a variety of moral dilemmas, for instance: Whether to stay and defend a mortally wounded soldier until he dies or shoot him to protect him from enemy torture and enable you and five other soldiers to escape unharmed.
Leading research has said people make choices based on a struggle within their brains between thoughtful reason and automatic passion.
“But this simple reason versus passion model fails to capture that there’s a refined way of thinking with emotions, closely related to empathy and compassion,” said Anthony Jack, Director of Research at the Inamori International Center for Ethics and Excellence, associate professor of cognitive science, psychology and philosophy at Case Western Reserve and lead author of the new research.
Co-authors are Philip Robbins, of the department of philosophy at the University of Missouri, Jared P. Friedman, who just graduated with a BA in cognitive science and philosophy from Case Western Reserve, and Chris D. Meyers, of the department of philosophy at the University of Southern Mississippi. Their study is published in the journal Advances in Experimental Philosophy of Mind at: http://www.bloomsbury.com/us/advances-in-experimental-philosophy-of-mind-9781472507334/.
The researchers agree that there are two networks in the brain that fight to guide our moral decisions, but say that leading work, by Joshua Greene at Harvard University, mischaracterizes the networks involved and how they operate.
A new model
“There’s a tension between cold hard reasoning – what’s called analytic reasoning – and another type of reasoning important to emotions, self-regulation and social insight,” Jack explained. “The second type of reasoning isn’t characterized by being caught up in reflexive and primitive emotions, as Greene suggests. It’s critically important to understanding and appreciating the experiential point of view of others.”
Using functional magnetic resonance imaging (fMRI), Jack has found that the human brain has an analytic network and an empathetic network that tend to suppress one another.
For example, in a healthy brain, physics problems activate the analytic network and deactivate the empathetic. Meanwhile, videos or stories that put a subject in the shoes of another activate the empathetic network and deactivate the analytic.
In these studies, students from Case Western Reserve and groups of adults recruited through Amazon Mechanical Turk responded to a series of questions about themselves and their views. They were then asked to make choices about a series of moral conundrums.
Among the conundrums were questions involving euthanasia. The respondents clearly made different choices between actions taken for a suffering dog versus a suffering person.
“For humans, we privilege their autonomy or life spirit over their basic emotions, such as how much pain they’re in. In contrast, our view of non-human animals tends to be more reductive – we see them as little more than their emotions,” Jack said.
“Even though people talk about euthanasia with animals as the humane thing to do, people who are more empathetic have the greatest opposition to euthanasia involving a human,” he said.
Subjects were presented scenarios that included passive euthanasia, such as halting medical intervention, and active euthanasia, such as assisting in the subject’s death.
“More compassionate people didn’t think euthanasia was appropriate for humans, even when we told them the person would be in pain for the rest of his or her life,” Jack said. “That is surprising, because the way we measure compassion is to assess how much people are concerned by the suffering of others.”
Here again, the researchers argue, Greene’s model falls short. According to Greene, those who oppose utilitarian thinking (e.g., euthanasia) should have higher levels of reflexive, primitive, raw emotion. Instead, the researchers found that those who were more susceptible to personal distress were actually more likely to support euthanasia.
Opposition to utilitarian thinking was predicted specifically by compassion, not by measures of primitive or reflexive emotion. “Our culture often paints empathy as weakness,” Jack said. “Greene’s model plays into that view, suggesting that those who don’t like utilitarian thinking are intellectually weak and ruled by primitive passions. But these views are fundamentally misleading. Compassion is actually linked to stronger emotion-regulation abilities. Decades of research show that we have to overcome our reflexive feelings of aversion and distress to be ready and willing to help others.”
The researchers found that people judged by their peers to be more compassionate and empathetic – better listeners, for instance – tended to oppose utilitarian choices such as sacrificing one to save the many or euthanasia.
The findings suggest that more compassionate people have more of a sense of the sanctity of human life. “The idea that life is sacred may be hard for the reductive, analytic mind to grasp, but it is hardly a primitive or reflexive sentiment,” Jack said.
That’s not to say that, given more information, the compassionate will continue to oppose euthanasia. The conundrums were limited in an important way: the test subjects knew nothing about the wishes of the person suffering.
The researchers are continuing their studies. They expect to see a different relationship between compassion and moral judgments about euthanasia when more is understood about the person who is suffering, in particular when continued suffering undermines that person’s life narrative.