Materials for the Ages: Nanomaterials and the Coming Fourth Industrial Revolution

This nano-vaccine can stimulate an anti-tumour response in patients with cancer. Brenda Melendez and Rita Serda, NIH Image Gallery/Flickr (CC BY-NC 2.0)


The kind of material used by a society has often served as a yardstick for how developed that society is. From the stone wheel to the iPhone, a bronze axe to a Boeing 747, materials technology has been our constant companion throughout the millennia, and a driving force for continued progress and societal change. Now it is believed that we may be on the cusp of another great materials revolution, this time powered by nanotechnology. With implications for fields ranging from clean energy to medicine, nanotechnology has the potential to have far-reaching impacts on many aspects of our lives, and may earn itself naming rights to the next age in the process.

Sticks and stones and metals

During the Stone Age, our ancestors used natural materials such as animal skins, plant fibres and, of course, stones. These materials were our bread and butter before bread or butter, until humans began to experiment with metalwork. Copper, alloyed with a bit of tin, had such superior properties to stone implements that if a society failed to use the new material, they found themselves in danger of being conquered. Thus, the Bronze Age was born. Bronze had its heyday for millennia, until bronze itself was surpassed by another stronger, more versatile metal.


Further advancement in metalwork allowed the production of iron tools and weapons, followed by ones crafted from steel. These implements were stronger and sharper than their bronze counterparts, without a significant increase in weight. There is actually some contention among historians about what constitutes the end of the Iron Age. A common demarcation uses an increase in the survival of written histories, which reduced the burden previously placed on archaeology. However, some believe the Iron Age may have never really ended as iron and steel still play a substantial role in contemporary society.


Tools from the Stone Age (left) gave way to those required for metalwork in the Bronze and Iron Ages (above). Patrick Gray/Flickr (CC BY 2.0) and Wikimedia Commons (public domain)

While naming time periods after their defining material has fallen somewhat out of vogue, the progression of society is still driven by advances in materials science and technology.

The industrial revolution, globalization and the Information Age

Coal and the steam engine literally and figuratively fueled the industrial revolution, moulding us into our modern consumer culture. Before the industrial revolution, a high percentage of the population had to farm the land to provide enough food for everyone to survive.

Mechanized farming practices reduced the burden on manpower while also producing higher yields. As a result, fewer farmers were required to feed the growing urban populace. This freed up large sections of the population to pursue work in other fields, such as manufacturing, commerce and research. The importance of this transition is still evident today, including in our tendency to group countries based on how industrialized they are.

Advances in lightweight materials, such as composites and light metals, facilitated the development of aircraft that fly us around an ever-shrinking globe, and allowed us to be propelled beyond our planet’s life-supporting atmosphere. In the final decades of the 20th century, the world got even smaller following the rapid development of silicon processing chips and personal electronics. The revolutionary impact these silicon products have had on modern society can’t be overstated. Indeed, this article was written, and is likely being read, on devices powered by what is effectively processed sand.

Much to the chagrin of silicon atoms everywhere, we are not currently in the silicon age, but the information or digital age. However, we are likely on the verge of another significant advance in materials technology.

The promises of the nanotechnology age

Scientists have been heralding the Nano Age, proclaiming “nanotechnology will become the most powerful tool the human species has ever used”. This is engineering on an atomic scale, the stuff of science fiction only decades ago. Now, some experts believe nanotechnology will prove to be the foundation of our wildest dreams (or darkest nightmares).
While such claims may seem sensational or outlandish, the inherent potential of nanotechnology is apparent in current research. The University of Queensland (UQ) boasts a nanomaterials research centre with a multidisciplinary team that is working to implement nanomaterials in three key research areas: energy, environment and health. If there can be consensus about issues that are integral to the survival of humanity, the shortlist must surely include these three.




Professor Lianzhou Wang is the director of the UQ Nanomaterials Centre, and his work is focused on the first two areas: energy and environment. Prof Wang’s group aims to use nanomaterials to improve the efficiency of solar cells. Due to Australia’s abundant sunshine, the country has a vested interest and solid track record in solar cell research. However, much of that research focuses on improving the efficiency of solar cells, and usually involves increasingly expensive materials and manufacturing techniques. Prof Wang has a more egalitarian approach and is focused on developing renewable energy technology that will be more accessible to the population at large. In his lab, nanomaterials such as metal oxides and quantum dots are used to create cheap, efficient solar cells with the hope of encouraging more widespread utilization of this green power source.


 Solar panels on rooftops allow residents to take advantage of the Australian sun. Wikimedia Commons (public domain)

Using nanotechnology, Prof Wang’s group can make solar cells that are cheaper than currently available commercial silicon and thin film solar cells. They are able to do this because nanomaterials have a much lower processing temperature than conventional materials, which corresponds to a decrease in manufacturing costs. Nanomaterials also impart flexibility during processing and design, as they can be printed on both flexible and rigid substrates.

“This is where nanomaterials can play a role: performance, of course, but also cost,” said Prof Wang. By reducing the cost of the solar cells, he hopes to lower the barrier to entry of the market and thereby introduce the technology to a greater proportion of the population. In the case of nanotechnology, it turns out that less really is more.

Solar Shades

 Flexible solar panels have greater utility than their rigid counterparts, and can be used in a wider variety of scenarios, such as on tents. Wikimedia Commons (public domain)

However, not content to call that a good day’s work, Prof Wang is also working toward a solution for another issue plaguing the green energy sector: power storage. Although not particularly nuanced, a common argument against green energies asks what happens when the sun isn’t shining or the wind isn’t blowing. As frustratingly reductive as this may seem, it still presents a serious challenge. The uptake of green energy sources, including solar, is severely limited by inadequate or expensive batteries. The inability to easily and effectively store unused power for a rainy day (pardon the pun) is a limiting factor for many renewable energy technologies.

In an effort to address this issue, many research groups, including Prof Wang's, intend to improve batteries with nanotechnology. As with solar cells, the advantage stems from increased surface area. Nanoparticles, particularly nanocrystallites, have a higher surface-area-to-volume ratio than conventional battery materials, which allows shorter ion-diffusion lengths and faster charge transfer. This not only increases the storage capacity of the battery, but also reduces charging time. Using this approach, Prof Wang's group believe they have developed new cathode materials for lithium-ion batteries that could improve the range of electric cars from 450 km per charge to 600-700 km. "This is an increase of almost a third, and will make these cars competitive with most petrol-powered cars," said Prof Wang.
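The quoted range figures are easy to sanity-check. A minimal sketch, using only the numbers given in the article:

```python
# Sanity check of the quoted electric-car range figures.
old_range_km = 450            # current range per charge, as quoted
new_ranges_km = (600, 700)    # projected range per charge, as quoted

for new in new_ranges_km:
    increase = (new - old_range_km) / old_range_km
    print(f"{old_range_km} km -> {new} km: +{increase:.0%}")
```

The lower bound works out to a one-third improvement, matching Prof Wang's "almost a third"; the upper bound would be an increase of over half.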


 Electric cars such as the Tesla model S are only as good as their battery life, and nanomaterials have the potential to extend driving time on one charge. Wikimedia Commons (public domain)


Exploring how to harness nanomaterials for the betterment of the environment is another key research area for the UQ nanomaterials group. There are a variety of ways nanomaterials can assist in environmental management, but artificial photosynthesis is arguably one of the most innovative. With nanoparticles acting as a photoactive catalyst, carbon dioxide in the atmosphere reacts with water to produce by-products including carbon monoxide, methane and hydrogen gas. Prof Wang sums up how remarkable this is: "We can not only remove the CO2 from the atmosphere, we [also] get something useful in the process." All of the by-products mentioned (carbon monoxide, methane and hydrogen) are potential fuel or power sources. Consequently, artificial photosynthesis not only provides a useful tool for combating climate change, it also generates alternative fuel sources in the process.
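The article does not specify the reaction pathways used in Prof Wang's lab, but representative balanced reactions for this kind of light-driven CO2 reduction and water splitting (illustrative only, not the group's specific chemistry) look like:

```latex
\begin{align*}
\mathrm{CO_2} + 2\,\mathrm{H_2O} &\xrightarrow{\;h\nu,\ \text{catalyst}\;} \mathrm{CH_4} + 2\,\mathrm{O_2} && \text{(methane)}\\
2\,\mathrm{CO_2} &\xrightarrow{\;h\nu,\ \text{catalyst}\;} 2\,\mathrm{CO} + \mathrm{O_2} && \text{(carbon monoxide)}\\
2\,\mathrm{H_2O} &\xrightarrow{\;h\nu,\ \text{catalyst}\;} 2\,\mathrm{H_2} + \mathrm{O_2} && \text{(hydrogen)}
\end{align*}
```

In each case the light ($h\nu$) supplies the energy and the nanoparticle catalyst lowers the barrier, which is why all three by-products can later be burned or used as fuels.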

Finally, nanotechnology may prove useful for health applications in fields as diverse as targeted drug delivery, gene therapy, diagnostics and tissue engineering, demonstrating its broad potential in medicine. It is thought by some that nanotechnology may hold the key to curing cancer at the genetic level, while also providing insights into immortality.


Whether the next great age of humanity is officially labelled the Nano Age or not, nanotechnology will almost certainly play an instrumental role in future innovations and will shape societies for decades to come. Whether it be tackling cancer or climate change, it appears that anything is possible, if we just think small enough.

Breakthrough in ‘wonder’ materials paves way for flexible tech


Credit: University of Warwick


Gadgets are set to become flexible, highly efficient and much smaller, following a breakthrough in measuring two-dimensional 'wonder' materials at the University of Warwick.

Dr Neil Wilson in the Department of Physics has developed a new technique to measure the electronic structures of stacks of two-dimensional materials – flat, atomically thin, highly conductive, and extremely strong materials – for the first time.

Multiple stacked layers of 2-D materials, known as heterostructures, create highly efficient optoelectronic devices with ultrafast electrical charge transfer, which can be used in nano-circuits, and are stronger than materials used in traditional circuits.

Various heterostructures have been created using different 2-D materials, and stacking different combinations of 2-D materials creates new materials with new properties.

Dr Wilson’s technique measures the electronic properties of each layer in a stack, allowing researchers to establish the optimal structure for the fastest, most efficient transfer of electrical energy.

The technique uses the photoelectric effect to directly measure the momentum of electrons within each layer and shows how this changes when the layers are combined.

The ability to understand and quantify how 2-D material heterostructures work – and to create optimal semiconductor structures – paves the way for the development of highly efficient nano-circuitry, and smaller, flexible, more wearable gadgets.

Solar power could also be revolutionised with heterostructures, as the atomically thin layers allow for strong absorption and efficient power conversion with a minimal amount of photovoltaic material.

Dr Wilson comments on the work: “It is extremely exciting to be able to see, for the first time, how interactions between atomically thin layers change their electronic structure. This work also demonstrates the importance of an international approach to research; we would not have been able to achieve this outcome without our colleagues in the USA and Italy.”

Dr Wilson formulated the technique in collaboration with colleagues in the theory groups at the University of Warwick and the University of Cambridge, at the University of Washington in Seattle, and at the Elettra Light Source near Trieste in Italy.

Understanding how interactions between the atomic layers change their electronic structure required the help of computational models developed by Dr Nick Hine, also from Warwick's Department of Physics.


More information: Neil R. Wilson et al. Determination of band offsets, hybridization, and exciton binding in 2D semiconductor heterostructures, Science Advances (2017). DOI: 10.1126/sciadv.1601832


Photonic optical fibre with an 'Einstein's theory' effect

Coreless optical fibre: If a photonic crystal fibre is twisted, it does not require a core with a different refractive index to trap light at its centre. Credit: Science 2016/MPI for the Science of Light

Researchers at the Max Planck Institute for the Science of Light in Erlangen have discovered a new mechanism for guiding light in photonic crystal fibre (PCF). PCF is a hair-thin glass fibre with a regular array of hollow channels running along its length. When helically twisted, this spiralling array of hollow channels acts on light rays in an analogous manner to the bending of light rays when they travel through the gravitationally curved space around a star, as described by the general theory of relativity.

Optical fibres act as pipes for light. And just as the inside of a pipe is enclosed by a wall, optical fibres normally have a light-guiding core, whose glass has a higher refractive index than the glass of the enclosing outer cladding. The difference in refractive index causes the light to be reflected at the cladding interface and trapped in the core like water in a pipe. A team headed by Philip Russell, Director at the Max Planck Institute for the Science of Light, is the first to succeed in guiding light in a PCF with no core.

Photonic crystals give butterflies their colour and can also guide light

A typical photonic crystal consists of a piece of glass with holes arranged in a regular periodic pattern throughout its volume. Since glass and air have different refractive indices, the refractive index has a periodic structure. This is the reason these materials are called crystals: their atoms form an ordered, three-dimensional lattice as found in crystalline salt or silicon, for example. In a conventional crystal, the precise design of the 3-D structure determines the behaviour of electrons, resulting for example in electrical insulators, conductors and semiconductors.

In a similar manner, the optical properties of a photonic crystal depend on the periodic 3-D microstructure, which is responsible for the shimmering colours of some butterfly wings, for example. Being able to control the optical properties of materials is useful in a wide variety of applications. The photonic crystal fibres developed by Philip Russell and his team at the Erlangen-based Max Planck Institute can be used to filter specific wavelengths out of the visible spectrum or to produce very white light, for example.

As is the case with all optical fibres used in telecommunications, all conventional photonic crystal fibres have a core and cladding, each with different refractive indices or optical properties. In PCF, the air-filled channels already give the glass a different refractive index from the one it would have if completely solid.

The holes define the space in a photonic crystal fibre

“We are the first to succeed in guiding light through a coreless fibre,” says Gordon Wong from the Max Planck Institute for the Science of Light in Erlangen. The researchers working in Philip Russell’s team have fabricated a photonic crystal fibre whose complete cross-section is closely packed with a large number of air-filled channels, each around one thousandth of a millimetre in diameter, which extend along its whole length.

While the core of a conventional PCF is solid glass, the cross-sectional view of the new optical fibre resembles a sieve. The holes have regular separations and are arranged so that every hole is surrounded by a regular hexagon of neighbouring holes. “This structure defines the space in the fibre,” explains Ramin Beravat, lead author of the publication. The holes can be thought of as distance markers. The interior of the fibre then has a kind of artificial spatial structure which is formed by the regular lattice of holes.

“We have now fabricated the fibre in a twisted form,” continues Beravat. The twisting causes the hollow channels to wind around the length of the fibre in helical lines. The researchers then transmitted laser light through the fibre. In the case of the regular, coreless cross-section, one would actually expect the light to distribute itself between the holes of the sieve as evenly as their pattern determines, i.e. at the edge just as much as in the centre. Instead, the physicists discovered something surprising: the light was concentrated in the central region, where the core of a conventional optical fibre is located.

In a twisted PCF, the light follows the shortest path in the interior of the fibre

“The effect is analogous to the curvature of space in Einstein’s general theory of relativity,” explains Wong. This predicts that a heavy mass such as the Sun will distort the space surrounding it – or more precisely, distort spacetime, i.e. the combination of the three spatial dimensions with the fourth dimension, time – like a sheet of rubber into which a lead sphere is placed. Light follows this curvature. The shortest path between two points is then no longer a straight line, but a curve. During a solar eclipse, stars which should really be hidden behind the Sun thus become visible. Physicists call these shortest connecting paths “geodesics”.

“By twisting the fibre, the ‘space’ in our photonic crystal fibre becomes twisted as well,” says Wong. This leads to helical geodesic lines along which light travels. This can be intuitively understood by taking into account the fact that light always takes the shortest route through a medium. The glass strands between the air-filled channels describe spirals, which define possible paths for the light. The path through the wide spirals at the edge of the fibre is longer than that through the more closely wound spirals in its centre, however, resulting in curved ray paths that at a certain radius are reflected back towards the fibre axis.
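The geometry behind this can be sketched with a few lines of arithmetic: a glass strand sitting at radius r in a fibre twisted at rate α traces a helix, so each metre of fibre corresponds to √(1 + (αr)²) metres of strand. The twist rate and radii below are illustrative choices, not values from the paper:

```python
import math

def path_factor(r_m: float, alpha_rad_per_m: float) -> float:
    """Strand length per unit fibre length for a helix at radius r."""
    return math.sqrt(1.0 + (alpha_rad_per_m * r_m) ** 2)

alpha = 2 * math.pi / 0.005      # one full twist every 5 mm (illustrative)
for r_um in (0, 10, 30, 50):     # strand radius in micrometres
    f = path_factor(r_um * 1e-6, alpha)
    print(f"r = {r_um:2d} um: {f:.6f}x the fibre length")
```

The factor is exactly 1 on the axis and grows with radius, which is why the fibre axis is the shortest path and the light ends up concentrated at the centre.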

A twisted PCF as a large-scale environmental sensor

The more the fibre is twisted, the narrower the space within which the light is concentrated. In analogy to Einstein's theory, this corresponds to a stronger gravitational force and thus a greater deflection of the light. The Erlangen-based researchers write that they have created a "topological channel" for the light (topology is concerned with the properties of space which are conserved under continuous distortion).

The researchers emphasize that their work is basic research. They are one of the very few research groups working in this field anywhere in the world. Nevertheless, they can think of several applications for their discovery. A twisted fibre which is less twisted at certain intervals, for example, will allow a portion of the light to escape to the outside. Light could then interact with the environment at these defined locations. “This could be used for sensors which measure the absorption of a medium, for instance.” A network of these fibres could collect data over large areas as an environmental sensor.


More information: R. Beravat et al. Twist-induced guidance in coreless photonic crystal fiber: A helical channel for light, Science Advances (2016). DOI: 10.1126/sciadv.1601421


The strange link between the human mind and quantum physics

Nobody understands what consciousness is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence?

By Philip Ball for the BBC

16 February 2017

“I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.”

The American physicist Richard Feynman said this about the notorious puzzles and paradoxes of quantum mechanics, the theory physicists use to describe the tiniest objects in the Universe. But he might as well have been talking about the equally knotty problem of consciousness.

Some scientists think we already understand what consciousness is, or that it is a mere illusion. But many others feel we have not grasped where consciousness comes from at all.

The perennial puzzle of consciousness has even led some researchers to invoke quantum physics to explain it. That notion has always been met with skepticism, which is not surprising: it does not sound wise to explain one mystery with another. But such ideas are not obviously absurd, and neither are they arbitrary.

For one thing, the mind seemed, to the great discomfort of physicists, to force its way into early quantum theory. What’s more, quantum computers are predicted to be capable of accomplishing things ordinary computers cannot, which reminds us of how our brains can achieve things that are still beyond artificial intelligence. “Quantum consciousness” is widely derided as mystical woo, but it just will not go away.

What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)

Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.

When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?


Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!

Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.

Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?

These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.

The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)

The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.

Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.
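For an idealised pair of narrow slits, the stripe pattern follows a simple formula: the relative intensity at angle θ is cos²(π d sin θ / λ), where d is the slit separation and λ the wavelength. A small sketch with illustrative numbers, not tied to any particular experiment:

```python
import math

def intensity(theta_rad: float, d_m: float, wavelength_m: float) -> float:
    """Relative two-slit intensity, ignoring the single-slit envelope."""
    return math.cos(math.pi * d_m * math.sin(theta_rad) / wavelength_m) ** 2

d = 50e-6       # slit separation: 50 micrometres (illustrative)
lam = 500e-9    # wavelength: green light

print(intensity(0.0, d, lam))             # central bright fringe: 1.0
theta_dark = math.asin(lam / (2 * d))     # first dark fringe: d*sin(theta) = lam/2
print(round(intensity(theta_dark, d, lam), 6))
```

Bright fringes occur where the path difference d sin θ is a whole number of wavelengths, dark fringes where it is an odd half-number; this is the pattern that survives even when particles are sent through one at a time.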


This experiment was understood to be a characteristic of wave behaviour over 200 years ago, well before quantum theory existed.

The double-slit experiment can also be performed with quantum particles like electrons: tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.

Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. Now there is apparently nothing for each particle to interfere with along its route – yet nevertheless the pattern of particle impacts that builds up over time reveals interference bands.

The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.

But here is the really odd thing.

The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)

If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.

The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”

If that is so, objective reality seems to go out of the window.

And it gets even stranger.

Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)

If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.


An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.

It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.

It is as if nature “knows” not just if we are looking, but if we are planning to look.

(Credit: Emilio Segre Visual Archives/American Institute Physics/Science Photo Library)

Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?


That possibility was admitted in the 1930s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”

Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”

To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.

Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.

Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)

What if, Penrose asked, there are molecular structures in our brains that are able to alter their state in response to a single quantum event. Could not these structures then adopt a superposition state, just like the particles in the double slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?

Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.


After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.

Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. 

The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.

Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.

Penrose developed this idea further with American physician Stuart Hameroff. In his 1994 book Shadows of the Mind, he suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.

But there is no evidence that such a thing is remotely feasible.

Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)

It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.

Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.

Other researchers have found evidence for quantum effects in living beings

Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.

Decoherence is expected to be extremely rapid in warm and wet environments like living cells.

Nerve signals are electrical pulses, caused by the passage of electrically charged atoms across the walls of nerve cells. If one of these atoms were in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
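As a back-of-envelope check, the two timescales quoted above can be compared directly. The specific values below are order-of-magnitude stand-ins taken from the article's wording, not the precise figures in Tegmark's paper:

```python
# Order-of-magnitude figures from the article's wording (illustrative only):
decoherence_time_s = 1e-18     # "less than one billion billionth of a second"
neuron_signal_time_s = 1e-2    # "at least ten thousand trillion times as long"

# How many times faster the superposition decays than the neuron fires:
ratio = neuron_signal_time_s / decoherence_time_s
print(f"A superposition would decay ~{ratio:.0e} times faster "
      f"than a neuron can discharge a signal")
```

On these numbers the mismatch is sixteen orders of magnitude, which is why most researchers consider quantum superpositions irrelevant to neural signalling.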

As a result, ideas about quantum effects in the brain are viewed with great skepticism.

However, Penrose is unmoved by those arguments and stands by the Orch-OR hypothesis. And despite Tegmark’s prediction of ultra-fast decoherence in cells, other researchers have found evidence for quantum effects in living beings. Some argue that quantum mechanics is harnessed by migratory birds that use magnetic navigation, and by green plants when they use sunlight to make sugars in photosynthesis.

Besides, the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.

Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)

In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. 

Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.

Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.

Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.

Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.

Phosphorus spins could resist decoherence for a day or so, even in living cells

The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.

Put another way, entangled states are really superposition states involving more than one quantum particle.

Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. 

He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.

All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.

Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)

This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. 
There is some evidence that they can exist in living cells, though this is currently far from conclusive.

I decided… to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions

In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.

The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.  

Because of entanglement in Posner molecules, two such signals might in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.

He first got this idea when he started thinking about mental illness.

A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)

“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.

At this point, Fisher’s proposal is no more than an intriguing idea

Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.

“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.

On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.

Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)

But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. 
For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.

We do not even know what consciousness is …

If this is true, it would help to explain why lithium can treat bipolar disorder.

At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.

All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.

Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)

Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.

We all know what red is like, but we have no way to communicate the sensation

It does not help that there is now a New Age cottage industry devoted to notions of “quantum consciousness”, claiming that quantum mechanics offers plausible rationales for such things as telepathy and telekinesis.

As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.

But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.

In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.  

We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)

Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.

But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.

“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.

One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. 

With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.

Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.

How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)

This has prompted Kent to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousnesses alters (albeit perhaps very slightly and subtly) quantum probabilities.”

“Quantum consciousness” is widely derided as mystical woo, but it just will not go away

In other words, the mind could genuinely affect the outcomes of measurements.

It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.

He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.

If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring!

U of Toronto: A Printable Solar Cell Closer to Commercial Reality

A University of Toronto Engineering innovation could make printing solar cells as easy and inexpensive as printing a newspaper.

Dr. Hairen Tan and his team have cleared a critical manufacturing hurdle in the development of a relatively new class of solar devices called perovskite solar cells. This alternative solar technology could lead to low-cost, printable solar panels capable of turning nearly any surface into a power generator.


“Economies of scale have greatly reduced the cost of silicon manufacturing,” said Professor Ted Sargent, an expert in emerging solar technologies and the Canada Research Chair in Nanotechnology. “Perovskite solar cells can enable us to use techniques already established in the printing industry to produce solar cells at very low cost. Potentially, perovskites and silicon cells can be married to improve efficiency further, but only with advances in low-temperature processes.”


Perovskite Solar Cell
The new perovskite solar cells have achieved an efficiency of 20.1 per cent and can be manufactured at low temperatures, which reduces the cost and expands the number of possible applications. (Image: Kevin Soobrian)


Today, virtually all commercial solar cells are made from thin slices of crystalline silicon which must be processed to a very high purity. It’s an energy-intensive process, requiring temperatures higher than 1,000 degrees Celsius and large amounts of hazardous solvents.
In contrast, perovskite solar cells depend on a layer of tiny crystals — each about 1,000 times smaller than the width of a human hair — made of low-cost, light-sensitive materials. Because the perovskite raw materials can be mixed into a liquid to form a kind of ‘solar ink’, they could be printed onto glass, plastic or other materials using a simple inkjet printing process.
But, until now, there’s been a catch: in order to generate electricity, electrons excited by solar energy must be extracted from the crystals so they can flow through a circuit. That extraction happens in a special layer called the electron selective layer, or ESL. The difficulty of manufacturing a good ESL has been one of the key challenges holding back the development of perovskite solar cell devices.
“The most effective materials for making ESLs start as a powder and have to be baked at high temperatures, above 500 degrees Celsius,” said Tan. “You can’t put that on top of a sheet of flexible plastic or on a fully fabricated silicon cell — it will just melt.”
Tan and his colleagues developed a new chemical reaction that enables them to grow an ESL made of nanoparticles in solution, directly on top of the electrode. While heat is still required, the process always stays below 150 degrees C, much lower than the melting point of many plastics.
The new nanoparticles are coated with a layer of chlorine atoms, which helps them bind to the perovskite layer on top — this strong binding allows for efficient extraction of electrons. In a paper recently published in Science (“Efficient and stable solution-processed planar perovskite solar cells via contact passivation”), Tan and his colleagues report the efficiency of solar cells made using the new method at 20.1 per cent.
“This is the best ever reported for low-temperature processing techniques,” said Tan. He adds that perovskite solar cells using the older, high-temperature method are only marginally better at 22.1 per cent, and even the best silicon solar cells can only reach 26.3 per cent.
Another advantage is stability. Many perovskite solar cells experience a severe drop in performance after only a few hours, but Tan’s cells retained more than 90 per cent of their efficiency even after 500 hours of use. “I think our new technique paves the way toward solving this problem,” said Tan, who undertook this work as part of a Rubicon Fellowship.
“The Toronto team’s computational studies beautifully explain the role of the newly developed electron-selective layer. The work illustrates the rapidly-advancing contribution that computational materials science is making towards rational, next-generation energy devices,” said Professor Alan Aspuru-Guzik, an expert on computational materials science in the Department of Chemistry and Chemical Biology at Harvard University, who was not involved in the work.
“To augment the best silicon solar cells, next-generation thin-film technologies need to be process-compatible with a finished cell. This entails modest processing temperatures such as those in the Toronto group’s advance reported in Science,” said Professor Luping Yu of the University of Chicago’s Department of Chemistry. Yu is an expert on solution-processed solar cells and was not involved in the work.
Keeping cool during the manufacturing process opens up a world of possibilities for applications of perovskite solar cells, from smartphone covers that provide charging capabilities to solar-active tinted windows that offset building energy use. In the nearer term, Tan’s technology could be used in tandem with conventional solar cells.
“With our low-temperature process, we could coat our perovskite cells directly on top of silicon without damaging the underlying material,” said Tan. “If a hybrid perovskite-silicon cell can push the efficiency up to 30 per cent or higher, it makes solar power a much better economic proposition.”
Source: University of Toronto


MIT.nano ~ Inspiring Innovation at the ‘nano-scale’ … Making Our World Better – One Atom at a Time: Video



MIT is constructing, at the heart of the campus, a new 200,000-square-foot center for nanoscience and nanotechnology. This advanced facility will be a place for tinkering with atoms, one by one, and for constructing, from these fantastically small building blocks, the innovations of the future. Watch the MIT Video then Read More …



“Science is not only the disciple of Reason, but also one of Romance and Passion” ~ Stephen Hawking

Nanotechnology works at scales measured in billionths of a meter, and it is revolutionizing every aspect of our lives.

The past 70 years have seen the way we live and work transformed by two tiny inventions. The electronic transistor and the microchip are what make all modern electronics possible, and since their development in the 1940s they have been getting smaller. Today, one chip can contain as many as 5 billion transistors. If cars had followed the same development pathway, we would now be able to drive them at 300,000 mph and they would cost just $6.00 (US) each.

But to keep this progress going we need to be able to create circuits on the extremely small, nanometer scale. A nanometer (nm) is one billionth of a meter and so this kind of engineering involves manipulating individual atoms. We can do this, for example, by firing a beam of electrons at a material, or by vaporizing it and depositing the resulting gaseous atoms layer by layer onto a base.

Read More: Nanotechnology is Changing EVERYTHING … Health Care, Clean Energy, Clean Water, Quantum Computing …


New catalyst splits water into hydrogen: Near Platinum Performance – But at Much Less Cost

A schematic diagram illustrating the preparation of Ru@C2N is shown in the figure above. (Ruthenium: shown in gold, Carbon: shown in grey, Nitrogen: shown in sky-blue). (Image: UNIST)

Ulsan National Institute of Science and Technology (UNIST) scientists have developed an exciting new catalyst that can split water into hydrogen almost as well as platinum, but at lower cost and using a metal found more frequently on Earth.

As described in the journal Nature Nanotechnology (“An efficient and pH-universal ruthenium-based catalyst for the hydrogen evolution reaction”), this ruthenium (Ru)-based material works almost as efficiently as platinum and shows what is likely the highest catalytic performance reported, unaffected by the pH of the water.

The research team, led by Professor Jong-Beom Baek of Energy and Chemical Engineering at UNIST, combined Ru with C2N, a two-dimensional organic structure, to verify its performance as a water-splitting catalyst.

With the aid of this new catalyst, named Ru@C2N, it is now possible to produce hydrogen efficiently.

The technology for producing hydrogen from water requires a good catalyst to be commercially competitive. These water-splitting catalysts must exhibit high hydrogen-conversion efficiency and excellent durability, operate well at low voltage, and be economical.

The Pt-based catalysts used in the hydrogen generation reaction rely on a highly expensive noble metal, resulting in additional costs and difficulty in mass production. They are also less stable in an alkaline environment.

One solution, many researchers suggested, was to build catalysts from cheap, non-noble metals. However, because these materials corrode rapidly under acidic conditions and require very high voltages to operate, productivity was limited.

The Ru@C2N catalyst, developed by Professor Baek, is a high-performance material that satisfies all four requirements for commercially competitive water-splitting catalysts.

This material exhibits a turnover frequency (TOF) as high as that of Pt and can be operated on a low-voltage supply. In addition, it is not affected by the pH of the water and can be used in any environment.

The figure above compares the turnover frequency (TOF) of Ru@C2N with that of other catalysts. (Image: UNIST)

The synthesis process of Ru@C2N is simple. Professor Baek and his colleagues simply mixed the ruthenium salt (RuCl3) with the monomers which forms the porous two-dimensional organic structure, C2N. The Ru@C2N catalyst is then produced after going through reduction and heat treatment processes.

The researchers used the same process to build M@C2N (M = Co, Ni, Pd, Pt) catalysts, using cobalt (Co), nickel (Ni), palladium (Pd) and platinum (Pt).

When the researchers compared their hydrogen-production efficiency, the Ru@C2N catalyst exhibited the highest catalytic performance, operating at the lowest overvoltage.

“Our study not only suggests new directions in materials science, but also presents a wide range of possibilities from basic to applied science,” says Professor Baek.
“This material is expected to attract attention in many areas thanks to its scientific potential.”

Source: Ulsan National Institute of Science and Technology

Penn State U. – New ‘Flow-Cell’ Battery Recharged with Carbon Dioxide – Capturing CO2 Emissions for an Untapped Source of Energy

The pH-gradient flow cell has two channels: one containing an aqueous solution sparged with carbon dioxide (low pH) and the other containing an aqueous solution sparged with ambient air (high pH). The pH gradient causes ions to flow across the membrane between them.

Researchers have developed a type of rechargeable battery called a flow cell that can be recharged with a water-based solution containing dissolved carbon dioxide (CO2) emitted from fossil fuel power plants. The device works by taking advantage of the CO2 concentration difference between CO2 emissions and ambient air, which can ultimately be used to generate electricity.

The new flow cell produces an average power density of 0.82 W/m2, which is almost 200 times higher than values obtained using previous similar methods. Although it is not yet clear whether the process could be economically viable on a large scale, the early results appear promising and could be further improved with future research.

The scientists, Taeyoung Kim, Bruce E. Logan, and Christopher A. Gorski at The Pennsylvania State University, have published a paper on the new method of CO2-to-electricity conversion in a recent issue of Environmental Science & Technology Letters.

“This work offers an alternative, simpler means to capturing energy from CO2 emissions compared to existing technologies that require expensive catalyst materials and very high temperatures to convert CO2 into useful fuels,” said Gorski.

While the contrast of gray-white smoke against a blue sky illustrates the adverse environmental impact of burning fossil fuels, the large difference in CO2 concentration between the two gases is also what provides an untapped energy source for generating electricity.

In order to harness the potential energy in this concentration difference, the researchers first dissolved CO2 gas and ambient air in separate containers of an aqueous solution, in a process called sparging. At the end of this process, the CO2-sparged solution forms bicarbonate ions, which give it a lower pH of 7.7 compared to the air-sparged solution, which has a pH of 9.4.

After sparging, the researchers injected each solution into one of two channels in a flow cell, creating a pH gradient in the cell. The flow cell has electrodes on opposite sides of the two channels, along with a semi-porous membrane between the two channels that prevents instant mixing while still allowing ions to pass through. Due to the pH difference between the two solutions, various ions pass through the membrane, creating a voltage difference between the two electrodes and causing electrons to flow along a wire connecting the electrodes.
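As a rough, hypothetical illustration of where the voltage comes from, the thermodynamic potential available from the quoted pH difference can be estimated with the standard Nernst relation. This captures only the proton-gradient contribution, not the cell's actual operating voltage, and the formula and constants are textbook values rather than anything taken from the paper:

```python
import math

# Nernst-type estimate of the open-circuit potential from a pH gradient:
# E = (R*T/F) * ln(10) * delta_pH  (thermodynamic contribution only)
R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # room temperature, K
F = 96485.0    # Faraday constant, C/mol

pH_co2_sparged = 7.7   # solution sparged with CO2 (bicarbonate forms)
pH_air_sparged = 9.4   # solution sparged with ambient air

delta_pH = pH_air_sparged - pH_co2_sparged
E_volts = (R * T / F) * math.log(10) * delta_pH
print(f"Nernstian potential from a delta-pH of {delta_pH:.1f}: "
      f"about {E_volts * 1000:.0f} mV")
```

A pH difference of 1.7 units works out to roughly 100 mV of driving potential, which is consistent with the modest power densities reported for cells of this kind.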

After the flow cell is discharged, it can be recharged again by switching the channels that the solutions flow through. By switching the solution that flows over each electrode, the charging mechanism is reversed so that the electrons flow in the opposite direction. Tests showed that the cell maintains its performance over 50 cycles of alternating solutions.

The results also showed that the higher the pH difference between the two channels, the higher the average power density. Although the pH-gradient flow cell achieves a power density that is high compared to similar cells that convert waste CO2 to electricity, it is still much lower than the power densities of fuel cell systems that combine CO2 with other fuels, such as H2.

However, the new flow cell has certain advantages over these other devices, such as its use of inexpensive materials and room-temperature operation. These features make the flow cell attractive for practical applications at existing power plants.

“A system containing numerous identical flow cells would be installed at power plants that combust fossil fuels,” Gorski said. “The flue gas emitted from fossil fuel combustion would need to be pre-cooled, then bubbled through a reservoir of water that can be pumped through the flow cells.”

In the future, the researchers plan to further improve the flow cell performance.

“We are currently looking to see how the solution conditions can be optimized to maximize the amount of energy produced,” Gorski said. “We are also investigating if we can dissolve chemicals in the water that exhibit pH-dependent redox properties, thus allowing us to increase the amount of energy that can be recovered. The latter approach would be analogous to a flow battery, which reduces and oxidizes dissolved chemicals in aqueous solutions, except we are causing them to be reduced and oxidized here by changing the solution pH with CO2.”


More information: Taeyoung Kim et al. “A pH-Gradient Flow Cell for Converting Waste CO2 into Electricity.” Environmental Science & Technology Letters. DOI: 10.1021/acs.estlett.6b00467


Stanford University: Solving the “Storage Problem” for Renewable Energies: A New Cost Effective Re-Chargeable Aluminum Battery


One of the biggest missing links in renewable energy is affordable and high performance energy storage, but a new type of battery developed at Stanford University could be the solution.

Solar energy generation works great when the sun is shining, and wind energy is awesome when it’s windy, but neither is much help to the grid after dark or when the air is still. That has long been one of the arguments against renewable energy, at least without large-scale energy storage solutions in place, even though there are plenty of arguments for building additional solar and wind installations. If low-cost, high-performance batteries were readily available, they could go a long way toward a cleaner, more sustainable grid, and a pair of Stanford engineers has developed what could be a viable option for grid-scale energy storage.

With three relatively abundant and low-cost materials, namely aluminum, graphite, and urea, Stanford chemistry Professor Hongjie Dai and doctoral candidate Michael Angell have created a rechargeable battery that is nonflammable, very efficient, and has a long cycle life.

“So essentially, what you have is a battery made with some of the cheapest and most abundant materials you can find on Earth. And it actually has good performance. Who would have thought you could take graphite, aluminum, urea, and actually make a battery that can cycle for a pretty long time?” – Dai

A previous version of this rechargeable aluminum battery was found to be efficient and long-lived, but it employed an expensive electrolyte. The latest iteration instead uses urea as the base for the electrolyte. Urea is already produced in large quantities for fertilizer and other uses (it’s also a component of urine, but while a pee-based home battery might seem like just the ticket, it’s probably not going to happen any time soon).

According to Stanford, the new development marks the first time urea has been used in a battery, and because urea isn’t flammable (as lithium-ion batteries are), this makes it a great choice for home energy storage, where safety is of utmost importance. And the fact that the new battery is also efficient and affordable makes it a serious contender when it comes to large-scale energy storage applications as well.

“I would feel safe if my backup battery in my house is made of urea with little chance of causing fire.” – Dai

According to Angell, using the new battery as grid storage “is the main goal,” thanks to the high efficiency and long cycle life, coupled with the low cost of its components. By one metric of efficiency, called Coulombic efficiency, which measures the relationship between the unit of charge put into the battery and the output charge, the new battery is rated at 99.7%, which is high.
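Coulombic efficiency, as described above, is simply the ratio of charge recovered on discharge to charge supplied during charging. A minimal sketch of the metric, where the charge values are invented purely to reproduce the reported 99.7% figure:

```python
def coulombic_efficiency(charge_in_coulombs, charge_out_coulombs):
    """Fraction of the charge put into a battery that is recovered on discharge."""
    return charge_out_coulombs / charge_in_coulombs

# Hypothetical charge values chosen to illustrate the reported 99.7% rating:
ce = coulombic_efficiency(charge_in_coulombs=1000.0, charge_out_coulombs=997.0)
print(f"Coulombic efficiency: {ce:.1%}")
```

Note that Coulombic efficiency only counts charge, not energy: a cell can return nearly all of its charge while still losing energy to the voltage gap between charging and discharging.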

In order to meet the needs of a grid-scale energy storage system, a battery would need to last at least a decade. The current urea-based aluminum-ion batteries have lasted through about 1,500 charge cycles, and the team is still working to extend that lifetime as it develops a commercial version.

The team has published some of its results in the Proceedings of the National Academy of Sciences, under the title “High Coulombic efficiency aluminum-ion battery using an AlCl3-urea ionic liquid analog electrolyte.”


Grid-scale energy storage to manage our electricity supply would benefit from batteries that can withstand repeated cycling of discharging and charging. Current lithium-ion batteries have lifetimes of only 1,000-3,000 cycles. Now a team of researchers from Stanford University, Taiwan, and China have made a research prototype of an inexpensive, safe aluminum-ion battery that can withstand 7,500 cycles. In the aluminum-ion battery, one electrode is made from affordable aluminum, and the other is composed of carbon in the form of graphite.

Read: A step towards new, faster-charging, and safer batteries


Deeper look at unconventional oil and gas: Engineers enhance methods to characterize deposits, potential for extraction

Source: Rice University


Chemical engineers build simulations based on samples from unconventional, organic shale formations that can help predict how much oil and gas a well might produce and how best to extract it.

Simulations created by Rice University engineers will help them understand the effect of confinement on dipolar relaxation, a critical factor in interpreting nuclear magnetic resonance data. The image here is of heptane molecules in a nanotube. The study is a preliminary step toward understanding dipolar relaxation of fluids confined in the chemically and structurally heterogeneous kerogen matrix in oil shale. Credit: Dilip Asthagiri/Rice University
Understanding how oil and gas molecules, water and rocks interact at the nanoscale will help make extraction of hydrocarbons through hydraulic fracturing more efficient, according to Rice University researchers.

Rice engineers George Hirasaki and Walter Chapman are leading an effort to better characterize the contents of organic shale by combining standard nuclear magnetic resonance (NMR) — the same technology used by hospitals to see inside human bodies — with molecular dynamics simulations.

The work presented this month in the Journal of Magnetic Resonance details their method to analyze shale samples and validate simulations that may help producers determine how much oil and/or gas exist in a formation and how difficult they may be to extract.

Oil and gas drillers use NMR to characterize rock they believe contains hydrocarbons. NMR manipulates the nuclear magnetic moments of hydrogen atoms, which an applied external magnetic field forces into alignment. After radio-frequency electromagnetic pulses perturb the moments, they “relax” back to their original orientation, and NMR detects that recovery.

Because relaxation times differ depending on the molecule and its environment, the information gathered by NMR can help identify whether a molecule is gas, oil or water and the critical size of the pores that contain them.
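The fluid-typing logic described here can be illustrated with a toy calculation using the textbook mono-exponential decay M(t) = M0·exp(-t/T2). The T2 values below are assumed for illustration only, not measurements from the study:

```python
import math

def transverse_magnetization(t_ms, t2_ms, m0=1.0):
    """Mono-exponential transverse relaxation: M(t) = M0 * exp(-t / T2)."""
    return m0 * math.exp(-t_ms / t2_ms)

# Illustrative bulk T2 values in milliseconds -- assumed, not from the study.
T2_EXAMPLES = {"water": 2000.0, "light oil": 500.0, "heavy oil": 20.0}

for fluid, t2 in T2_EXAMPLES.items():
    decay = [round(transverse_magnetization(t, t2), 3) for t in (0.0, 100.0, 1000.0)]
    print(f"{fluid:>9}: M at 0, 100, 1000 ms = {decay}")
```

Fluids with a long T2 still retain most of their signal at 100 ms, while short-T2 fluids have already decayed away, so the shape of the measured decay curve encodes which fluids are present.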

“This is their eyes and ears for knowing what’s down there,” said Hirasaki, who noted that NMR instruments are among several tools in the string sent downhole to “log,” or gather information about, a well.

In conventional reservoirs, he said, the NMR log can distinguish gas, oil and water and quantify the amounts of each contained in the pores of the rock from their relaxation times — known as T1 and T2 — as well as how diffuse fluids are.

“If the rock is water-wet, then oil will relax at rates close to that of bulk oil, while water will have a surface-relaxation time that is a function of the pore size,” Hirasaki said. “This is because water is relaxed by sites at the water/mineral interface, and the ratio of the mineral surface area to water volume is larger in smaller pores. The diffusivity is inversely proportional to the viscosity of the fluid.

“Thus gas is easily distinguished from oil and water by measuring diffusivity simultaneously with the T2 relaxation time.”
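Hirasaki's pore-size argument follows from the standard surface-relaxation model used in NMR petrophysics, 1/T2 = 1/T2,bulk + ρ·(S/V), where ρ is the surface relaxivity and S/V the pore's surface-to-volume ratio. A minimal sketch (parameter values are assumed for illustration, not taken from the study) shows how shrinking the pore shortens T2:

```python
def t2_in_pore(t2_bulk_s, rho_m_per_s, radius_m):
    """Surface-relaxation model: 1/T2 = 1/T2_bulk + rho * (S/V).

    For a spherical pore, the surface-to-volume ratio is S/V = 3 / r.
    """
    s_over_v = 3.0 / radius_m                        # 1/m
    rate = 1.0 / t2_bulk_s + rho_m_per_s * s_over_v  # 1/s
    return 1.0 / rate

# Assumed, illustrative parameters (not from the study):
rho = 10e-6      # surface relaxivity, m/s
t2_bulk = 2.0    # bulk water T2, s

for r in (1e-6, 100e-9, 10e-9):  # micron-scale down to nanometer-scale pores
    print(f"pore radius {r:.0e} m -> T2 = {t2_in_pore(t2_bulk, rho, r) * 1e3:.3f} ms")
```

With these assumed numbers, a micron-scale pore relaxes over tens of milliseconds while a 10 nm pore drops below a millisecond, which is why the nanopores of shale compress and overlap the relaxation signals.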

“In unconventional reservoirs, both T1 and T2 relaxation times of water and oil are short and have considerable overlap,” he said. “Also the T1/T2 ratio can become very large in the smallest pores. The diffusivity is restricted by the nanometer-to-micron size of the pores. Thus it is a challenge to determine if the signal is from gas, oil or water.”

Hirasaki said there is debate over whether the short relaxation times in shale are due to paramagnetic sites on mineral surfaces and asphaltene aggregates, to the restricted motion of molecules confined in small pores, or to both.
“We don’t have an answer yet, but this study is the first step,” he said.

“The development of technology to drill horizontal wells and apply multiple hydraulic fractures (up to about 50) is what made oil and gas production commercially viable from unconventional resources,” Hirasaki said. “These resources were previously known as the ‘source rock,’ from which oil and gas found in conventional reservoirs had originated and migrated. 

“The source rock was too tight for commercial production using conventional technology.”

Fluids pumped downhole to fracture a horizontal well contain water, chemicals and sand that keeps the fracture “propped” open after the injection stops. The fluids are then pumped out to make room for the hydrocarbons to flow.

But not all the water sent downhole comes back. Often the chemical composition of the organic component of shale, known as kerogen, gives it an affinity that allows water molecules to bind to and block the nanoscale pores that would otherwise let oil and gas molecules through.

“Kerogen is the organic material that resisted biodegradation during deep burial,” Hirasaki said.

“When it gets to a certain temperature, the molecules start cracking and make hydrocarbon liquids. Higher temperature makes methane (natural gas). But the fluids are in pores that are so tight the technology developed for conventional reservoirs doesn’t apply anymore.”

The Rice project, managed by lead author Philip Singer, a research scientist in Hirasaki’s lab, and co-author Dilip Asthagiri, a research scientist in Chapman’s lab who is also a lecturer and director of Rice’s Professional Master’s in Chemical Engineering program, applies NMR to kerogen samples and compares the measurements with computer models that simulate how the substances interact, particularly in terms of the material’s wettability: its affinity for binding water, gas or oil molecules.

“NMR is very sensitive to fluid-surface interactions,” Singer said. “With shale, the complication we’re dealing with is the nanoscale pores. The NMR signal changes dramatically compared with measuring conventional rocks, in which pores are larger than a micron. So to understand what the NMR is telling us in shale, we need to simulate the interactions down to the nanoscale.”

The simulations mimic the molecules’ known relaxation properties and reveal how they move in such a restrictive environment. When matched with NMR signals, they help interpret conditions downhole. That knowledge could also lead to fracking fluids that are less likely to bind to the rock, improving the flow of hydrocarbons, Hirasaki said.
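One simple way to see how a measured decay is "matched" against a relaxation model is to fit T2 back out of a synthetic signal. The sketch below uses a noise-free mono-exponential toy signal and log-linear least squares; real well logs are instead inverted into multi-exponential T2 spectra, so this is only the simplest possible illustration:

```python
import math

def estimate_t2(times_s, signal):
    """Estimate T2 from a mono-exponential decay by log-linear least squares.

    Taking log(M) = log(M0) - t / T2, the fitted slope is -1 / T2.
    """
    n = len(times_s)
    ys = [math.log(s) for s in signal]
    mean_t = sum(times_s) / n
    mean_y = sum(ys) / n
    slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(times_s, ys)) / \
            sum((t - mean_t) ** 2 for t in times_s)
    return -1.0 / slope

# Synthetic decay with an assumed T2 of 0.2 s; the fit recovers it.
ts = [i * 0.01 for i in range(1, 50)]
sig = [math.exp(-t / 0.2) for t in ts]
print(round(estimate_t2(ts, sig), 3))
```

In practice, comparing fits like this against decays predicted by molecular dynamics is what lets the simulated relaxation behavior be checked against the NMR measurement.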

“If we can verify with measurements in the laboratory how fluids in highly confined or viscous systems behave, then we’ll be able to use the same types of models to describe what’s happening in the reservoir itself,” he said.

One goal is to incorporate the simulations into iSAFT — inhomogeneous Statistical Associating Fluid Theory — a pioneering method developed by Chapman and his group to simulate the free energy landscapes of complex materials and analyze their microstructures, surface forces, wettability and morphological transitions.

“Our results challenge approximations in models that have been used for over 50 years to interpret NMR and MRI (magnetic resonance imaging) data,” Chapman said. “Now that we have established the approach, we hope to explain results that have baffled scientists for years.”