MIT: New battery technology gobbles up carbon dioxide – may ultimately help reduce emissions of the greenhouse gas to the atmosphere + Could Carbon Dioxide Capture Batteries Replace Phone and EV Batteries?



This scanning electron microscope image shows the carbon cathode of a carbon-dioxide-based battery made by MIT researchers, after the battery was discharged. It shows the buildup of carbon compounds on the surface, composed of carbonate material that could be derived from power plant emissions, compared to the original pristine surface (inset). (Courtesy of the researchers)

Lithium-based battery could make use of greenhouse gas before it ever gets into the atmosphere.

A new type of battery developed by researchers at MIT could be made partly from carbon dioxide captured from power plants. Rather than attempting to convert carbon dioxide to specialized chemicals using metal catalysts, which is currently highly challenging, this battery could continuously convert carbon dioxide into a solid mineral carbonate as it discharges.


While still based on early-stage research and far from commercial deployment, the new battery formulation could open up new avenues for tailoring electrochemical carbon dioxide conversion reactions, which may ultimately help reduce the emission of the greenhouse gas to the atmosphere.


The battery is made from lithium metal, carbon, and an electrolyte that the researchers designed. The findings are described today in the journal Joule, in a paper by assistant professor of mechanical engineering Betar Gallant, doctoral student Aliza Khurram, and postdoc Mingfu He.

Currently, power plants equipped with carbon capture systems generally use up to 30 percent of the electricity they generate just to power the capture, release, and storage of carbon dioxide. Anything that can reduce the cost of that capture process, or that can result in an end product that has value, could significantly change the economics of such systems, the researchers say.
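To put that energy penalty in perspective, here is a minimal back-of-the-envelope sketch; the plant size is an arbitrary assumption and only the 30 percent capture figure comes from the article:

```python
# Illustrative estimate of the parasitic load of carbon capture on a power plant.
# The 500 MW gross output is a hypothetical example; the 30 percent energy penalty
# is the upper end of the range cited above.

gross_output_mw = 500.0          # assumed nameplate output of the plant
capture_energy_fraction = 0.30   # share of generation spent on capture, release, and storage

net_output_mw = gross_output_mw * (1.0 - capture_energy_fraction)
lost_output_mw = gross_output_mw - net_output_mw

print(f"Net output with capture: {net_output_mw:.0f} MW")
print(f"Output consumed by the capture process: {lost_output_mw:.0f} MW")
```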

However, “carbon dioxide is not very reactive,” Gallant explains, so “trying to find new reaction pathways is important.” Generally, the only way to get carbon dioxide to exhibit significant activity under electrochemical conditions is with large energy inputs in the form of high voltages, which can be an expensive and inefficient process. Ideally, the gas would undergo reactions that produce something worthwhile, such as a useful chemical or a fuel. However, efforts at electrochemical conversion, usually conducted in water, remain hindered by high energy inputs and poor selectivity of the chemicals produced.

Gallant and her co-workers, whose expertise has to do with nonaqueous (not water-based) electrochemical reactions such as those that underlie lithium-based batteries, looked into whether carbon-dioxide-capture chemistry could be put to use to make carbon-dioxide-loaded electrolytes — one of the three essential parts of a battery — where the captured gas could then be used during the discharge of the battery to provide a power output.

This approach is different from releasing the carbon dioxide back to the gas phase for long-term storage, as is now used in carbon capture and sequestration, or CCS. That field generally looks at ways of capturing carbon dioxide from a power plant through a chemical absorption process and then either storing it in underground formations or chemically altering it into a fuel or a chemical feedstock.

Instead, this team developed a new approach that could potentially be used right in the power plant waste stream to make material for one of the main components of a battery.

While interest has grown recently in the development of lithium-carbon-dioxide batteries, which use the gas as a reactant during discharge, the low reactivity of carbon dioxide has typically required the use of metal catalysts. Not only are these expensive, but their function remains poorly understood, and reactions are difficult to control.

By incorporating the gas in a liquid state, however, Gallant and her co-workers found a way to achieve electrochemical carbon dioxide conversion using only a carbon electrode. The key is to pre-activate the carbon dioxide by incorporating it into an amine solution.

“What we’ve shown for the first time is that this technique activates the carbon dioxide for more facile electrochemistry,” Gallant says. “These two chemistries — aqueous amines and nonaqueous battery electrolytes — are not normally used together, but we found that their combination imparts new and interesting behaviors that can increase the discharge voltage and allow for sustained conversion of carbon dioxide.”

They showed through a series of experiments that this approach does work, and can produce a lithium-carbon dioxide battery with voltage and capacity that are competitive with those of state-of-the-art lithium-gas batteries. Moreover, the amine acts as a molecular promoter that is not consumed in the reaction.

The key was developing the right electrolyte system, Khurram explains. In this initial proof-of-concept study, they decided to use a nonaqueous electrolyte because it would limit the available reaction pathways and therefore make it easier to characterize the reaction and determine its viability. The amine material they chose is currently used for CCS applications, but had not previously been applied to batteries.

This early system has not yet been optimized and will require further development, the researchers say. For one thing, the cycle life of the battery is limited to 10 charge-discharge cycles, so more research is needed to improve rechargeability and prevent degradation of the cell components. “Lithium-carbon dioxide batteries are years away” as a viable product, Gallant says, as this research covers just one of several needed advances to make them practical.

But the concept offers great potential, according to Gallant. Carbon capture is widely considered essential to meeting worldwide goals for reducing greenhouse gas emissions, but there are not yet proven, long-term ways of disposing of or using all the resulting carbon dioxide. Underground geological disposal is still the leading contender, but this approach remains somewhat unproven and may be limited in how much it can accommodate. It also requires extra energy for drilling and pumping.

The researchers are also investigating the possibility of developing a continuous-operation version of the process, which would use a steady stream of carbon dioxide under pressure with the amine material, rather than a preloaded supply of the material, thus allowing it to deliver a steady power output as long as the battery is supplied with carbon dioxide. Ultimately, they hope to make this into an integrated system that will carry out both the capture of carbon dioxide from a power plant’s emissions stream, and its conversion into an electrochemical material that could then be used in batteries. “It’s one way to sequester it as a useful product,” Gallant says.

“It was interesting that Gallant and co-workers cleverly combined the prior knowledge from two different areas, metal-gas battery electrochemistry and carbon-dioxide capture chemistry, and succeeded in increasing both the energy density of the battery and the efficiency of the carbon-dioxide capture,” says Kisuk Kang, a professor at Seoul National University in South Korea, who was not associated with this research.

“Even though more precise understanding of the product formation from carbon dioxide may be needed in the future, this kind of interdisciplinary approach is very exciting and often offers unexpected results, as the authors elegantly demonstrated here,” Kang adds.

MIT’s Department of Mechanical Engineering provided support for the project.


MIT Study: Adding power choices reduces cost and risk of carbon-free electricity



New MIT research shows that, unless steady, continuous carbon-free sources of electricity are included in the mix, costs of decarbonizing the electrical system could be prohibitive and end up derailing attempts to mitigate the most severe effects of global climate change. Image: Chelsea Turner

To curb greenhouse gas emissions, nations, states, and cities should aim for a mix of fuel-saving, flexible, and highly reliable sources.

In major legislation passed at the end of August, California committed to creating a 100 percent carbon-free electricity grid — once again leading other nations, states, and cities in setting aggressive policies for slashing greenhouse gas emissions. Now, a study by MIT researchers provides guidelines for cost-effective and reliable ways to build such a zero-carbon electricity system.

The best way to tackle emissions from electricity, the study finds, is to use the most inclusive mix of low-carbon electricity sources.

Costs have declined rapidly for wind power, solar power, and energy storage batteries in recent years, leading some researchers, politicians, and advocates to suggest that these sources alone can power a carbon-free grid. But the new study finds that across a wide range of scenarios and locations, pairing these sources with steady carbon-free resources that can be counted on to meet demand in all seasons and over long periods — such as nuclear, geothermal, bioenergy, and natural gas with carbon capture — is a less costly and lower-risk route to a carbon-free grid.

The new findings are described in a paper published today in the journal Joule, by MIT doctoral student Nestor Sepulveda, Jesse Jenkins PhD ’18, Fernando de Sisternes PhD ’14, and professor of nuclear science and engineering and Associate Provost Richard Lester.

The need for cost effectiveness

“In this paper, we’re looking for robust strategies to get us to a zero-carbon electricity supply, which is the linchpin in overall efforts to mitigate climate change risk across the economy,” Jenkins says. To achieve that, “we need not only to get to zero emissions in the electricity sector, but we also have to do so at a low enough cost that electricity is an attractive substitute for oil, natural gas, and coal in the transportation, heat, and industrial sectors, where decarbonization is typically even more challenging than in electricity.”

Sepulveda also emphasizes the importance of cost-effective paths to carbon-free electricity, adding that in today’s world, “we have so many problems, and climate change is a very complex and important one, but not the only one. So every extra dollar we spend addressing climate change is also another dollar we can’t use to tackle other pressing societal problems, such as eliminating poverty or disease.” Thus, it’s important for research not only to identify technically achievable options to decarbonize electricity, but also to find ways to achieve carbon reductions at the most reasonable possible cost.

To evaluate the costs of different strategies for deep decarbonization of electricity generation, the team looked at nearly 1,000 different scenarios involving different assumptions about the availability and cost of low-carbon technologies, geographical variations in the availability of renewable resources, and different policies on their use.

Regarding the policies, the team compared two different approaches. The “restrictive” approach permitted only the use of solar and wind generation plus battery storage, augmented by measures to reduce and shift the timing of demand for electricity, as well as long-distance transmission lines to help smooth out local and regional variations. The “inclusive” approach used all of those technologies but also permitted the option of using continual carbon-free sources, such as nuclear power, bioenergy, and natural gas with a system for capturing and storing carbon emissions. Under every case the team studied, the broader mix of sources was found to be more affordable.

The cost savings of the more inclusive approach relative to the more restricted case were substantial. Including continual, or “firm,” low-carbon resources in a zero-carbon resource mix lowered costs anywhere from 10 percent to as much as 62 percent, across the many scenarios analyzed. That’s important to know, the authors stress, because in many cases existing and proposed regulations and economic incentives favor, or even mandate, a more restricted range of energy resources.
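The comparison behind those percentages can be sketched, very loosely, in a few lines of code. Everything below is a made-up placeholder (the costs, the capacity mixes, and the assumption that both mixes serve the same demand), not data from the Joule study; it only illustrates how a restrictive portfolio must overbuild relative to an inclusive one:

```python
# Toy comparison of a "restrictive" (wind/solar/battery-only) portfolio versus an
# "inclusive" portfolio that also uses a firm low-carbon resource. All numbers are
# hypothetical placeholders; the real study optimizes hourly operation across
# roughly 1,000 scenarios.

def portfolio_cost(capacities_mw, cost_per_mw):
    """Total annualized capital cost of an installed portfolio."""
    return sum(capacities_mw[tech] * cost_per_mw[tech] for tech in capacities_mw)

# Assumed annualized costs in dollars per MW of capacity (illustrative only).
cost_per_mw = {"solar": 60_000, "wind": 90_000, "battery": 120_000, "firm": 150_000}

# Hypothetical capacity mixes, each assumed to meet the same demand year-round.
restrictive = {"solar": 3_000, "wind": 4_000, "battery": 2_500, "firm": 0}    # must overbuild
inclusive   = {"solar": 1_200, "wind": 1_500, "battery": 600,   "firm": 900}  # leans on firm power

c_restrictive = portfolio_cost(restrictive, cost_per_mw)
c_inclusive = portfolio_cost(inclusive, cost_per_mw)
print(f"Inclusive mix is {1 - c_inclusive / c_restrictive:.0%} cheaper in this toy example")
```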

“The results of this research challenge what has become conventional wisdom on both sides of the climate change debate,” Lester says. “Contrary to fears that effective climate mitigation efforts will be cripplingly expensive, our work shows that even deep decarbonization of the electric power sector is achievable at relatively modest additional cost. But contrary to beliefs that carbon-free electricity can be generated easily and cheaply with wind, solar energy, and storage batteries alone, our analysis makes clear that the societal cost of achieving deep decarbonization that way will likely be far more expensive than is necessary.”


A new taxonomy for electricity sources

In looking at options for new power generation in different scenarios, the team found that the traditional way of describing different types of power sources in the electrical industry — “baseload,” “load following,” and “peaking” resources — is outdated and no longer useful, given the way new resources are being used.

Rather, they suggest, it’s more appropriate to think of power sources in three new categories: “fuel-saving” resources, which include solar, wind, and run-of-the-river (that is, without dams) hydropower; “fast-burst” resources, providing rapid but short-duration responses to fluctuations in electricity demand and supply, including battery storage as well as technologies and pricing strategies that enhance the responsiveness of demand; and “firm” resources, such as nuclear, hydro with large reservoirs, biogas, and geothermal.

“Because we can’t know with certainty the future cost and availability of many of these resources,” Sepulveda notes, “the cases studied covered a wide range of possibilities, in order to make the overall conclusions of the study robust across that range of uncertainties.”

Range of scenarios

The group used a range of projections, made by agencies such as the National Renewable Energy Laboratory, as to the expected costs of different power sources over the coming decades, including costs similar to today’s and anticipated cost reductions as new or improved systems are developed and brought online. For each technology, the researchers chose a projected mid-range cost, along with a low-end and high-end cost estimate, and then studied many combinations of these possible future costs.
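A hedged sketch of how such a cost sweep can be organized in code follows; the technology list and the low/mid/high figures are invented placeholders rather than the NREL projections used in the study:

```python
# Enumerate combinations of low/mid/high cost projections per technology, in the
# spirit of the study's sweep over many scenarios. All figures are placeholders.
from itertools import product

cost_projections = {            # levelized cost in $/MWh: (low, mid, high), hypothetical
    "solar":   (20, 35, 50),
    "wind":    (25, 40, 60),
    "battery": (50, 90, 150),
    "nuclear": (70, 95, 120),
}

technologies = list(cost_projections)
scenarios = list(product(*cost_projections.values()))
print(f"{len(scenarios)} cost scenarios from {len(technologies)} technologies x 3 levels each")

for scenario in scenarios[:3]:  # show the first few combinations
    print(dict(zip(technologies, scenario)))
```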

Under every scenario, cases that were restricted to using fuel-saving and fast-burst technologies had a higher overall cost of electricity than cases using firm low-carbon sources as well, “even with the most optimistic set of assumptions about future cost reductions,” Sepulveda says.

That’s true, Jenkins adds, “even when we assume, for example, that nuclear remains as expensive as it is today, and wind and solar and batteries get much cheaper.”

The authors also found that across all of the wind-solar-batteries-only cases, the cost of electricity rises rapidly as systems move toward zero emissions, but when firm power sources are also available, electricity costs increase much more gradually as emissions decline to zero.

“If we decide to pursue decarbonization primarily with wind, solar, and batteries,” Jenkins says, “we are effectively ‘going all in’ and betting the planet on achieving very low costs for all of these resources,” as well as the ability to build out continental-scale high-voltage transmission lines and to induce much more flexible electricity demand.

In contrast, “an electricity system that uses firm low-carbon resources together with solar, wind, and storage can achieve zero emissions with only modest increases in cost even under pessimistic assumptions about how cheap these carbon-free resources become or our ability to unlock flexible demand or expand the grid,” says Jenkins. This shows how the addition of firm low-carbon resources “is an effective hedging strategy that reduces both the cost and risk” for fully decarbonizing power systems, he says.

Even though a fully carbon-free electricity supply is years away in most regions, it is important to do this analysis today, Sepulveda says, because decisions made now about power plant construction, research investments, or climate policies have impacts that can last for decades.

“If we don’t start now” in developing and deploying the widest range of carbon-free alternatives, he says, “that could substantially reduce the likelihood of getting to zero emissions.”

David Victor, a professor of international relations at the University of California at San Diego, who was not involved in this study, says, “After decades of ignoring the problem of climate change, finally policymakers are grappling with how they might make deep cuts in emissions. This new paper in Joule shows that deep decarbonization must include a big role for reliable, firm sources of electric power. The study, one of the few rigorous numerical analyses of how the grid might actually operate with low-emission technologies, offers some sobering news for policymakers who think they can decarbonize the economy with wind and solar alone.”

The research received support from the MIT Energy Initiative, the Martin Family Trust, and the Chilean Navy.

MIT researchers 3-D print colloidal crystals – for the scale-up of optical sensors, color displays, and light-guided electronics + YouTube Video



3-D-printed colloidal crystals viewed under a light microscope. Image: Felice Frankel

Technique could be used to scale-up self-assembled materials for use as optical sensors, color displays, and light-guided electronics.

MIT engineers have united the principles of self-assembly and 3-D printing using a new technique, which they highlight today in the journal Advanced Materials.

Using their direct-write colloidal assembly process, the researchers can build centimeter-high crystals, each made from billions of individual colloids, defined as particles that are between 1 nanometer and 1 micrometer across.

“If you blew up each particle to the size of a soccer ball, it would be like stacking a whole lot of soccer balls to make something as tall as a skyscraper,” says study co-author Alvin Tan, a graduate student in MIT’s Department of Materials Science and Engineering. “That’s what we’re doing at the nanoscale.”

The researchers found a way to print colloids such as polymer nanoparticles in highly ordered arrangements, similar to the atomic structures in crystals. They printed various structures, such as tiny towers and helices, that interact with light in specific ways depending on the size of the individual particles within each structure.

Nanoparticles dispensed from a needle onto a rotating stage, creating a helical crystal containing billions of nanoparticles. (Credit: Alvin Tan)

The team sees the 3-D printing technique as a new way to build self-assembled materials at larger scales that leverage the novel properties of nanocrystals, for applications such as optical sensors, color displays, and light-guided electronics.

“If you could 3-D print a circuit that manipulates photons instead of electrons, that could pave the way for future applications in light-based computing, that manipulate light instead of electricity so that devices can be faster and more energy efficient,” Tan says.

Tan’s co-authors are graduate student Justin Beroz, assistant professor of mechanical engineering Mathias Kolle, and associate professor of mechanical engineering A. John Hart.

Out of the fog

Colloids are any large molecules or small particles, typically measuring between 1 nanometer and 1 micrometer in diameter, that are suspended in a liquid or gas. Common examples of colloids are fog, which is made up of fine water droplets and other ultrafine particles dispersed in air, and whipped cream, which is a suspension of air bubbles in heavy cream. The particles in these everyday colloids are completely random in their size and the ways in which they are dispersed through the medium.

If uniformly sized colloidal particles are driven together via evaporation of their liquid solvent, causing them to assemble into ordered crystals, it is possible to create structures that, as a whole, exhibit unique optical, chemical, and mechanical properties. These crystals can exhibit properties similar to interesting structures in nature, such as the iridescent cells in butterfly wings, and the microscopic, skeletal fibers in sea sponges.

So far, scientists have developed techniques to evaporate and assemble colloidal particles into thin films to form displays that filter light and create colors based on the size and arrangement of the individual particles. But until now, such colloidal assemblies have been limited to thin films and other planar structures.

“For the first time, we’ve shown that it’s possible to build macroscale self-assembled colloidal materials, and we expect this technique can build any 3-D shape, and be applied to an incredible variety of materials,” says Hart, the senior author of the paper.

Building a particle bridge

The researchers created tiny three-dimensional towers of colloidal particles using a custom-built 3-D-printing apparatus consisting of a glass syringe and needle, mounted above two heated aluminum plates. The needle passes through a hole in the top plate and dispenses a colloid solution onto a substrate attached to the bottom plate.

The team evenly heats both aluminum plates so that as the needle dispenses the colloid solution, the liquid slowly evaporates, leaving only the particles. The bottom plate can be rotated and moved up and down to manipulate the shape of the overall structure, similar to how you might move a bowl under a soft ice cream dispenser to create twists or swirls.

Beroz says that as the colloid solution is pushed through the needle, the liquid acts as a bridge, or mold, for the particles in the solution. The particles “rain down” through the liquid, forming a structure in the shape of the liquid stream. After the liquid evaporates, surface tension between the particles holds them in place, in an ordered configuration.

As a first demonstration of their colloid printing technique, the team worked with solutions of polystyrene particles in water, and created centimeter-high towers and helices. Each of these structures contains 3 billion particles. In subsequent trials, they tested solutions containing different sizes of polystyrene particles and were able to print towers that reflected specific colors, depending on the individual particles’ size.

“By changing the size of these particles, you drastically change the color of the structure,” Beroz says. “It’s due to the way the particles are assembled, in this periodic, ordered way, and the interference of light as it interacts with particles at this scale. We’re essentially 3-D-printing crystals.”
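The particle-size/color link Beroz describes can be approximated with the textbook Bragg-Snell relation for an opal-like (face-centered-cubic) colloidal crystal. This is a generic estimate, not the analysis from the Advanced Materials paper, and the refractive indices, filling fraction, and particle diameters below are assumed values:

```python
# Estimate the normal-incidence reflected wavelength of a close-packed colloidal
# crystal using the Bragg-Snell relation: lambda = 2 * d111 * n_eff, where
# d111 = sqrt(2/3) * D is the (111) plane spacing for spheres of diameter D.
import math

def reflected_wavelength_nm(diameter_nm, n_particle=1.59, n_medium=1.0, fill=0.74):
    """Approximate peak reflected wavelength for spheres of a given diameter (nm)."""
    d111 = math.sqrt(2.0 / 3.0) * diameter_nm                           # (111) spacing
    n_eff = math.sqrt(fill * n_particle**2 + (1 - fill) * n_medium**2)  # volume-averaged index
    return 2.0 * d111 * n_eff

for d in (200, 250, 300):   # assumed polystyrene particle diameters, in nm
    print(f"D = {d} nm  ->  reflected wavelength ~ {reflected_wavelength_nm(d):.0f} nm")
```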

The team also experimented with more exotic colloidal particles, namely silica and gold nanoparticles, which can exhibit unique optical and electronic properties. They printed millimeter-tall towers made from 200-nanometer diameter silica nanoparticles, and 80-nanometer gold nanoparticles, each of which reflected light in different ways.

“There are a lot of things you can do with different kinds of particles ranging from conductive metal particles to semiconducting quantum dots, which we are looking into,” Tan says. “Combining them into different crystal structures and forming them into different geometries for novel device architectures, I think that would be very effective in fields including sensing, energy storage, and photonics.”

This work was supported, in part, by the National Science Foundation, the Singapore Defense Science Organization Postgraduate Fellowship, and the National Defense Science and Engineering Graduate Fellowship Program.

 

MIT: Fish-eye lens may entangle pairs of atoms – may be a promising vehicle for necessary building blocks in designing quantum computers



James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges.

Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

“We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

“This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

A circular path

Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.

In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that it curves, rather than bends, light, guiding it in perfect circles within the lens.
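For reference, the standard textbook form of Maxwell's fish-eye index profile captures this gradual variation (this is the classical expression, not a formula quoted from the Physical Review A paper):

```latex
% Maxwell's fish-eye lens: refractive index n(r) at radius r from the center,
% for a lens of characteristic radius R and maximum (central) index n_0.
n(r) = \frac{n_0}{1 + (r/R)^2}
```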

In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

“None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

Leonhardt, in reporting his results, briefly raised the question of whether the fish-eye lens’ single-point focus might be useful for precisely entangling pairs of atoms at opposite ends of the lens.

“Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

Playing photon ping-pong

To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.

They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

“The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
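A minimal way to picture this “shared photon” is a single-excitation toy model in which the excitation hops between atom A and atom B with some effective coupling g. This is a generic two-level sketch with an arbitrary coupling strength, not the actual model solved in the paper:

```python
# Toy model of the photon "ping-pong": one excitation coherently exchanged between
# two atoms with effective coupling g. Halfway through each exchange the excitation
# is shared equally, which is the maximally entangled configuration described above.
import numpy as np

g = 1.0                              # assumed effective coupling (arbitrary units)
for t in np.linspace(0, np.pi / g, 5):
    p_atom_a = np.cos(g * t) ** 2    # probability the excitation sits on atom A
    p_atom_b = np.sin(g * t) ** 2    # probability it sits on atom B
    # Concurrence (an entanglement measure) for a state a|10> + b|01> is 2|ab|:
    concurrence = 2 * abs(np.cos(g * t) * np.sin(g * t))
    print(f"t = {t:4.2f}: P(A) = {p_atom_a:.2f}, P(B) = {p_atom_b:.2f}, "
          f"concurrence = {concurrence:.2f}")
```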

Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

“If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

“You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

Fishy secrets

In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

“We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”

Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

“The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

MIT: Introducing the latest in textiles: Soft hardware


For the first time, researchers from MIT and AFFOA have produced fibers with embedded electronics that are so flexible they can be woven into soft fabrics and made into wearable clothing. (Courtesy of the researchers)

Researchers incorporate optoelectronic diodes into fibers and weave them into washable fabrics.

The latest development in textiles and fibers is a kind of soft hardware that you can wear: cloth that has electronic devices built right into it.

Researchers at MIT have now embedded high-speed optoelectronic semiconductor devices, including light-emitting diodes (LEDs) and diode photodetectors, within fibers that were then woven at Inman Mills, in South Carolina, into soft, washable fabrics and made into communication systems. This marks the achievement of a long-sought goal of creating “smart” fabrics by incorporating semiconductor devices — the key ingredient of modern electronics — which until now was the missing piece for making fabrics with sophisticated functionality.

This discovery, the researchers say, could unleash a new “Moore’s Law” for fibers — in other words, a rapid progression in which the capabilities of fibers would grow rapidly and exponentially over time, just as the capabilities of microchips have grown over decades.

The findings are described this week in the journal Nature in a paper by former MIT graduate student Michael Rein; his research advisor Yoel Fink, MIT professor of materials science and electrical engineering and CEO of AFFOA (Advanced Functional Fabrics of America); along with a team from MIT, AFFOA, Inman Mills, EPFL in Lausanne, Switzerland, and Lincoln Laboratory.

A spool of fine, soft fiber made using the new process shows the embedded LEDs turning on and off to demonstrate their functionality. The team has used similar fibers to transmit music to detector fibers, which work even when underwater. (Courtesy of the researchers)

Optical fibers have been traditionally produced by making a cylindrical object called a “preform,” which is essentially a scaled-up model of the fiber, then heating it. Softened material is then drawn or pulled downward under tension and the resulting fiber is collected on a spool.

The key breakthrough for producing these new fibers was to add to the preform light-emitting semiconductor diodes the size of a grain of sand, and a pair of copper wires a fraction of a hair’s width. When heated in a furnace during the fiber-drawing process, the polymer preform partially liquefied, forming a long fiber with the diodes lined up along its center and connected by the copper wires.

In this case, the solid components were two types of electrical diodes made using standard microchip technology: light-emitting diodes (LEDs) and photosensing diodes. “Both the devices and the wires maintain their dimensions while everything shrinks around them” in the drawing process, Rein says. The resulting fibers were then woven into fabrics, which were laundered 10 times to demonstrate their practicality as possible material for clothing.
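A rough feel for the scale-down in thermal drawing comes from conservation of cross-sectional area; the preform and fiber dimensions below are illustrative assumptions, not the dimensions used by the MIT/AFFOA team:

```python
# Illustrative draw-down calculation for thermal fiber drawing: if material volume
# is conserved while the cross-section shrinks, a short, thick preform yields a very
# long, thin fiber. All dimensions are assumed for illustration only.

preform_diameter_mm = 25.0      # assumed preform diameter
preform_length_m = 0.30         # assumed preform length
fiber_diameter_mm = 1.0         # assumed drawn-fiber diameter

area_ratio = (preform_diameter_mm / fiber_diameter_mm) ** 2
fiber_length_m = preform_length_m * area_ratio

print(f"Cross-sectional draw-down ratio: {area_ratio:.0f}x")
print(f"A {preform_length_m} m preform yields roughly {fiber_length_m:.0f} m of fiber")
```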

“This approach adds a new insight into the process of making fibers,” says Rein, who was the paper’s lead author and developed the concept that led to the new process. “Instead of drawing the material all together in a liquid state, we mixed in devices in particulate form, together with thin metal wires.”

One of the advantages of incorporating function into the fiber material itself is that the resulting fiber is inherently waterproof. To demonstrate this, the team placed some of the photodetecting fibers inside a fish tank. A lamp outside the aquarium transmitted music (appropriately, Handel’s “Water Music”) through the water to the fibers in the form of rapid optical signals. The fibers in the tank converted the light pulses — so rapid that the light appears steady to the naked eye — to electrical signals, which were then converted into music. The fibers survived in the water for weeks.

Though the principle sounds simple, making it work consistently, and making sure that the fibers could be manufactured reliably and in quantity, has been a long and difficult process. Staff at the Advanced Functional Fabrics of America Institute, led by Jason Cox and Chia-Chun Chung, developed the pathways to increasing yield, throughput, and overall reliability, making these fibers ready for transitioning to industry. At the same time, Marty Ellis from Inman Mills developed techniques for weaving these fibers into fabrics using a conventional industrial manufacturing-scale loom.

“This paper describes a scalable path for incorporating semiconductor devices into fibers. We are anticipating the emergence of a ‘Moore’s law’ analog in fibers in the years ahead,” Fink says. “It is already allowing us to expand the fundamental capabilities of fabrics to encompass communications, lighting, physiological monitoring, and more. In the years ahead fabrics will deliver value-added services and will no longer just be selected for aesthetics and comfort.”

He says that the first commercial products incorporating this technology will be reaching the marketplace as early as next year — an extraordinarily short progression from laboratory research to commercialization. Such rapid lab-to-market development was a key part of the reason for creating an academic-industry-government collaborative such as AFFOA in the first place, he says. These initial applications will be specialized products involving communications and safety. “It’s going to be the first fabric communication system. We are right now in the process of transitioning the technology to domestic manufacturers and industry at an unprecedented speed and scale,” he says.

In addition to commercial applications, Fink says the U.S. Department of Defense — one of AFFOA’s major supporters — “is exploring applications of these ideas to our women and men in uniform.”

Beyond communications, the fibers could potentially have significant applications in the biomedical field, the researchers say. For example, devices using such fibers might be used to make a wristband that could measure pulse or blood oxygen levels, or be woven into a bandage to continuously monitor the healing process.

The research was supported in part by the MIT Materials Research Science and Engineering Center (MRSEC) through the MRSEC Program of the National Science Foundation, by the U.S. Army Research Laboratory and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies. This work was also supported by the Assistant Secretary of Defense for Research and Engineering.

MIT Uses Nanotech to Miniaturize Electronics Into Spray Form


The ‘aerosolized electronics’ are so small they can be sprayed through the air. MIT researchers say the tiny devices could be used in oil and gas pipelines or even in the human digestive system to detect problems.

Researchers at MIT have built electronics so small they can be sprayed out like a mist.

The electronics are about the size of a human egg cell, and can act as tiny manmade indicators with the ability to sense their surroundings and store data.

On Monday, the team of MIT researchers published their findings, which involve grafting microscopic circuits onto "colloidal particles." These particles are so tiny — from 1 to 1,000 nanometers in diameter — that they can suspend themselves indefinitely in a liquid or air.

To create the tiny machines, the MIT researchers used graphene and other compounds to form circuits that signal, through changes in their electrical conductivity, when certain chemicals, such as poisonous ammonia, are nearby. The circuits were then grafted onto colloidal particles made out of a polymer called SU-8. For power, the machines rely on a photodiode that converts light into electrical current.

“What we created is a state machine that can be in two states. We start with OFF and if both light AND a chemical is detected, the particle changes its state to ON,” said Volodymyr Koman, one of the researchers, in an email. “So, there are two inputs, one output (1-bit memory) and one logical statement.”
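Koman's description maps directly onto a tiny latching state machine. The sketch below only illustrates that logic; the class and signal names are invented for this example rather than taken from the Nature Nanotechnology paper:

```python
# Minimal sketch of the 1-bit "state machine" Koman describes: the particle starts
# OFF and latches ON only when light (which also powers the circuit) AND the target
# chemical are both present. Names are invented for illustration.

class ColloidalStateMachine:
    def __init__(self):
        self.state = "OFF"                  # 1-bit memory, read out later

    def step(self, light_present: bool, chemical_detected: bool) -> str:
        # Both inputs must be true for the output bit to flip, and it stays ON.
        if light_present and chemical_detected:
            self.state = "ON"
        return self.state

particle = ColloidalStateMachine()
for light, chem in [(True, False), (False, True), (True, True), (True, False)]:
    print(f"light={light}, chemical={chem} -> {particle.step(light, chem)}")
```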

In their experiments, the researchers successfully used the tiny electronics to identify whether toxic ammonia was present in a pipeline by spraying the machines in aerosolized form. In another experiment, the electronics were able to detect the presence of soot. As a result, the researchers say the technology could be handy in factories or gas pipelines to detect potential problems.

Another use case is for medical care. The tiny machines could be sent through someone’s digestive system to scan for evidence of diseases.

However, one big limitation with the “aerosolized electronics” is they can’t communicate wirelessly. All data is stored on the tiny machines, which can be scooped out from a liquid or caught in air, and then scanned to access the results.

To make them easy to spot (at least under a microscope), the electronics are fitted with tiny reflectors. But in the future, the MIT researchers hope to add some communication capabilities to the machines, so that all data can be fetched remotely.

“We are excited about this, because on-board electronics has modular nature, i.e. we will be able to extend number of components in the future, increasing complexity,” Koman said.

The researchers published their findings in Nature Nanotechnology on Monday.

Editor’s note: This story has been updated with comment from one of the researchers.

Form Energy – A Formidable Startup Tackling the Toughest Problems in Energy Storage


Industry veterans from Tesla, Aquion and A123 are trying to create cost-effective energy storage to last for weeks and months.

A crew of battle-tested cleantech veterans raised serious cash to solve the thorniest problem in clean energy.

As wind and solar power supply more and more of the grid’s electricity, seasonal swings in production become a bigger obstacle. A low- or no-carbon electricity system needs a way to dispatch clean energy on demand, even when wind and solar aren’t producing at their peaks.

Four-hour lithium-ion batteries can help on a given day, but energy storage for weeks or months has yet to arrive at scale.

Into the arena steps Form Energy, a new startup whose founders hope for commercialization not in a couple of years, but in the next decade.

More surprising, they’ve secured $9 million in Series A funding from investors who are happy to wait that long. The funders include both a major oil company and an international consortium dedicated to stopping climate change.

“Renewables have already gotten cheap,” said co-founder Ted Wiley, who worked at saltwater battery company Aquion prior to its bankruptcy. “They are cheaper than thermal generation. In order to foster a change, they need to be just as dependable and just as reliable as the alternative. Only long-duration storage can make that happen.”

It’s hard to overstate just how difficult it will be to deliver.

The members of Form will have to make up the playbook as they go along. The founders, though, have a clear-eyed view of the immense risks. They’ve systematically identified materials that they think can work, and they have a strategy for proving them out.

Wiley and Mateo Jaramillo, who built the energy storage business at Tesla, detailed their plans in an exclusive interview with Greentech Media, describing the pathway to weeks- and months-long energy storage and how it would reorient the entirety of the grid.

The team

Form Energy tackles its improbable mission with a team of founders who have already made their mark on the storage industry, and learned from its most notable failures.

There’s Jaramillo, the former theology student who built the world’s most recognizable stationary storage brand at Tesla before stepping away in late 2016. Soon after, he started work on the unsolved long-duration storage problem with a venture he called Verse Energy.

Separately, MIT professor Yet-Ming Chiang set his sights on the same problem with a new venture, Baseload Renewables. His battery patents made their mark on the industry and launched A123 and 24M. More recently, he’d been working with the Department of Energy’s Joint Center for Energy Storage Research on an aqueous sulfur formula for cost-effective long-duration flow batteries.

He brought on Wiley, who had helped found Aquion and served as vice president of product and corporate strategy before he stepped away in 2015. Measured in real deployments, Aquion led the pack of long-duration storage companies until it suddenly went bankrupt in March 2017.

Chiang and Wiley focused on storing electricity for days to weeks; Jaramillo was looking at weeks to months. MIT’s “tough tech” incubator The Engine put in $2 million in seed funding, while Jaramillo had secured a term sheet of his own. In an unusual move, they elected to join forces rather than compete.

Rounding out the team are Marco Ferrara, the lead storage modeler at IHI who holds two Ph.D.s; and Billy Woodford, an MIT-trained battery scientist and former student of Chiang’s.

The product

Form doesn’t think of itself as a battery company.

It wants to build what Jaramillo calls a “bidirectional power plant,” one which produces renewable energy and delivers it precisely when it is needed. This would create a new class of energy resource: “deterministic renewables.”

By making renewable energy dispatchable throughout the year, this resource could replace the mid-range and baseload power plants that currently burn fossil fuels to supply the grid.

Without such a tool, transitioning to high levels of renewables creates problems.

Countries could overbuild their renewable generation to ensure that the lowest production days still meet demand, but that imposes huge costs and redundancies. One prominent 100 percent renewables scenario notoriously relied on a 15x increase in U.S. hydropower capacity to balance the grid in the winter.

The founders are remaining coy about the details of the technology itself.

Jaramillo and Wiley confirmed that both products in development use electrochemical energy storage. The one Chiang started developing uses aqueous sulfur, chosen for its abundance and cheap price relative to its storage ability. Jaramillo has not specified what he chose for seasonal storage.

What I did confirm is that they have been studying all the known materials that can store electricity, and crossing off the ones that definitely won’t work for long duration based on factors like abundance and fundamental cost per embodied energy.

“Because we’ve done the work looking at all the options in the electrochemical set, you can positively prove that almost all of them will not work,” Jaramillo said. “We haven’t been able to prove that these won’t work.”

The company has small-scale prototypes in the lab, but needs to prove that they can scale up to a power plant that’s not wildly expensive. It’s one thing to store energy for months, it’s another to do so at a cost that’s radically lower than currently available products.

“We can’t sit here and tell you exactly what the business model is, but we know that we’re engaged with the right folks to figure out what it is, assuming the technical work is successful,” Jaramillo said.

Given the diversity of power markets around the world, there likely won’t be one single business model.

The bidirectional power plant may bid in just like gas plants do today, but the dynamics of charging up on renewable energy could alter the way it engages with traditional power markets. Then again, power markets themselves could look very different by that time.

If the team can characterize a business case for the technology, the next step will be developing a full-scale pilot. If that works, full deployment comes next.

But don’t bank on that happening in a jiffy.

“It’s a decade-long project,” Jaramillo said. “The first half of that is spent on developing things and the second half is hopefully spent deploying things.”

The backers

The Form founders had to find financial backers who were comfortable chasing a market that doesn’t exist with a product that won’t arrive for up to a decade.

That would have made for a dubious proposition for cleantech VCs a couple of years ago, but the funding landscape has shifted.

The Engine, an offshoot of MIT, started in 2016 to commercialize “tough tech” with long-term capital.

“We’re here for the long shots, the unimaginable, and the unbelievable,” its website proclaims. That group funded Baseload Renewables with $2 million before it merged into Form.

Breakthrough Energy Ventures, the entity Bill Gates launched to provide “patient, risk-tolerant capital” for clean energy game-changers, joined for the Series A.

San Francisco venture capital firm Prelude Ventures joined as well. It previously bet on next-gen battery companies like the secretive QuantumScape and Natron Energy.

The round also included infrastructure firm Macquarie Capital, which has shown an interest in owning clean energy assets for the long haul.

Saudi Aramco, one of the largest oil and gas supermajors in the world, is another backer.

Saudi Arabia happens to produce more sulfur than most other countries, as a byproduct of its petrochemical industry.

While the kingdom relies on oil revenues currently, the leadership has committed to investing billions of dollars in clean energy as a way to scope out a more sustainable energy economy.

“It’s very much consistent with all of the oil supermajors taking a hard look at what the future is,” Jaramillo said. “That entire sector is starting to look beyond petrochemicals.”

Indeed, oil majors have emerged as a leading source of cleantech investment in recent months.

BP re-entered the solar industry with a $200 million investment in developer Lightsource. Total made the largest battery acquisition in history when it bought Saft in 2016; it also has a controlling stake in SunPower. Shell has ramped up investments in distributed energy, including the underappreciated thermal energy storage subsegment.

The $9 million won’t put much steel in the ground, but it’s enough to fund the preliminary work refining the technology.

“We would like to come out of this round with a clear understanding of the market need and a clear understanding of exactly how our technology meets the market need,” Wiley said.

The many paths to failure

Throughout the conversation, Jaramillo and Wiley avoided the splashy rhetoric one often hears from new startups intent on saving the world.

Instead, they acknowledge that the project could fail for a multitude of reasons. Here are just a few possibilities:

• The technologies don’t achieve radically lower cost.

• They can’t last for the 20- to 25-year lifetime expected of infrastructural assets.

• Power markets don’t allow this type of asset to be compensated.

• Financiers don’t consider the product bankable.

• Societies build a lot more transmission lines.

• Carbon capture technology removes the greenhouse gases from conventional generation.

• Small modular nuclear plants get permitting, providing zero-carbon energy on demand.

• The elusive hydrogen economy materializes.

Those last few scenarios face problems of their own. Transmission lines cost billions of dollars and provoke fierce local opposition.

Carbon capture technology hasn’t worked economically yet, although many are trying.

Small modular reactors face years of scrutiny before they can even get permission to operate in the U.S.

The costliness of hydrogen has thwarted wide-scale adoption.

One thing the Form Energy founders are not worried about is that lithium-ion makes an end run around their technology on price. That tripped up the initial wave of flow batteries, Wiley noted.

“By the time they were technically mature enough to be deployed, lithium-ion had declined in price to be at or below the price that they could deploy at,” he said.

Those early flow batteries, though, weren’t delivering much longer duration than commercially available lithium-ion. When the storage has to last for weeks or months, the cost of lithium-ion components alone makes it prohibitive.

“Our view is, just from a chemical standpoint, [lithium-ion] is not capable of declining another order of magnitude, but there does seem to be a need for storage that is an order of magnitude cheaper and an order of magnitude longer in duration than is currently being deployed,” Wiley explained.
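The scaling argument behind that view is easy to reproduce with rough numbers; the per-kilowatt-hour cell cost below is an assumed round figure used only for illustration, not a number from Form Energy or Wiley:

```python
# Why long duration is brutal for lithium-ion: for a fixed power rating, the energy
# capacity (and hence the cell cost) grows linearly with hours of storage. The
# $/kWh figure is an assumed round number, used only to show the scaling.

cell_cost_per_kwh = 150.0        # assumed lithium-ion cell cost, $/kWh
power_kw = 1.0                   # per kW of discharge capability

for duration_hours in (4, 24, 100, 700):   # daily, multi-day, weekly, ~monthly storage
    cost = power_kw * duration_hours * cell_cost_per_kwh
    print(f"{duration_hours:>4} h of storage per kW: ~${cost:,.0f} in cells alone")
```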

They also plan to avoid a scenario that helped bring down many a storage startup, Aquion and A123 included: investing lots of capital in a factory before the market had arrived.

Form Energy isn’t building small commoditized products; it’s constructing a power plant.

“When we say we’re building infrastructure, we mean that this is intended to be infrastructure,” Wiley said.

So far, at least, there isn’t much competition to speak of in the super-long duration battery market.

That could start to change. Now that brand-name investors have gotten involved, others are sure to take notice. The Department of Energy launched its own long-duration storage funding opportunity in May, targeting the 10- to 100-hour range.

It may be years before Form’s investigations produce results, if they ever do.

But the company has already succeeded in expanding the realm of what’s plausible and fundable in the energy storage industry.

* From Greentech Media, by J. Spector

MIT: Novel transmitter protects wireless data from hackers



MIT researchers developed a transmitter that frequency hops data bits ultrafast to prevent signal jamming on wireless devices. The transmitter’s design (pictured) features bulk acoustic wave resonators (side boxes) that rapidly switch between radio frequency channels, sending data bits with each hop. A channel generator (top box) selects, each microsecond, the random channels on which to send bits. Two transmitters work in alternating paths (center boxes), so that one receives the channel selection while the other sends data, ensuring ultrafast speeds. Courtesy of the researchers

Device uses ultrafast “frequency hopping” and data encryption to protect signals from being intercepted and jammed.

Today, more than 8 billion devices are connected around the world, forming an “internet of things” that includes medical devices, wearables, vehicles, and smart household and city technologies. By 2020, experts estimate that number will rise to more than 20 billion devices, all uploading and sharing data online.

But those devices are vulnerable to hacker attacks that locate, intercept, and overwrite the data, jamming signals and generally wreaking havoc. One method to protect the data is called “frequency hopping,” which sends each data packet, containing thousands of individual bits, on a random, unique radio frequency (RF) channel, so hackers can’t pin down any given packet. Hopping large packets, however, is just slow enough that hackers can still pull off an attack.

Now MIT researchers have developed a novel transmitter that frequency hops each individual 1 or 0 bit of a data packet, every microsecond, which is fast enough to thwart even the quickest hackers.

The transmitter leverages frequency-agile devices called bulk acoustic wave (BAW) resonators and rapidly switches between a wide range of RF channels, sending information for a data bit with each hop. In addition, the researchers incorporated a channel generator that, each microsecond, selects the random channel to send each bit. On top of that, the researchers developed a wireless protocol — different from the protocol used today — to support the ultrafast frequency hopping.

“With the current existing [transmitter] architecture, you wouldn’t be able to hop data bits at that speed with low power,” says Rabia Tugce Yazicigil, a postdoc in the Department of Electrical Engineering and Computer Science and first author on a paper describing the transmitter, which is being presented at the IEEE Radio Frequency Integrated Circuits Symposium. “By developing this protocol and radio frequency architecture together, we offer physical-layer security for connectivity of everything.” Initially, this could mean securing smart meters that read home utilities, control heating, or monitor the grid.

“More seriously, perhaps, the transmitter could help secure medical devices, such as insulin pumps and pacemakers, that could be attacked if a hacker wants to harm someone,” Yazicigil says. “When people start corrupting the messages [of these devices] it starts affecting people’s lives.”

Co-authors on the paper are Anantha P. Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science (EECS); former MIT postdoc Phillip Nadeau; former MIT undergraduate student Daniel Richman; EECS graduate student Chiraag Juvekar; and visiting research student Kapil Vaidya.

Ultrafast frequency hopping

One particularly sneaky attack on wireless devices is called selective jamming, where a hacker intercepts and corrupts data packets transmitting from a single device but leaves all other nearby devices unscathed. Such targeted attacks are difficult to identify, as they’re often mistaken for a poor wireless link, and are difficult to combat with current packet-level frequency-hopping transmitters.

With frequency hopping, a transmitter sends data on various channels, based on a predetermined sequence shared with the receiver. Packet-level frequency hopping sends one data packet at a time, on a single 1-megahertz channel, across a range of 80 channels. A packet takes around 612 microseconds for BLE-type transmitters to send on that channel. But attackers can locate the channel during the first 1 microsecond and then jam the packet.

“Because the packet stays in the channel for a long time, and the attacker only needs a microsecond to identify the frequency, the attacker has enough time to overwrite the data in the remainder of the packet,” Yazicigil says.
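
A quick back-of-the-envelope check, using the figures quoted above, shows just how lopsided that timing is:

```python
# Rough numbers from the article: a BLE-type packet dwells on one channel
# for ~612 microseconds, and an attacker needs ~1 microsecond to find it.
packet_duration_us = 612
detection_time_us = 1

vulnerable_us = packet_duration_us - detection_time_us
vulnerable_fraction = vulnerable_us / packet_duration_us
print(f"Attacker can corrupt ~{vulnerable_fraction:.1%} of the packet")
# -> roughly 99.8% of the packet is still exposed after the channel is found
```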

To build their ultrafast frequency-hopping method, the researchers first replaced a crystal oscillator — which vibrates to create an electrical signal — with an oscillator based on a BAW resonator. However, the BAW resonators only cover about 4 to 5 megahertz of frequency channels, falling far short of the 80-megahertz range available in the 2.4-gigahertz band designated for wireless communication. Continuing recent work on BAW resonators — in a 2017 paper co-authored by Chandrakasan, Nadeau, and Yazicigil — the researchers incorporated components that divide an input frequency into multiple frequencies. An additional mixer component combines the divided frequencies with the BAW’s radio frequencies to create a host of new radio frequencies that can span about 80 channels.
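
The numbers below are purely illustrative (they are not the actual circuit values from the paper), but they show how mixing a narrowly tunable source with a handful of divided-down offset tones can span a full 80-channel band:

```python
# Illustrative sketch: a BAW-based oscillator that natively tunes over only
# ~5 MHz is mixed with a set of hypothetical divided-down offset tones to
# reach 80 one-MHz channels in the 2.4-GHz band.
baw_tones_mhz = [2402 + k for k in range(5)]     # ~5 MHz of native BAW tuning
offset_tones_mhz = [5 * n for n in range(16)]    # 16 hypothetical mixer offsets

channels = sorted({b + o for b in baw_tones_mhz for o in offset_tones_mhz})
print(len(channels), "distinct 1-MHz channels, spanning",
      channels[0], "to", channels[-1], "MHz")
# -> 80 distinct 1-MHz channels, spanning 2402 to 2481 MHz
```

The point is simply that a small set of source tones multiplied by a small set of offsets covers the whole band, even though the resonator itself tunes over only a few megahertz.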

Randomizing everything

The next step was randomizing how the data is sent. In traditional modulation schemes, when a transmitter sends data on a channel, that channel will display an offset — a slight deviation in frequency. With BLE modulations, that offset is always a fixed 250 kilohertz for a 1 bit and a fixed -250 kilohertz for a 0 bit. A receiver simply notes the channel’s 250-kilohertz or -250-kilohertz offset as each bit is sent and decodes the corresponding bits.
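
In pseudocode terms, that conventional fixed-offset decoding rule is roughly as simple as the sketch below; the helper function is illustrative, not an actual BLE stack API:

```python
# Illustrative sketch of conventional fixed-offset decoding: the bit value is
# readable by anyone who can measure the frequency offset on the channel.
def decode_fixed_offset_bit(measured_offset_khz: float) -> int:
    # +250 kHz encodes a 1, -250 kHz encodes a 0 (BLE-style modulation)
    return 1 if measured_offset_khz > 0 else 0

print(decode_fixed_offset_bit(+250.0))  # -> 1
print(decode_fixed_offset_bit(-250.0))  # -> 0
```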

But that means, if hackers can pinpoint the carrier frequency, they too have access to that information. If hackers can see a 250-kilohertz offset on, say, channel 14, they’ll know that’s an incoming 1 and begin messing with the rest of the data packet.

To combat that, the researchers employed a system that each microsecond generates a pair of separate channels across the 80-channel spectrum. Based on a secret key preshared with the transmitter, the receiver does some calculations to designate one channel to carry a 1 bit and the other to carry a 0 bit. The transmitter sends the bit only on the channel assigned to its value, so that channel will always carry more energy. The receiver then compares the energy in the two channels, notes which one is higher, and decodes the bit accordingly.

For example, by using the preshared key, the receiver will calculate that a 1 will be sent on channel 14 and a 0 will be sent on channel 31 for one hop. But the transmitter only wants the receiver to decode a 1. The transmitter will send a 1 on channel 14 and send nothing on channel 31. The receiver sees that channel 14 has the higher energy and, knowing that’s the 1-bit channel, decodes a 1. In the next microsecond, the transmitter selects two more random channels for the next bit and repeats the process.
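
A minimal sketch of that selection-and-decoding logic, assuming a hypothetical keyed hash as the channel generator (the paper’s actual generator and protocol differ in detail):

```python
import hashlib

N_CHANNELS = 80

def channel_pair(shared_key: bytes, hop_index: int) -> tuple[int, int]:
    """Derive the (1-bit channel, 0-bit channel) pair for this microsecond hop.
    Hypothetical construction for illustration only."""
    digest = hashlib.sha256(shared_key + hop_index.to_bytes(8, "big")).digest()
    ch_one = digest[0] % N_CHANNELS
    ch_zero = digest[1] % N_CHANNELS
    if ch_zero == ch_one:                      # ensure the two channels differ
        ch_zero = (ch_zero + 1) % N_CHANNELS
    return ch_one, ch_zero

def transmit(bit: int, key: bytes, hop: int) -> int:
    """Return the single channel the transmitter actually energizes."""
    ch_one, ch_zero = channel_pair(key, hop)
    return ch_one if bit == 1 else ch_zero

def receive(energy_per_channel: dict[int, float], key: bytes, hop: int) -> int:
    """Decode by comparing measured energy on the two expected channels."""
    ch_one, ch_zero = channel_pair(key, hop)
    e_one = energy_per_channel.get(ch_one, 0.0)
    e_zero = energy_per_channel.get(ch_zero, 0.0)
    return 1 if e_one > e_zero else 0

key = b"preshared-secret"
ch = transmit(1, key, hop=0)          # transmitter sends a 1 on one channel
bit = receive({ch: 1.0}, key, hop=0)  # receiver sees energy only on that channel
print(ch, bit)                        # -> some channel index, 1
```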

Because the channel selection is quick and random, and there is no fixed frequency offset, a hacker can never tell which bit is going to which channel. “For an attacker, that means they can’t do any better than random guessing, making selective jamming infeasible,” Yazicigil says.

As a final innovation, the researchers integrated two transmitter paths into a time-interleaved architecture. This allows the inactive transmitter to receive the selected next channel, while the active transmitter sends data on the current channel. Then, the workload alternates. Doing so ensures a 1-microsecond frequency-hop rate and, in turn, preserves a 1-megabit-per-second data rate similar to BLE-type transmitters.
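
Conceptually, the ping-pong between the two paths works like the loop below; this is only a schematic software sketch of a scheme the chip implements in RF hardware:

```python
# Schematic sketch of the time-interleaved idea: while one path transmits
# bit i on its already-tuned channel, the other path tunes to the channel
# for bit i+1, so every microsecond a bit goes out with no tuning gap.
def send_bits(bits, channels):
    paths = [None, None]                 # channel each path is currently tuned to
    paths[0] = channels[0]               # path 0 is pre-tuned for the first bit
    for i, bit in enumerate(bits):
        active, idle = i % 2, (i + 1) % 2
        print(f"t={i} us: path {active} sends bit {bit} on channel {paths[active]}")
        if i + 1 < len(bits):
            paths[idle] = channels[i + 1]   # idle path tunes for the next hop

send_bits([1, 0, 1], channels=[14, 31, 7])
```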

“Most of the current vulnerability [to signal jamming] stems from the fact that transmitters hop slowly and dwell on a channel for several consecutive bits. Bit-level frequency hopping makes it very hard to detect and selectively jam the wireless link,” says Peter Kinget, a professor of electrical engineering and chair of the department at Columbia University. “This innovation was only possible by working across the various layers in the communication stack requiring new circuits, architectures, and protocols. It has the potential to address key security challenges in IoT devices across industries.”

The work was supported by the Hong Kong Innovation and Technology Fund, the National Science Foundation, and Texas Instruments. The chip fabrication was supported by the TSMC University Shuttle Program.

MIT engineers configure RFID tags to work as sensors


MIT-RFID-Sensing_0

MIT researchers are developing RFID stickers that sense their environment, enabling low-cost monitoring of chemicals and other signals in the environment. Image: Chelsea Turner, MIT

Platform may enable continuous, low-cost, reliable devices that detect chemicals in the environment.

 

These days, many retailers and manufacturers are tracking their products using RFID, or radio-frequency identification tags. Often, these tags come in the form of paper-based labels outfitted with a simple antenna and memory chip. When slapped on a milk carton or jacket collar, RFID tags act as smart signatures, transmitting information to a radio-frequency reader about the identity, state, or location of a given product.

In addition to keeping tabs on products throughout a supply chain, RFID tags are used to trace everything from casino chips and cattle to amusement park visitors and marathon runners.

The Auto-ID Lab at MIT has long been at the forefront of developing RFID technology. Now engineers in this group are flipping the technology toward a new function: sensing. They have developed a new ultra-high-frequency, or UHF, RFID tag-sensor configuration that senses spikes in glucose and wirelessly transmits this information. In the future, the team plans to tailor the tag to sense chemicals and gases in the environment, such as carbon monoxide.

“People are looking toward more applications like sensing to get more value out of the existing RFID infrastructure,” says Sai Nithin Reddy Kantareddy, a graduate student in MIT’s Department of Mechanical Engineering. “Imagine creating thousands of these inexpensive RFID tag sensors which you can just slap onto the walls of an infrastructure or the surrounding objects to detect common gases like carbon monoxide or ammonia, without needing an additional battery. You could deploy these cheaply, over a huge network.”

Kantareddy developed the sensor with Rahul Bhattacharyya, a research scientist in the group, and Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president of open learning at MIT. The researchers presented their design at the IEEE International Conference on RFID, and their results appear online this week.

“RFID is the cheapest, lowest-power RF communication protocol out there,” Sarma says. “When generic RFID chips can be deployed to sense the real world through tricks in the tag, true pervasive sensing can become reality.”

Confounding waves

Currently, RFID tags are available in a number of configurations, including battery-assisted and “passive” varieties. Both types of tags contain a small antenna which communicates with a remote reader by backscattering the RF signal, sending it a simple code or set of data that is stored in the tag’s small integrated chip. Battery-assisted tags include a small battery that powers this chip. Passive RFID tags are designed to harvest energy from the reader itself, which naturally emits just enough radio waves within FCC limits to power the tag’s memory chip and receive a reflected signal.

Recently, researchers have been experimenting with ways to turn passive RFID tags into sensors that can operate over long stretches of time without the need for batteries or replacements. These efforts have typically focused on manipulating a tag’s antenna, engineering it in such a way that its electrical properties change in response to certain stimuli in the environment. As a result, the antenna should reflect radio waves back to a reader at a characteristically different frequency or signal strength, indicating that a certain stimulus has been detected.

For instance, Sarma’s group previously designed an RFID tag-antenna that changes the way it transmits radio waves in response to moisture content in the soil. The team also fabricated an antenna to sense signs of anemia in blood flowing across an RFID tag.

But Kantareddy says there are drawbacks to such antenna-centric designs, the main one being “multipath interference,” a confounding effect in which radio waves, even from a single source such as an RFID reader or antenna, can reflect off multiple surfaces.

“Depending on the environment, radio waves are reflecting off walls and objects before they reflect off the tag, which interferes and creates noise,” Kantareddy says. “With antenna-based sensors, there’s more chance you’ll get false positives or negatives, meaning a sensor will tell you it sensed something even if it didn’t, because it’s affected by the interference of the radio fields. So it makes antenna-based sensing a little less reliable.”

Chipping away

Sarma’s group took a new approach: Instead of manipulating a tag’s antenna, they tried tailoring its memory chip. They purchased off-the-shelf integrated chips that are designed to switch between two different power modes: an RF energy-based mode, similar to fully passive RFIDs; and a local energy-assisted mode, such as from an external battery or capacitor, similar to semipassive RFID tags.

The team worked each chip into an RFID tag with a standard radio-frequency antenna. In a key step, the researchers built a simple circuit around the memory chip, enabling the chip to switch to a local energy-assisted mode only when it senses a certain stimulus. When in this assisted mode (commercially called battery-assisted passive mode, or BAP), the chip emits a new protocol code, distinct from the normal code it transmits when in a passive mode. A reader can then interpret this new code as a signal that a stimulus of interest has been detected.
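
On the reader side, interpreting a tag’s reply then reduces to distinguishing the two codes, roughly as in this sketch (the code values and function names are placeholders, not the actual air-interface codewords):

```python
# Placeholder protocol-code values for illustration; real tags use the codes
# defined by the RFID air-interface standard.
PASSIVE_CODE = 0x00      # tag running on harvested RF energy only
ASSISTED_CODE = 0x01     # tag has switched to battery-assisted passive (BAP) mode

def interpret_tag_reply(tag_id: str, protocol_code: int) -> str:
    """Map a tag's reply code to a sensing event for the application layer."""
    if protocol_code == ASSISTED_CODE:
        return f"{tag_id}: stimulus detected (tag is in assisted mode)"
    return f"{tag_id}: no stimulus (tag is in passive mode)"

print(interpret_tag_reply("tag-042", ASSISTED_CODE))
```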

Kantareddy says this chip-based design can create more reliable RFID sensors than antenna-based designs because it essentially separates a tag’s sensing and communication capabilities. In antenna-based sensors, both the chip that stores data and the antenna that transmits data are dependent on the radio waves reflected in the environment. With this new design, a chip does not have to depend on confounding radio waves in order to sense something.

“We hope reliability in the data will increase,” Kantareddy says. “There’s a new protocol code along with the increased signal strength whenever you’re sensing, and there’s less chance for you to confuse when a tag is sensing versus not sensing.”

“This approach is interesting because it also solves the problem of information overload that can be associated with large numbers of tags in the environment,” Bhattacharyya says. “Instead of constantly having to parse through streams of information from short-range passive tags, an RFID reader can be placed far enough away so that only events of significance are communicated and need to be processed.”

“Plug-and-play” sensors

As a demonstration, the researchers developed an RFID glucose sensor. They set up commercially available glucose-sensing electrodes, filled with the enzyme glucose oxidase. When the enzyme interacts with glucose, the electrode produces an electric charge, acting as a local energy source, or battery.

The researchers attached these electrodes to an RFID tag’s memory chip and circuit. When they added glucose to each electrode, the resulting charge caused the chip to switch from its passive RF power mode to the local charge-assisted power mode. The more glucose they added, the longer the chip stayed in this secondary power mode.

Kantareddy says that a reader, sensing this new power mode, can interpret this as a signal that glucose is present. The reader can potentially determine the amount of glucose by measuring the time during which the chip stays in the battery-assisted mode: The longer it remains in this mode, the more glucose there must be.
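
In principle, quantification then comes down to a calibration curve that maps assisted-mode duration to concentration, along the lines of this sketch (the calibration points are invented for illustration):

```python
# Invented calibration data for illustration: seconds spent in battery-assisted
# mode versus glucose concentration (mM). A real sensor would be calibrated
# against reference measurements.
calibration = [(0.0, 0.0), (2.0, 1.0), (5.0, 3.0), (10.0, 7.0)]

def glucose_from_duration(duration_s: float) -> float:
    """Linearly interpolate glucose concentration from assisted-mode duration."""
    for (t0, g0), (t1, g1) in zip(calibration, calibration[1:]):
        if t0 <= duration_s <= t1:
            return g0 + (g1 - g0) * (duration_s - t0) / (t1 - t0)
    return calibration[-1][1]            # clamp beyond the calibrated range

print(glucose_from_duration(3.5))        # -> 2.0 (mM, interpolated)
```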

While the team’s sensor was able to detect glucose, its performance was below that of commercially available glucose sensors. The goal, Kantareddy says, was not necessarily to develop an RFID glucose sensor, but to show that the group’s design could be manipulated to sense something more reliably than antenna-based sensors.

“With our design, the data is more trustable,” Kantareddy says.

The design is also more efficient. A tag can run passively on RF energy reflected from a nearby reader until a stimulus of interest comes around. The stimulus itself produces a charge, which powers the tag’s chip to send an alarm code to the reader. The very act of sensing, therefore, supplies the additional power the integrated chip needs.

“Since you’re getting energy from RF and your electrodes, this increases your communication range,” Kantareddy says. “With this design, your reader can be 10 meters away, rather than 1 or 2. This can decrease the number and cost of readers that, say, a facility requires.”

Going forward, he plans to develop an RFID carbon monoxide sensor by combining his design with different types of electrodes engineered to produce a charge in the presence of the gas.

“With antenna-based designs, you have to design specific antennas for specific applications,” Kantareddy says. “With ours, you can just plug and play with these commercially available electrodes, which makes this whole idea scalable. Then you can deploy hundreds or thousands, in your house or in a facility where you could monitor boilers, gas containers, or pipes.”

This research was supported, in part, by the GS1 organization.

MIT Technology Review: Sustainable Energy: The daunting math of climate change means we’ll need carbon capture … eventually


 

MIT CC Friedman unknown-1_4

 Net Power’s pilot natural gas plant with carbon capture, near Houston, Texas.

An Interview with Julio Friedmann

At current rates of greenhouse-gas emissions, the world could lock in 1.5 °C of warming as soon as 2021, an analysis by the website Carbon Brief has found. We’re on track to blow the carbon budget for 2 °C by 2036.

Amid this daunting climate math, many researchers argue that capturing carbon dioxide from power plants, factories, and the air will have to play a big part in any realistic efforts to limit the dangers of global warming.

If it can be done economically, carbon capture and storage (CCS) offers the world additional flexibility and time to make the leap to cleaner systems. It means we can retrofit, rather than replace, vast parts of the global energy infrastructure. And once we reach disastrous levels of warming, so-called direct air capture offers one of the only ways to dig our way out of trouble, since carbon dioxide otherwise stays in the atmosphere for thousands of years.

Julio Friedmann has emerged as one of the most ardent advocates of these technologies. He oversaw research and development efforts on clean coal and carbon capture at the US Department of Energy’s Office of Fossil Energy under the last administration. Among other roles, he’s now working with or advising the Global CCS Institute, the Energy Futures Initiative, and Climeworks, a Switzerland-based company already building pilot plants that pull carbon dioxide from the air.

In an interview with MIT Technology Review, Friedmann argues that the technology is approaching a tipping point: a growing number of projects demonstrate that it works in the real world, and that it is becoming more reliable and affordable. He adds that the boosted US tax credit for capturing and storing carbon, passed in the form of the Future Act as part of the federal budget earlier this year, will push forward many more projects and help create new markets for products derived from carbon dioxide (see “The carbon-capture era may finally be starting”).

But serious challenges remain. Even with the tax credit, companies will incur steep costs by adding carbon capture systems to existing power plants. And a widely cited 2011 study, coauthored by MIT researcher Howard Herzog, found that direct air capture will require vast amounts of energy and cost 10 times as much as scrubbing carbon from power plants.

(This interview has been edited for length and clarity.)

In late February, you wrote a Medium post saying that with the passage of the increased tax credit for carbon capture and storage, we’ve “launched the climate counter-strike.” Why is that a big deal?

It actually sets a price on carbon formally. It says you should get paid to not emit carbon dioxide, and you should get paid somewhere between $35 a ton and $50 a ton. So that is already a massive change. In addition to that, it says you can do one of three things: you can store CO2, you can use it for enhanced oil recovery, or you can turn it into stuff. Fundamentally, it says not emitting has value.

As I’ve said many times before, the lack of progress in deploying CCS up until this point is not a question of cost. It’s really been a question of finance.

The Future Act creates that financing.

I identified an additional provision which said not only can you consider a power plant a source or an industrial site a source, you can consider the air a source.

Even if we zeroed out all our emissions today, we still have a legacy of harm of two trillion tons of CO2 in the air, and we need to do something about that.

And this law says, yeah, we should. It says we can take carbon dioxide out of the air and turn it into stuff.

At the Petra Nova plant in Texas, my understanding is the carbon capture costs are something like $60 to $70 a ton, which is still going to outstrip the tax credit today. How are we going to close that gap?

There are many different ways to go about it. For example, the state of New Jersey today passed a 90 percent clean energy portfolio standard. Changing the policy from a renewable portfolio standard [which would exclude CCS technologies] to a clean energy standard [which would allow them] allowed higher ambition.

In that context, somebody who would build a CCS project and would get a contract to deliver that power, or deliver that emissions abatement, can actually again get staked, get financed, and get built. That can happen without any technology advancement.

The technology today is already cost competitive. CCS today, as a retrofit, is cheaper than a whole bunch of stuff. It’s cheaper than new-build nuclear, it’s cheaper than offshore wind. It’s cheaper than a whole bunch of things we like, and it’s cheaper than rooftop solar, almost everywhere. It’s cheaper than utility-scale concentrating solar pretty much everywhere, and it is cheaper than what solar and wind were 10 years ago.

What do you make of the critique that this is all just going to perpetuate the fossil-fuel industry?

The enemy is not fossil fuels; the enemy is emissions.

In a place like California that has terrific renewable resources and a good infrastructure for renewable energy, maybe you can get to zero [fossil fuels] someday.

If you’re in Saskatchewan, you really can’t do that. It is too cold for too much of the year, and they don’t have solar resources, and their wind resources are problematic because they’re so strong they tear up the turbines. Which is why they did the CCS project in Saskatchewan. For them it was the right solution.

Shifting gears to direct air capture, the basic math says that you’re moving about 2,500 molecules of air to capture one molecule of CO2, since CO2 makes up only around 400 parts per million of the atmosphere. How good are we getting at this, and how cheaply can we do it at this point?

If you want to optimize the way that you would reduce carbon dioxide economy-wide, direct air capture is the last thing you would tackle. Turns out, though, that we don’t live in that society. We are not optimizing anything in any way.

So instead we realize we have this legacy of emissions in the atmosphere and we need tools to manage that. So there are companies like Climeworks, Carbon Engineering, and Global Thermostat. Those guys said we know we’re going to need this technology, so I’m going to work now. They’ve got decent financing, and the costs are coming down and improving (see “Can sucking CO2 out of the atmosphere really work?”).

The cost for all of these things now today, all-in costs, is somewhere between $300 and $600 a ton. I’ve looked inside all those companies and I believe all of them are on a glide path to get to below $200 a ton by somewhere between 2022 and 2025. And I believe that they’re going to get down to $100 a ton by 2030. At that point, these are real options.

At $200 a ton, we know today unambiguously that pulling CO2 out of the air is cheaper than trying to make a zero-carbon airplane, by a lot. So it becomes an option that you use to go after carbon in the hard-to-scrub parts of the economy.

Is it ever going to work as a business, or is it always going to be kind of a public-supported enterprise to buy ourselves out of climate catastrophes?

Direct air capture is not competitive today broadly, but there are places where the value proposition is real. So let me give you a couple of examples.

In many parts of the world there are no sources of CO2. If you’re running a Pepsi or a Coca-Cola plant in Sri Lanka, you literally burn diesel fuel and capture the CO2 from it to put into your cola, at a bonkers price. It can cost $300 to $800 a ton to get that CO2. So there are already going to be places in some people’s supply chain where direct air capture could be cheaper.

We talk to companies like Goodyear, Firestone, or Michelin. They make tires, and right now the way that they get their carbon black [a material used in tire production that’s derived from fossil fuel] is basically you pyrolyze bunker fuel in the Gulf Coast, which is a horrible, environmentally destructive process. And then you ship it by rail cars to wherever they’re making the tires.

If they can decouple from that market by gathering CO2 wherever they are and turn that into carbon black, they can actually avoid market shocks. So even if it costs a little more, the value to that company might be high enough to bring it into the market. That’s where I see direct air actually gaining real traction in the next few years.

It’s not going to be enough for climate. We know that we will have to do carbon storage, for sure, if we want to really manage the atmospheric emissions. But there’s a lot of ground to chase this, and we never know quite where technology goes.

In one of your earlier Medium posts you said that we’re ultimately going to have to pull 10 billion tons of CO2 out of the atmosphere every year. Climeworks is doing about 50 [at their pilot plant in Iceland]. So what does that scale-up look like?

You don’t have to get all 10 billion tons with direct air capture. So let’s say you just want one billion.

Right now, Royal Dutch Shell as a company moves 300 million tons of refined product every year. This means that to handle a billion tons of CO2, you would need three to four companies the size of Royal Dutch Shell pulling it out of the atmosphere.

The good news is we don’t need that billion tons today. We have 10 or 20 or 30 years to get to a billion tons of direct air capture. But in fact we’ve seen that kind of scaling in other kinds of clean-tech markets. There’s nothing in the laws of physics or chemistry that stops that.