Smoke and steam billow from Belchatow Power Station, Europe’s largest coal-fired power plant, near Belchatow, Poland, on November 28, 2018. Inventors claim a new carbon capture “battery” could be retrofitted not only for industrial plants but also for mobile sources of CO2 emissions like cars and airplanes. Photo by REUTERS/Kacper Pempel
Renewable energy alone is not enough to turn the tide of the climate crisis. Despite the rapid expansion of wind, solar and other clean energy technologies, human behavior and consumption are flooding our skies with too much carbon, and simply supplanting fossil fuels won’t stop global warming.
That’s why carbon capture technology is slowly being integrated into energy and industrial facilities across the globe. Typically set up to collect carbon from an exhaust stream, this technology sops up greenhouse gases before they spread into Earth’s airways.
But those industrial practices work because these factories produce gas pollutants like carbon dioxide and methane at high concentrations. Carbon capture can’t draw CO2 from regular open air, where the concentration of this prominent pollutant is too diffuse.
Moreover, the energy sector’s transition toward decarbonization is moving too slowly. It will take years — likely decades — before the world’s hundreds of CO2-emitting industrial plants adopt capture technology.
Humans have pumped about 2,000 gigatonnes — billions of metric tons — of carbon dioxide into the air since industrialization, and there will be more.
But what if you could have a personal-sized carbon capture machine on your car, commercial airplane or solar-powered home?
Chemical engineers at the Massachusetts Institute of Technology have created a new device that can remove carbon dioxide from the air at any concentration.
Published in October in the journal Energy & Environmental Science, the project is the latest bid to directly capture CO2 emissions and keep them from accelerating and worsening future climate disasters.
Think of the invention as a quasi-battery, in terms of its shape, its construction and how it works to collect carbon dioxide. You pump electricity into the battery, and while the device stores this charge, a chemical reaction occurs that absorbs CO2 from the surrounding atmosphere — a process known as direct air capture. The CO2 can be extracted by discharging the battery, releasing the gas, so the CO2 then can be pumped into the ground. The researchers describe this back-and-forth as electroswing adsorption.
“I realized there was a gap in the spectrum of solutions,” said Sahag Voskian, who co-led the project with fellow MIT chemical engineer T. Alan Hatton. “Many current systems, for instance, are very bulky and can only be used for large-scale power plants or industrial applications.”
Relative to current technology, this electroswing adsorber could be retrofitted onto smaller, mobile sources of emissions like autos and planes, the study states.
Voskian also pictures the battery being scaled to plug into power plants powered by renewables, such as wind farms and solar fields, which at times generate more electricity than the grid can use or store. Rather than lose this power, these renewable plants could set up a side hustle where their excess energy is used to capture carbon.
“That’s one of the nice aspects of this technology — is that direct linkage with renewables,” said Jennifer Wilcox, a chemical engineer at Worcester Polytechnic Institute, who was not involved in the study.
The advantage of an electricity-based system for carbon capture is that it scales linearly. If you need 10 times more capacity, you simply build 10 times more of these “electroswing batteries” and stack them, Voskian said.
He estimates that if you cover a football field with these devices in stacks that are tens of feet high, they could remove about 200,000 to 400,000 metric tons of CO2 a year. Build another 100,000 of these fields, and they could bring carbon dioxide in the atmosphere back to preindustrial levels within 40 years.
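Those estimates are easy to sanity-check. Here is a rough sketch using only the per-field figures quoted above and the 2,000-gigatonne cumulative total mentioned earlier; it ignores real-world effects such as ocean uptake of CO2:

```python
# Figures from the article: each football-field-sized installation
# removes roughly 200,000-400,000 metric tons of CO2 per year, and
# humans have emitted about 2,000 gigatonnes since industrialization.

FIELDS = 100_000
TONS_PER_FIELD_LOW = 200_000    # metric tons CO2 per year, low end
TONS_PER_FIELD_HIGH = 400_000   # metric tons CO2 per year, high end
EXCESS_CO2_TONS = 2_000e9       # 2,000 gigatonnes in metric tons

annual_low = FIELDS * TONS_PER_FIELD_LOW
annual_high = FIELDS * TONS_PER_FIELD_HIGH

years_fastest = EXCESS_CO2_TONS / annual_high
years_slowest = EXCESS_CO2_TONS / annual_low

print(f"Annual removal: {annual_low:.1e} to {annual_high:.1e} tons/yr")
print(f"Years to remove 2,000 Gt: {years_fastest:.0f} to {years_slowest:.0f}")
```

The straight division gives 50 to 100 years; the 40-year figure for returning to preindustrial levels is lower, consistent with only part of the emitted CO2 remaining in the atmosphere.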
One hundred thousand installations sounds like a lot, but keep in mind that these devices can be built to any size and run off the excess electricity created by renewables like wind and solar, which at the moment cannot be easily stored. Imagine turning the more than 2 million U.S. homes with rooftop solar into mini-carbon capture plants.
On paper, this invention sounds like a game changer. But it has a number of feasibility hurdles to surmount before it leaves the laboratory.
How the electroswing battery works
The idea of using electricity to trigger a chemical reaction — electrochemistry — as a means for capturing carbon dioxide isn’t new. It has been around for nearly 25 years, in fact.
But Voskian and Hatton have now added two special materials into the equation: quinone and carbon nanotubes.
A carbon nanotube is a human-made, nanometer-scale cylinder: a sheet of carbon atoms spread into a single layer and rolled up like a tube. Aside from being more than 100 times stronger than stainless steel or titanium, carbon nanotubes are excellent conductors of electricity, making them sturdy building blocks for electrified equipment.
Much like a regular battery, Voskian and Hatton’s device has a positive electrode and a negative electrode — “plus” and “minus” sides. But the minus side — the negative electrode — is infused with quinone, a chemical that, after being electrically charged, reacts and sticks to CO2.
“You can think of it like the charge and discharge of a battery,” Voskian said. “When you charge the battery, you have carbon capture. When you discharge it, you release the carbon that you captured.”
Their approach is unique because all the energy required for their direct air capture comes from electricity. The three major startups in this emerging space — Climeworks, Global Thermostat and Carbon Engineering — rely on a mixture of electric and thermal (heat) energy, Wilcox said, with thermal energy being the dominant factor.
For power plants and industrial facilities, that excess heat (or waste heat, a byproduct of their everyday work) isn’t a perfect fit for carbon capture. Waste heat isn’t very consistent. Imagine standing next to a fire: its warmth changes as the flames flit about.
In Voskian’s operation, “We don’t have any of that. We have full control over the energetics of our process,” he said.
Will it work?
Voskian and Hatton, who have launched a startup called Verdox, write in their study that operating electroswing carbon capture would cost between $50 and $100 per metric ton of CO2.
“If it’s true, that’s a great breakthrough,” said Richard Newell, president and CEO of Resources for the Future, a nonprofit research organization that develops energy and environmental policy on carbon capture. But, he cautioned, “the distance between showing something in the laboratory and then demonstrating it at a commercial scale is very big.”
In this diagram of the new system, air entering from top right passes to one of two chambers (the gray rectangular structures) containing battery electrodes that attract the carbon dioxide. Then the airflow is switched to the other chamber, while the accumulated carbon dioxide in the first chamber is flushed into a separate storage tank (at right). These alternating flows allow for continuous operation of the two-step process. Image courtesy of the researchers
The process could work on the gas at any concentration, from power plant emissions to open air
A new way of removing carbon dioxide from a stream of air could provide a significant tool in the battle against climate change. The new system can work on the gas at virtually any concentration level, even down to the roughly 400 parts per million currently found in the atmosphere.
Most methods of removing carbon dioxide from a stream of gas require higher concentrations, such as those found in the flue emissions from fossil fuel-based power plants. A few variations have been developed that can work with the low concentrations found in air, but the new method is significantly less energy-intensive and expensive, the researchers say.
The technique, based on passing air through a stack of charged electrochemical plates, is described in a new paper in the journal Energy and Environmental Science, by MIT postdoc Sahag Voskian, who developed the work during his PhD, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.
The device is essentially a large, specialized battery that absorbs carbon dioxide from the air (or other gas stream) passing over its electrodes as it is being charged up, and then releases the gas as it is being discharged. In operation, the device would simply alternate between charging and discharging, with fresh air or feed gas being blown through the system during the charging cycle, and then the pure, concentrated carbon dioxide being blown out during the discharging.
As the battery charges, an electrochemical reaction takes place at the surface of each of a stack of electrodes. These are coated with a compound called poly-anthraquinone, which is composited with carbon nanotubes. The electrodes have a natural affinity for carbon dioxide and readily react with its molecules in the airstream or feed gas, even when it is present at very low concentrations. The reverse reaction takes place when the battery is discharged — during which the device can provide part of the power needed for the whole system — and in the process ejects a stream of pure carbon dioxide. The whole system operates at room temperature and normal air pressure.
“The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent’s affinity to carbon dioxide,” explains Voskian. In other words, the electrode material, by its nature, “has either a high affinity or no affinity whatsoever,” depending on the battery’s state of charging or discharging. Other reactions used for carbon capture require intermediate chemical processing steps or the input of significant energy such as heat, or pressure differences.
“This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO2,” Voskian says. That is, as any gas flows through the stack of these flat electrochemical cells, during the release step the captured carbon dioxide will be carried along with it. For example, if the desired end-product is pure carbon dioxide to be used in the carbonation of beverages, then a stream of the pure gas can be blown through the plates. The captured gas is then released from the plates and joins the stream.
In some soft-drink bottling plants, fossil fuel is burned to generate the carbon dioxide needed to give the drinks their fizz. Similarly, some farmers burn natural gas to produce carbon dioxide to feed their plants in greenhouses. The new system could eliminate that need for fossil fuels in these applications, and in the process actually be taking the greenhouse gas right out of the air, Voskian says. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.
The process this system uses for capturing and releasing carbon dioxide “is revolutionary,” he says. “All of this is at ambient conditions — there’s no need for thermal, pressure, or chemical input. It’s just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity.”
“In my laboratories, we have been striving to develop new technologies to tackle a range of environmental issues that avoid the need for thermal energy sources, changes in system pressure, or addition of chemicals to complete the separation and release cycles,” Hatton says. “This carbon dioxide capture technology is a clear demonstration of the power of electrochemical approaches that require only small swings in voltage to drive the separations.”
In a working plant — for example, in a power plant where exhaust gas is being produced continuously — two sets of such stacks of the electrochemical cells could be set up side by side to operate in parallel, with flue gas being directed first at one set for carbon capture, then diverted to the second set while the first set goes into its discharge cycle. By alternating back and forth, the system could always be both capturing and discharging the gas. In the lab, the team has proven the system can withstand at least 7,000 charging-discharging cycles, with a 30 percent loss in efficiency over that time. The researchers estimate that they can readily improve that to 20,000 to 50,000 cycles.
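Those durability numbers imply a very small loss per cycle. A quick sketch, assuming (an assumption, not something the researchers state) that the loss compounds geometrically across cycles:

```python
# Reported durability: a 30 percent efficiency loss over 7,000
# charge-discharge cycles. Assuming the loss compounds geometrically,
# the implied average retention per cycle is 0.7 ** (1/7000).

CYCLES_TESTED = 7_000
CAPACITY_REMAINING = 0.70

per_cycle_retention = CAPACITY_REMAINING ** (1 / CYCLES_TESTED)
per_cycle_loss_ppm = (1 - per_cycle_retention) * 1e6

print(f"Implied per-cycle retention: {per_cycle_retention:.6f}")
print(f"Implied per-cycle loss: {per_cycle_loss_ppm:.0f} parts per million")

# At the same loss rate, the researchers' 20,000- and 50,000-cycle
# targets would leave far less capacity, so the improvement has to
# come from a lower per-cycle loss, not just longer operation.
for target_cycles in (20_000, 50_000):
    remaining = per_cycle_retention ** target_cycles
    print(f"After {target_cycles} cycles at this rate: {remaining:.0%} remains")
```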
The electrodes themselves can be manufactured by standard chemical processing methods. While today this is done in a laboratory setting, it can be adapted so that ultimately they could be made in large quantities through a roll-to-roll manufacturing process similar to a newspaper printing press, Voskian says. “We have developed very cost-effective techniques,” he says, estimating that it could be produced for something like tens of dollars per square meter of electrode.
Compared to other existing carbon capture technologies, this system is quite energy efficient, consistently using about one gigajoule of energy per ton of carbon dioxide captured. Other existing methods have energy consumption that varies from 1 to 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.
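For a sense of scale, those energy figures can be converted into everyday units; this sketch uses only the gigajoule-per-ton numbers quoted above:

```python
# Converting the reported energy costs into kilowatt-hours.
# 1 gigajoule = 1e9 joules; 1 kWh = 3.6e6 joules.

KWH_PER_GJ = 1e9 / 3.6e6  # about 277.8 kWh per gigajoule

electroswing_kwh_per_ton = 1 * KWH_PER_GJ
others_low_kwh_per_ton = 1 * KWH_PER_GJ
others_high_kwh_per_ton = 10 * KWH_PER_GJ

print(f"Electroswing: ~{electroswing_kwh_per_ton:.0f} kWh per ton of CO2")
print(f"Other methods: ~{others_low_kwh_per_ton:.0f} to "
      f"{others_high_kwh_per_ton:.0f} kWh per ton")
```

At roughly 278 kWh per ton, capturing one ton of CO2 takes about as much electricity as a typical U.S. household uses in a week and a half.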
The researchers have set up a company called Verdox to commercialize the process, and hope to develop a pilot-scale plant within the next few years, he says. And the system is very easy to scale up, he says: “If you want more capacity, you just need to make more electrodes.”
This work was supported by an MIT Energy Initiative Seed Fund grant and by Eni S.p.A.
In a demonstration of the basic chemical reactions used in the new process, electrolysis takes place in neutral water. Dyes show how acid (pink) and base (purple) are produced at the positive and negative electrodes. A variation of this process can be used to convert calcium carbonate (CaCO3) into calcium hydroxide (Ca(OH)2), which can then be used to make Portland cement without producing any greenhouse gas emissions. Cement production currently causes 8 percent of global carbon emissions. Image: Felice Frankel
MIT researchers find a way to eliminate carbon emissions from cement production — a major global source of greenhouse gases.
It’s well known that the production of cement — the world’s leading construction material — is a major source of greenhouse gas emissions, accounting for about 8 percent of all such releases. If cement production were a country, it would be the world’s third-largest emitter.
A team of researchers at MIT has come up with a new way of manufacturing the material that could eliminate these emissions altogether, and could even make some other useful products in the process.
The findings are being reported today in the journal PNAS in a paper by Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering at MIT, with postdoc Leah Ellis, graduate student Andres Badel, and others.
“About 1 kilogram of carbon dioxide is released for every kilogram of cement made today,” Chiang says. That adds up to 3 to 4 gigatons (billions of tons) of cement, and of carbon dioxide emissions, produced annually today, and that amount is projected to grow. The number of buildings worldwide is expected to double by 2060, which is equivalent to “building one new New York City every 30 days,” he says. And the commodity is now very cheap to produce: It costs only about 13 cents per kilogram, which he says makes it cheaper than bottled water.
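Chiang’s figures can be tied together with simple arithmetic; a sketch using only the numbers quoted above:

```python
# Tying together the quoted cement figures: ~1 kg CO2 per kg of
# cement, 3-4 gigatons produced per year, at about $0.13 per kilogram.

CO2_PER_KG_CEMENT = 1.0   # kg CO2 per kg cement (article figure)
PRICE_PER_KG_USD = 0.13

for cement_gt in (3, 4):
    cement_kg = cement_gt * 1e12          # 1 gigaton = 1e12 kg
    co2_gt = cement_kg * CO2_PER_KG_CEMENT / 1e12
    market_usd = cement_kg * PRICE_PER_KG_USD
    print(f"{cement_gt} Gt cement: ~{co2_gt:.0f} Gt CO2, "
          f"~${market_usd / 1e9:.0f} billion market")
```

The rough annual market of $400-500 billion underscores why any replacement process has to stay cheap to compete.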
So it’s a real challenge to find ways of reducing the material’s carbon emissions without making it too expensive. Chiang and his team have spent the last year searching for alternative approaches, and hit on the idea of using an electrochemical process to replace the current fossil-fuel-dependent system.
Ordinary Portland cement, the most widely used standard variety, is made by grinding up limestone and then cooking it with sand and clay at high heat, which is produced by burning coal. The process produces carbon dioxide in two different ways: from the burning of the coal, and from gases released from the limestone during the heating. Each of these produces roughly equal contributions to the total emissions. The new process would eliminate or drastically reduce both sources, Chiang says. Though they have demonstrated the basic electrochemical process in the lab, the process will require more work to scale up to industrial scale.
First of all, the new approach could eliminate the use of fossil fuels for the heating process, substituting electricity generated from clean, renewable sources. “In many geographies renewable electricity is the lowest-cost electricity we have today, and its cost is still dropping,” Chiang says. In addition, the new process produces the same cement product. The team realized that trying to gain acceptance for a new type of cement — something that many research groups have pursued in different ways — would be an uphill battle, considering how widely used the material is around the world and how reluctant builders can be to try new, relatively untested materials.
The new process centers on the use of an electrolyzer, something that many people have encountered as part of high school chemistry classes, where a battery is hooked up to two electrodes in a glass of water, producing bubbles of oxygen from one electrode and bubbles of hydrogen from the other as the electricity splits the water molecules into their constituent atoms. Importantly, the electrolyzer’s oxygen-evolving electrode produces acid, while the hydrogen-evolving electrode produces a base.
In the new process, the pulverized limestone is dissolved in the acid at one electrode and high-purity carbon dioxide is released, while calcium hydroxide, generally known as lime, precipitates out as a solid at the other. The calcium hydroxide can then be processed in another step to produce the cement, which is mostly calcium silicate.
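The chemistry described above implies a net decarbonation reaction of CaCO3 + H2O → Ca(OH)2 + CO2, with the electrolyzer supplying the acid and base (the H2 and O2 from water splitting are accounted for separately). A sketch of the mass balance, using standard molar masses:

```python
# Net decarbonation step implied by the process described above:
#   CaCO3 + H2O -> Ca(OH)2 + CO2
# The electrolyzer's acid dissolves the limestone and its base
# precipitates the lime; H2 and O2 from water splitting are separate.

MOLAR_MASS = {  # g/mol, standard values
    "CaCO3": 100.09,
    "H2O": 18.02,
    "Ca(OH)2": 74.09,
    "CO2": 44.01,
}

reactant_mass = MOLAR_MASS["CaCO3"] + MOLAR_MASS["H2O"]
product_mass = MOLAR_MASS["Ca(OH)2"] + MOLAR_MASS["CO2"]
assert abs(reactant_mass - product_mass) < 0.05  # balanced to rounding

# Yields per kilogram of limestone processed:
lime_per_kg = MOLAR_MASS["Ca(OH)2"] / MOLAR_MASS["CaCO3"]
co2_per_kg = MOLAR_MASS["CO2"] / MOLAR_MASS["CaCO3"]
print(f"1 kg CaCO3 -> {lime_per_kg:.2f} kg Ca(OH)2 + {co2_per_kg:.2f} kg CO2")
```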
The carbon dioxide, in the form of a pure, concentrated stream, can then be easily sequestered, harnessed to produce value-added products such as a liquid fuel to replace gasoline, or used for applications such as oil recovery or even in carbonated beverages and dry ice. The result is that no carbon dioxide is released to the environment from the entire process, Chiang says. By contrast, the carbon dioxide emitted from conventional cement plants is highly contaminated with nitrogen oxides, sulfur oxides, carbon monoxide and other material that make it impractical to “scrub” to make the carbon dioxide usable.
Calculations show that the hydrogen and oxygen also emitted in the process could be recombined, for example in a fuel cell, or burned to produce enough energy to fuel the whole rest of the process, Ellis says, producing nothing but water vapor.
In their laboratory demonstration, the team carried out the key electrochemical steps required, producing lime from the calcium carbonate, but on a small scale. The process looks a bit like shaking a snow-globe, as it produces a flurry of suspended white particles inside the glass container as the lime precipitates out of the solution.
While the technology is simple and could, in principle, be easily scaled up, a typical cement plant today produces about 700,000 tons of the material per year. “How do you penetrate an industry like that and get a foot in the door?” asks Ellis, the paper’s lead author. One approach, she says, is to try to replace just one part of the process at a time, rather than the whole system at once, and “in a stepwise fashion” gradually add other parts.
The initial proposed system the team came up with is “not because we necessarily think we have the exact strategy” for the best possible approach, Chiang says, “but to get people in the electrochemical sector to start thinking more about this,” and come up with new ideas. “It’s an important first step, but not yet a fully developed solution.”
The research was partly supported by the Skolkovo Institute of Science and Technology.
MIT researchers have developed a new model of gene control, in which the cellular machinery that transcribes DNA into RNA forms specialized droplets called condensates.
Image: Steven H. Lee
Along the genome, proteins form liquid-like droplets that appear to boost the expression of particular genes.
In recent years, MIT scientists have developed a new model for how key genes are controlled that suggests the cellular machinery that transcribes DNA into RNA forms specialized droplets called condensates. These droplets occur only at certain sites on the genome, helping to determine which genes are expressed in different types of cells.
In a new study that supports that model, researchers at MIT and the Whitehead Institute for Biomedical Research have discovered physical interactions, both among proteins and between proteins and DNA, that help explain why these droplets, which stimulate the transcription of nearby genes, tend to cluster along specific stretches of DNA known as super enhancers. These enhancer regions do not encode proteins but instead regulate other genes.
“This study provides a fundamentally important new approach to deciphering how the ‘dark matter’ in our genome functions in gene control,” says Richard Young, an MIT professor of biology and member of the Whitehead Institute.
Young is one of the senior authors of the paper, along with Phillip Sharp, an MIT Institute Professor and member of MIT’s Koch Institute for Integrative Cancer Research; and Arup K. Chakraborty, the Robert T. Haslam Professor in Chemical Engineering, a professor of physics and chemistry, and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MGH, MIT, and Harvard.
Graduate student Krishna Shrinivas and postdoc Benjamin Sabari are the lead authors of the paper, which appears in Molecular Cell on Aug. 8.
“A biochemical factory”
Every cell in an organism has an identical genome, but cells such as neurons or heart cells express different subsets of those genes, allowing them to carry out their specialized functions. Previous research has shown that many of these genes are located near super enhancers, which bind to proteins called transcription factors that stimulate the copying of nearby genes into RNA.
About three years ago, Sharp, Young, and Chakraborty joined forces to try to model the interactions that occur at enhancers.
In a 2017 Cell paper, based on computational studies, they hypothesized that in these regions, transcription factors form droplets called phase-separated condensates. Similar to droplets of oil suspended in salad dressing, these condensates are collections of molecules that form distinct cellular compartments but have no membrane separating them from the rest of the cell.
In a 2018 Science paper, the researchers showed that these dynamic droplets do form at super enhancer locations. Made of clusters of transcription factors and other molecules, these droplets attract enzymes such as RNA polymerases that are needed to copy DNA into messenger RNA, keeping gene transcription active at specific sites.
“We had demonstrated that the transcription machinery forms liquid-like droplets at certain regulatory regions on our genome, however we didn’t fully understand how or why these dewdrops of biological molecules only seemed to condense around specific points on our genome,” Shrinivas says.
As one possible explanation for that site specificity, the research team hypothesized that weak interactions between intrinsically disordered regions of transcription factors and other transcriptional molecules, along with specific interactions between transcription factors and particular DNA elements, might determine whether a condensate forms at a particular stretch of DNA. Biologists have traditionally focused on “lock-and-key” style interactions between rigidly structured protein segments to explain most cellular processes, but more recent evidence suggests that weak interactions between floppy protein regions also play an important role in cell activities.
In this study, computational modeling and experimentation revealed that the cumulative force of these weak interactions conspire together with transcription factor-DNA interactions to determine whether a condensate of transcription factors will form at a particular site on the genome. Different cell types produce different transcription factors, which bind to different enhancers. When many transcription factors cluster around the same enhancers, weak interactions between the proteins are more likely to occur. Once a critical threshold concentration is reached, condensates form.
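A deliberately simplified toy model can illustrate the threshold behavior described here. This is not the researchers’ computational model; the functional form and every parameter are illustrative assumptions:

```python
# Toy model of condensate formation at a genome site: specific
# transcription-factor/DNA binding contributes per bound factor, while
# weak protein-protein contacts scale roughly with the number of pairs
# among locally clustered factors. All numbers are illustrative.

def condensate_forms(bound_factors: int,
                     specific_affinity: float,
                     weak_interaction: float = 0.1,
                     threshold: float = 2.0) -> bool:
    """Return True if cumulative interaction strength crosses threshold."""
    pairs = bound_factors * (bound_factors - 1) / 2
    total = specific_affinity * bound_factors + weak_interaction * pairs
    return total >= threshold

# A super enhancer clusters many factors; a typical site does not.
assert condensate_forms(bound_factors=8, specific_affinity=0.1)
assert not condensate_forms(bound_factors=2, specific_affinity=0.1)
```

The pairwise term grows quadratically with the number of clustered factors, which is one way to capture the sharp, switch-like onset once a critical concentration is reached.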
“Creating these local high concentrations within the crowded environment of the cell enables the right material to be in the right place at the right time to carry out the multiple steps required to activate a gene,” Sabari says. “Our current study begins to tease apart how certain regions of the genome are capable of pulling off this trick.”
These droplets form on a timescale of seconds to minutes, and they blink in and out of existence depending on a cell’s needs.
“It’s an on-demand biochemical factory that cells can form and dissolve, as and when they need it,” Chakraborty says. “When certain signals happen at the right locus on a gene, the condensates form, which concentrates all of the transcription molecules. Transcription happens, and when the cells are done with that task, they get rid of them.”
“A functional condensate has to be more than the sum of its parts, and how the protein and DNA components work together is something we don’t fully understand,” says Rohit Pappu, director of the Center for Science and Engineering of Living Systems at Washington University, who was not involved in the research. “This work gets us on the road to thinking about the interplay among protein-protein, protein-DNA, and possibly DNA-DNA interactions as determinants of the outputs of condensates.”
A new view
Weak cooperative interactions between proteins may also play an important role in evolution, the researchers proposed in a 2018 Proceedings of the National Academy of Sciences paper.
The sequences of intrinsically disordered regions of transcription factors need to change only a little to evolve new types of specific functionality. In contrast, evolving new specific functions via “lock-and-key” interactions requires much more significant changes.
“If you think about how biological systems have evolved, they have been able to respond to different conditions without creating new genes. We don’t have any more genes than a fruit fly, yet we’re much more complex in many of our functions,” Sharp says. “The incremental expanding and contracting of these intrinsically disordered domains could explain a large part of how that evolution happens.”
Similar condensates appear to play a variety of other roles in biological systems, offering a new way to look at how the interior of a cell is organized.
Instead of floating through the cytoplasm and randomly bumping into other molecules, proteins involved in processes such as relaying molecular signals may transiently form droplets that help them interact with the right partners.
“This is a very exciting turn in the field of cell biology,” Sharp says. “It is a whole new way of looking at biological systems that is richer and more meaningful.”
Some of the MIT researchers, led by Young, have helped form a company called Dewpoint Therapeutics to develop potential treatments for a wide variety of diseases by exploiting cellular condensates.
There is emerging evidence that cancer cells use condensates to control sets of genes that promote cancer, and condensates have also been linked to neurodegenerative disorders such as amyotrophic lateral sclerosis (ALS) and Huntington’s disease.
The research was funded by the National Science Foundation, the National Institutes of Health, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Large anions with long tails (blue) in ionic liquids can make them self-assemble into sandwich-like bilayer structures on electrode surfaces. Ionic liquids with such structures have much improved energy storage capabilities.
Image: Xianwen Mao, MIT
Novel class of “ionic liquids” may store more energy than conventional electrolytes — with less risk of catching fire.
Supercapacitors, electrical devices that store and release energy, need a layer of electrolyte — an electrically conductive material that can be solid, liquid, or somewhere in between. Now, researchers at MIT and several other institutions have developed a novel class of liquids that may open up new possibilities for improving the efficiency and stability of such devices while reducing their flammability.
“This proof-of-concept work represents a new paradigm for electrochemical energy storage,” the researchers say in their paper describing the finding, which appears today in the journal Nature Materials.
For decades, researchers have been aware of a class of materials known as ionic liquids — essentially, liquid salts — but this team has now added to these liquids a compound that is similar to a surfactant, like those used to disperse oil spills. With the addition of this material, the ionic liquids “have very new and strange properties,” including becoming highly viscous, says MIT postdoc Xianwen Mao PhD ’14, the lead author of the paper.
“It’s hard to imagine that this viscous liquid could be used for energy storage,” Mao says, “but what we find is that once we raise the temperature, it can store more energy, and more than many other electrolytes.”
That’s not entirely surprising, he says, since with other ionic liquids, as temperature increases, “the viscosity decreases and the energy-storage capacity increases.”
But in this case, although the viscosity stays higher than that of other known electrolytes, the capacity increases very quickly with increasing temperature. That ends up giving the material an overall energy density — a measure of its ability to store electricity in a given volume — that exceeds those of many conventional electrolytes, and with greater stability and safety.
The key to its effectiveness is the way the molecules within the liquid automatically line themselves up, ending up in a layered configuration on the metal electrode surface. The molecules, which have a kind of tail on one end, line up with the heads facing outward toward the electrode or away from it, and the tails all cluster in the middle, forming a kind of sandwich. This is described as a self-assembled nanostructure.
“The reason why it’s behaving so differently” from conventional electrolytes is because of the way the molecules intrinsically assemble themselves into an ordered, layered structure where they come in contact with another material, such as the electrode inside a supercapacitor, says T. Alan Hatton, a professor of chemical engineering at MIT and the paper’s senior author. “It forms a very interesting, sandwich-like, double-layer structure.”
This highly ordered structure helps to prevent a phenomenon called “overscreening” that can occur with other ionic liquids, in which the first layer of ions (electrically charged atoms or molecules) that collect on an electrode surface contains more ions than there are corresponding charges on the surface.
This can cause a more scattered distribution of ions, or a thicker ion multilayer, and thus a loss of efficiency in energy storage; “whereas with our case, because of the way everything is structured, charges are concentrated within the surface layer,” Hatton says.
The new class of materials, which the researchers call SAILs, for surface-active ionic liquids, could have a variety of applications for high-temperature energy storage, for example for use in hot environments such as in oil drilling or in chemical plants, according to Mao. “Our electrolyte is very safe at high temperatures, and even performs better,” he says. In contrast, some electrolytes used in lithium-ion batteries are quite flammable.
The material could help to improve performance of supercapacitors, Mao says. Such devices can be used to store electrical charge and are sometimes used to supplement battery systems in electric vehicles to provide an extra boost of power.
Using the new material instead of a conventional electrolyte in a supercapacitor could increase its energy density by a factor of four or five, Mao says. Using the new electrolyte, future supercapacitors may even be able to store more energy than batteries, he says, potentially even replacing batteries in applications such as electric vehicles, personal electronics, or grid-level energy storage facilities.
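To put that four-to-fivefold jump in perspective, a back-of-the-envelope comparison can use the standard capacitor energy formula E = ½CV². The cell parameters below (100 F, 2.7 V, 0.05 L) are invented for illustration and do not come from the paper; only the 4–5x multiplier is taken from the researchers' estimate.

```python
# Back-of-the-envelope sketch using the capacitor energy formula
# E = 1/2 * C * V^2. The cell parameters are illustrative assumptions.

def energy_density_wh_per_l(capacitance_f, voltage_v, volume_l):
    """Volumetric energy density of a capacitor, in watt-hours per liter."""
    energy_j = 0.5 * capacitance_f * voltage_v ** 2
    return energy_j / 3600 / volume_l

# Hypothetical conventional-electrolyte cell: 100 F at 2.7 V in 0.05 L
conventional = energy_density_wh_per_l(100, 2.7, 0.05)

# The 4-5x improvement the researchers estimate would scale this to:
sail_low, sail_high = 4 * conventional, 5 * conventional

print(f"conventional electrolyte: {conventional:.2f} Wh/L")
print(f"SAIL electrolyte (est.):  {sail_low:.2f}-{sail_high:.2f} Wh/L")
```

Even with these made-up cell numbers, the point holds: the multiplier applies to whatever a given supercapacitor stores today.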
The material could also be useful for a variety of emerging separation processes, Mao says. “A lot of newly developed separation processes require electrical control,” in various chemical processing and refining applications and in carbon dioxide capture, for example, as well as resource recovery from waste streams. These ionic liquids, being highly conductive, could be well-suited to many such applications, he says.
The material they initially developed is just an example of a variety of possible SAIL compounds. “The possibilities are almost unlimited,” Mao says. The team will continue to work on different variations and on optimizing its parameters for particular uses. “It might take a few months or years,” he says, “but working on a new class of materials is very exciting to do. There are many possibilities for further optimization.”
The research team included Paul Brown, Yinying Ren, Agilio Padua, and Margarida Costa Gomes at MIT; Ctirad Cervinka at École Normale Supérieure de Lyon, in France; Gavin Hazell and Julian Eastoe at the University of Bristol, in the U.K.; Hua Li and Rob Atkin at the University of Western Australia; and Isabelle Grillo at the Institut Max-von-Laue-Paul-Langevin in Grenoble, France. The researchers dedicate their paper to the memory of Grillo, who recently passed away.
“It is a very exciting result that surface-active ionic liquids (SAILs) with amphiphilic structures can self-assemble on electrode surfaces and enhance charge storage performance at electrified surfaces,” says Yi Cui, a professor of materials science and engineering at Stanford University, who was not associated with this research. “The authors have studied and understood the mechanism. The work here might have a great impact on the design of high energy density supercapacitors, and could also help improve battery performance,” he says.
Nicholas Abbott, the Tisch University Professor at Cornell University, who also was not involved in this work, says “The paper describes a very clever advance in interfacial charge storage, elegantly demonstrating how knowledge of molecular self-assembly at interfaces can be leveraged to address a contemporary technological challenge.”
The work was supported by the MIT Energy Initiative, an MIT Skoltech fellowship, and the Czech Science Foundation.
MIT engineers have devised a way to incorporate crystallized immunosuppressant drugs into devices carrying encapsulated islet cells, which could allow them to be implanted as a long-term treatment for diabetes.
Crystallized drug prevents immune system rejection of transplanted pancreatic islet cells.
When medical devices are implanted in the body, the immune system often attacks them, producing scar tissue around the device. This buildup of tissue, known as fibrosis, can interfere with the device’s function.
MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. After implantation, the drug is slowly secreted to dampen the immune response in the area immediately surrounding the device.
“We developed a crystallized drug formulation that can target the key players involved in the implant rejection, suppressing them locally and allowing the device to function for more than a year,” says Shady Farah, an MIT and Boston Children’s Hospital postdoc and co-first author of the study, who is soon starting a new position as an assistant professor of the Wolfson Faculty of Chemical Engineering and the Russell Berrie Nanotechnology Institute at Technion-Israel Institute of Technology.
The researchers showed that these crystals could dramatically improve the performance of encapsulated islet cells, which they are developing as a possible treatment for patients with type 1 diabetes. Such crystals could also be applied to a variety of other implantable medical devices, such as pacemakers, stents, or sensors.
Former MIT postdoc Joshua Doloff, now an assistant professor of Biomedical and Materials Science Engineering and member of the Translational Tissue Engineering Center at Johns Hopkins University School of Medicine, is also a lead author of the paper, which appears in the June 24 issue of Nature Materials. Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), is the senior author of the paper.
Anderson’s lab is one of many research groups working on ways to encapsulate islet cells and transplant them into diabetic patients, in hopes that such cells could replace the patients’ nonfunctioning pancreatic cells and eliminate the need for daily insulin injections.
Fibrosis is a major obstacle to this approach, because scar tissue can block the islet cells’ access to oxygen and nutrients. In a 2017 study, Anderson and his colleagues showed that systemic administration of a drug that blocks cell receptors for a protein called CSF-1 can prevent fibrosis by suppressing the immune response to implanted devices. This drug targets immune cells called macrophages, which are the primary cells responsible for initiating the inflammation that leads to fibrosis.
“That work was focused on identifying next-generation drug targets, namely which cell and cytokine players were essential for fibrotic response,” says Doloff, who was the lead author on that study, which also involved Farah. He adds, “After knowing what we had to target to block fibrosis, and screening drug candidates needed to do so, we still had to find a sophisticated way of achieving local delivery and release for as long as possible.”
In the new study, the researchers set out to find a way to load the drug directly into an implantable device, to avoid giving patients drugs that would suppress their entire immune system.
“If you have a small device implanted in your body, you don’t want to have your whole body exposed to drugs that are affecting the immune system, and that’s why we’ve been interested in creating ways to release drugs from the device itself,” Anderson says.
To achieve that, the researchers decided to try crystallizing the drugs and then incorporating them into the device. Crystallization packs the drug molecules very tightly, so the drug-releasing component can be miniaturized. Another advantage is that crystals take a long time to dissolve, enabling long-term drug delivery. Not every drug can be easily crystallized, but the researchers found that the CSF-1 receptor inhibitor they were using can form crystals, and that they could control the size and shape of the crystals, which determines how long the drug takes to break down once in the body.
“We showed that the drugs released very slowly and in a controlled fashion,” says Farah. “We took those crystals and put them in different types of devices and showed that with the help of those crystals, we can allow the medical device to be protected for a long time, allowing the device to keep functioning.”
Encapsulated islet cells
To test whether these drug crystalline formulations could boost the effectiveness of encapsulated islet cells, the researchers incorporated the drug crystals into 0.5-millimeter-diameter spheres of alginate, which they used to encapsulate the cells. When these spheres were transplanted into the abdomen or under the skin of diabetic mice, they remained fibrosis-free for more than a year. During this time, the mice did not need any insulin injections, as the islet cells were able to control their blood sugar levels just as the pancreas normally would.
“In the past three-plus years, our team has published seven papers in Nature journals — this being the seventh — elucidating the mechanisms of biocompatibility,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper. “These include an understanding of the key cells and receptors involved, optimal implant geometries and physical locations in the body, and now, in this paper, specific molecules that can confer biocompatibility. Taken together, we hope these papers will open the door to a new generation of biomedical implants to treat diabetes and other diseases.”
The researchers believe that it should be possible to create crystals that last longer than those they studied in these experiments, by altering the structure and composition of the drug crystals. Such formulations could also be used to prevent fibrosis of other types of implantable devices. In this study, the researchers showed that crystalline drug could be incorporated into PDMS, a polymer frequently used for medical devices, and could also be used to coat components of a glucose sensor and an electrical muscle stimulation device, which include materials such as plastic and metal.
“It wasn’t just useful for our islet cell therapy, but could also be useful to help get a number of different devices to work long-term,” Anderson says.
The research was funded by JDRF, the National Institutes of Health, the Leona M. and Harry B. Helmsley Charitable Trust Foundation, and the Tayebati Family Foundation.
Other authors of the paper include MIT Principal Research Scientist Peter Muller; MIT grad students Atieh Sadraei and Malia McAvoy; MIT research affiliate Hye Jung Han; former MIT postdoc Katy Olafson; MIT technical associate Keval Vyas; former MIT grad student Hok Hei Tam; MIT postdoc Piotr Kowalski; former MIT undergraduates Marissa Griffin and Ashley Meng; Jennifer Hollister-Locke and Gordon Weir of the Joslin Diabetes Center; Adam Graham of Harvard University; James McGarrigle and Jose Oberholzer of the University of Illinois at Chicago; and Dale Greiner of the University of Massachusetts Medical School.
MIT engineers have designed a magnetic microrobot that can help push drug-delivery particles into tumor tissue (left). They also employed swarms of naturally magnetic bacteria to achieve the same effect (right). Image courtesy of the researchers.
Tiny robots powered by magnetic fields could help drug-delivery nanoparticles reach their targets.
MIT engineers have designed tiny robots that can help drug-delivery nanoparticles push their way out of the bloodstream and into a tumor or another disease site. Like crafts in “Fantastic Voyage” — a 1960s science fiction film in which a submarine crew shrinks in size and roams a body to repair damaged cells — the robots swim through the bloodstream, creating a current that drags nanoparticles along with them.
The magnetic microrobots, inspired by bacterial propulsion, could help to overcome one of the biggest obstacles to delivering drugs with nanoparticles: getting the particles to exit blood vessels and accumulate in the right place.
“When you put nanomaterials in the bloodstream and target them to diseased tissue, the biggest barrier to that kind of payload getting into the tissue is the lining of the blood vessel,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science, and the senior author of the study.
“Our idea was to see if you can use magnetism to create fluid forces that push nanoparticles into the tissue,” adds Simone Schuerle, a former MIT postdoc and lead author of the paper, which appears in the April 26 issue of Science Advances.
In the same study, the researchers also showed that they could achieve a similar effect using swarms of living bacteria that are naturally magnetic. Each of these approaches could be suited for different types of drug delivery, the researchers say.
Schuerle, who is now an assistant professor at the Swiss Federal Institute of Technology (ETH Zurich), first began working on tiny magnetic robots as a graduate student in Brad Nelson’s Multi-scale Robotics Lab at ETH Zurich. When she came to Bhatia’s lab as a postdoc in 2014, she began investigating whether this kind of bot could help to make nanoparticle drug delivery more efficient.
In most cases, researchers target their nanoparticles to disease sites that are surrounded by “leaky” blood vessels, such as tumors. This makes it easier for the particles to get into the tissue, but the delivery process is still not as effective as it needs to be.
The MIT team decided to explore whether the forces generated by magnetic robots might offer a better way to push the particles out of the bloodstream and into the target site.
The robots that Schuerle used in this study are 0.35 millimeters long, similar in size to a single cell, and can be controlled by applying an external magnetic field. This bio-inspired robot, which the researchers call an “artificial bacterial flagellum,” consists of a tiny helix that resembles the flagella that many bacteria use to propel themselves. These robots are 3-D-printed with a high-resolution 3-D printer and then coated with nickel, which makes them magnetic.
To test a single robot’s ability to control nearby nanoparticles, the researchers created a microfluidic system that mimics the blood vessels that surround tumors. The channel in their system, between 50 and 200 microns wide, is lined with a gel that has holes to simulate the broken blood vessels seen near tumors.
Using external magnets, the researchers applied magnetic fields to the robot, making the helix rotate and swim through the channel. Because fluid flowed through the channel in the opposite direction, the robot remained stationary and created a convection current, which pushed 200-nanometer polystyrene particles into the model tissue. These particles penetrated twice as far into the tissue as nanoparticles delivered without the aid of the magnetic robot.
This type of system could potentially be incorporated into stents, which are stationary and would be easy to target with an externally applied magnetic field. Such an approach could be useful for delivering drugs to help reduce inflammation at the site of the stent, Bhatia says.
The researchers also developed a variant of this approach that relies on swarms of naturally magnetotactic bacteria instead of microrobots. Bhatia has previously developed bacteria that can be used to deliver cancer-fighting drugs and to diagnose cancer, exploiting bacteria’s natural tendency to accumulate at disease sites.
For this study, the researchers used a type of bacteria called Magnetospirillum magneticum, which naturally produces chains of iron oxide. These magnetic particles, known as magnetosomes, help bacteria orient themselves and find their preferred environments.
The researchers discovered that when they put these bacteria into the microfluidic system and applied rotating magnetic fields in certain orientations, the bacteria began to rotate in synchrony and move in the same direction, pulling along any nanoparticles that were nearby. In this case, the researchers found that nanoparticles were pushed into the model tissue three times faster than when the nanoparticles were delivered without any magnetic assistance.
This bacterial approach could be better suited for drug delivery in situations such as a tumor, where the swarm, controlled externally without the need for visual feedback, could generate fluidic forces in vessels throughout the tumor.
The particles that the researchers used in this study are big enough to carry large payloads, including the components required for the CRISPR genome-editing system, Bhatia says. She now plans to collaborate with Schuerle to further develop both of these magnetic approaches for testing in animal models.
The research was funded by the Swiss National Science Foundation, the Branco Weiss Fellowship, the National Institutes of Health, the National Science Foundation, and the Howard Hughes Medical Institute.
Stronger and more flexible than graphene, a single-atom layer of boron could revolutionize sensors, batteries, and catalytic chemistry.
Not so long ago, graphene was the great new wonder material. A super-strong, atom-thick sheet of carbon “chicken wire,” it can form tubes, balls, and other curious shapes.
And because it conducts electricity, materials scientists raised the prospect of a new era of graphene-based computer processing and a lucrative graphene chip industry to boot. The European Union invested €1 billion to kick-start a graphene industry.
This brave new graphene-based world has yet to materialize. But it has triggered an interest in other two-dimensional materials. And the most exciting of all is borophene: a single layer of boron atoms that form various crystalline structures.
The reason for the excitement is the extraordinary range of applications that borophene looks good for. Electrochemists think borophene could become the anode material in a new generation of more powerful lithium-ion batteries.
Chemists are entranced by its catalytic capabilities. And physicists are testing its abilities as a sensor to detect numerous kinds of atoms and molecules.
Today, Zhi-Qiang Wang at Xiamen University in China and a number of colleagues review the remarkable properties of borophene and the applications they might lead to.
Borophene has a short history. Physicists first predicted its existence in the 1990s using computer simulations to show how boron atoms could form a monolayer.
But this exotic substance wasn’t synthesized until 2015, using chemical vapor deposition. This is a process in which a hot gas of boron atoms condenses onto a cool surface of pure silver.
The regular arrangement of silver atoms forces boron atoms into a similar pattern, each binding to as many as six other atoms to create a flat hexagonal structure. However, a significant proportion of boron atoms bind only with four or five other atoms, and this creates vacancies in the structure. The pattern of vacancies is what gives borophene crystals their unique properties.
Since borophene’s synthesis, chemists have been eagerly characterizing its properties. Borophene turns out to be stronger than graphene, and more flexible. It is a good conductor of both electricity and heat, and it also superconducts. These properties vary depending on the material’s orientation and the arrangement of vacancies. This makes it “tunable,” at least in principle. That’s one reason chemists are so excited.
Borophene is also light and fairly reactive. That makes it a good candidate for storing metal ions in batteries. “Borophene is a promising anode material for Li, Na, and Mg ion batteries due to high theoretical specific capacities, excellent electronic conductivity and outstanding ion transport properties,” say Wang and co.
Hydrogen atoms also stick easily to borophene’s single-layer structure, and this adsorption property, combined with the huge surface area of atomic layers, makes borophene a promising material for hydrogen storage. Theoretical studies suggest borophene could store over 15% of its weight in hydrogen, significantly outperforming other materials.
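A rough sense of what that greater-than-15-percent figure would mean: for every kilogram of borophene, about 150 grams of hydrogen could in theory be stored. The sample mass below is arbitrary, and the ~120 MJ/kg energy figure is the standard lower heating value of hydrogen, not a number from the studies cited.

```python
# Rough arithmetic behind the ">15% of its weight" figure quoted above.
# The 1 kg sample mass is arbitrary; 120 MJ/kg is the standard lower
# heating value of hydrogen (a reference value, not from the studies).

borophene_kg = 1.0
storage_fraction = 0.15                      # theoretical gravimetric capacity
h2_stored_kg = borophene_kg * storage_fraction

H2_LOWER_HEATING_VALUE_MJ_PER_KG = 120
energy_mj = h2_stored_kg * H2_LOWER_HEATING_VALUE_MJ_PER_KG

print(f"{h2_stored_kg:.2f} kg of H2 per kg of borophene "
      f"is roughly {energy_mj:.0f} MJ of chemical energy")
```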
Then there is borophene’s ability to catalyze the breakdown of molecular hydrogen into hydrogen ions, and water into hydrogen and oxygen ions.
“Outstanding catalytic performances of borophene have been found in hydrogen evolution reaction, oxygen reduction reaction, oxygen evolution reaction, and CO2 electroreduction reaction,” say the team. That could usher in a new era of water-based energy cycles.
Nevertheless, chemists have some work to do before borophene can be more widely used. For a start, they have yet to find a way to make borophene in large quantities.
And the material’s reactivity means it is vulnerable to oxidation, so it needs to be carefully protected. Both factors make borophene expensive to make and hard to handle. So there is work ahead.
But chemists have great faith. Borophene may just become the next wonder material to entrance the world.
This is Part II of the “10 Breakthrough Technologies 2019,” reposted from MIT Technology Review, with guest curator Bill Gates. You can read Part I here.
Part I intro from Bill Gates: How We’ll Invent the Future
I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but also capture this moment in technological history—which got me thinking about how innovation has evolved over time.
Why it matters If robots could learn to deal with the messiness of the real world, they could do many more tasks.
Key Players OpenAI
Carnegie Mellon University
University of Michigan
Availability 3-5 years
Robots are teaching themselves to handle the physical world.
For all the talk about machines taking jobs, industrial robots are still clumsy and inflexible. A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored—but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.
But while a robot can’t yet be programmed to figure out how to grasp any object just by looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and error.
One such project is Dactyl, a robot that taught itself to flip a toy building block in its fingers. Dactyl, which comes from the San Francisco nonprofit OpenAI, consists of an off-the-shelf robot hand surrounded by an array of lights and cameras. Using what’s known as reinforcement learning, neural-network software learns how to grasp and turn the block within a simulated environment before the hand tries it out for real. The software experiments, randomly at first, strengthening connections within the network over time as it gets closer to its goal.
It usually isn’t possible to transfer that type of virtual practice to the real world, because things like friction or the varied properties of different materials are so difficult to simulate. The OpenAI team got around this by adding randomness to the virtual training, giving the robot a proxy for the messiness of reality.
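The randomization trick described above can be sketched in a few lines: every simulated training episode draws its physical parameters at random, so a policy that succeeds across episodes must tolerate many plausible realities. The parameter names and ranges here are illustrative assumptions, not OpenAI's actual training configuration.

```python
import random

# Minimal sketch of domain randomization: each simulated episode gets
# freshly sampled physics, standing in for the unmodeled messiness of
# the real world. All names and ranges below are invented examples.

def sample_randomized_physics(rng=random):
    return {
        "friction":     rng.uniform(0.5, 1.5),   # surface friction scale
        "object_mass":  rng.uniform(0.03, 0.30), # kilograms
        "motor_delay":  rng.uniform(0.0, 0.05),  # seconds of actuation lag
        "sensor_noise": rng.uniform(0.0, 0.02),  # observation noise (std dev)
    }

def run_training(num_episodes):
    """Stand-in training loop: one freshly randomized world per episode."""
    return [sample_randomized_physics() for _ in range(num_episodes)]

for world in run_training(3):
    print(world)
```

In a real setup, each sampled world would parameterize a physics simulator, and the reinforcement-learning update would run inside it.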
We’ll need further breakthroughs for robots to master the advanced dexterity needed in a real warehouse or factory. But if researchers can reliably employ this kind of learning, robots might eventually assemble our gadgets, load our dishwashers, and even help Grandma out of bed. —Will Knight
New-wave nuclear power
Advanced fusion and fission reactors are edging closer to reality.
New nuclear designs that have gained momentum in the past year are promising to make this power source safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach. Developers of generation IV fission designs, such as Canada’s Terrestrial Energy and Washington-based TerraPower, have entered into R&D partnerships with utilities, aiming for grid supply (somewhat optimistically, maybe) by the 2020s.
Small modular reactors typically produce in the tens of megawatts of power (for comparison, a traditional nuclear reactor produces around 1,000 MW). Companies like Oregon’s NuScale say the miniaturized reactors can save money and reduce environmental and financial risks.
From sodium-cooled fission to advanced fusion, a fresh generation of projects hopes to rekindle trust in nuclear energy.
There has even been progress on fusion. Though no one expects delivery before 2030, companies like General Fusion and Commonwealth Fusion Systems, an MIT spinout, are making some headway. Many consider fusion a pipe dream, but because the reactors can’t melt down and don’t create long-lived, high-level waste, it should face much less public resistance than conventional nuclear. (Bill Gates is an investor in TerraPower and Commonwealth Fusion Systems.) —Leigh Phillips
Why it matters 15 million babies are born prematurely every year; it’s the leading cause of death for children under age five
Key player Akna Dx
Availability A test could be offered in doctor’s offices within five years
A simple blood test can predict if a pregnant woman is at risk of giving birth prematurely.
Our genetic material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood, often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic acids from the fetus, the placenta, and the mother.
Stephen Quake, a bioengineer at Stanford, has found a way to use that to tackle one of medicine’s most intractable problems: the roughly one in 10 babies born prematurely.
Free-floating DNA and RNA can yield information that previously required invasive ways of grabbing cells, such as taking a biopsy of a tumor or puncturing a pregnant woman’s belly to perform an amniocentesis. What’s changed is that it’s now easier to detect and sequence the small amounts of cell-free genetic material in the blood. In the last few years researchers have begun developing blood tests for cancer (by spotting the telltale DNA from tumor cells) and for prenatal screening of conditions like Down syndrome.
The tests for these conditions rely on looking for genetic mutations in the DNA. RNA, on the other hand, is the molecule that regulates gene expression—how much of a protein is produced from a gene. By sequencing the free-floating RNA in the mother’s blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take measures to stave off an early birth and give the child a better chance of survival.
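The screening logic can be caricatured as checking a small gene panel against reference expression ranges and flagging samples that deviate. Everything below — the gene names, ranges, and threshold — is invented for illustration; the real test uses the seven genes identified by Quake's group and a statistical model fit to cohort data.

```python
# Highly simplified sketch of panel-based expression screening. Gene
# names, reference ranges, and the threshold are all hypothetical.

REFERENCE_RANGES = {
    "GENE_A": (10.0, 20.0),   # typical-pregnancy expression range (invented)
    "GENE_B": (5.0, 9.0),
    "GENE_C": (1.0, 3.0),
}

def risk_score(expression):
    """Count panel genes whose measured expression falls outside range."""
    score = 0
    for gene, (lo, hi) in REFERENCE_RANGES.items():
        value = expression[gene]
        if value < lo or value > hi:
            score += 1
    return score

def flag_preterm_risk(expression, threshold=2):
    return risk_score(expression) >= threshold

sample = {"GENE_A": 25.0, "GENE_B": 4.0, "GENE_C": 2.0}
print(flag_preterm_risk(sample))  # two genes out of range -> True
```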
Complications from preterm birth are the leading cause of death worldwide in children under five.
The technology behind the blood test, Quake says, is quick, easy, and less than $10 a measurement. He and his collaborators have launched a startup, Akna Dx, to commercialize it. —Bonnie Rochman
Gut probe in a pill
Why it matters The device makes it easier to screen for and study gut diseases, including one that keeps millions of children in poor countries from growing properly
Key player Massachusetts General Hospital
Availability Now used in adults; testing in infants begins in 2019
A small, swallowable device captures detailed images of the gut without anesthesia, even in infants and children.
Environmental enteric dysfunction (EED) may be one of the costliest diseases you’ve never heard of. Marked by inflamed intestines that are leaky and absorb nutrients poorly, it’s widespread in poor countries and is one reason why many people there are malnourished, have developmental delays, and never reach a normal height. No one knows exactly what causes EED and how it could be prevented or treated.
Practical screening to detect it would help medical workers know when to intervene and how. Therapies are already available for infants, but diagnosing and studying illnesses in the guts of such young children often requires anesthetizing them and inserting a tube called an endoscope down the throat. It’s expensive, uncomfortable, and not practical in areas of the world where EED is prevalent.
So Guillermo Tearney, a pathologist and engineer at Massachusetts General Hospital (MGH) in Boston, is developing small devices that can be used to inspect the gut for signs of EED and even obtain tissue biopsies. Unlike endoscopes, they are simple to use at a primary care visit.
Tearney’s swallowable capsules contain miniature microscopes. They’re attached to a flexible string-like tether that provides power and light while sending images to a briefcase-like console with a monitor. This lets the health-care worker pause the capsule at points of interest and pull it out when finished, allowing it to be sterilized and reused. (Though it sounds gag-inducing, Tearney’s team has developed a technique that they say doesn’t cause discomfort.) It can also carry technologies that image the entire surface of the digestive tract at the resolution of a single cell or capture three-dimensional cross sections a couple of millimeters deep.
The technology has several applications; at MGH it’s being used to screen for Barrett’s esophagus, a precursor of esophageal cancer. For EED, Tearney’s team has developed an even smaller version for use in infants who can’t swallow a pill. It’s been tested on adolescents in Pakistan, where EED is prevalent, and infant testing is planned for 2019.
The little probe will help researchers answer questions about EED’s development—such as which cells it affects and whether bacteria are involved—and evaluate interventions and potential treatments. —Courtney Humphries
Custom cancer vaccines
Why it matters Conventional chemotherapies take a heavy toll on healthy cells and aren’t always effective against tumors
Key players BioNTech
Availability In human testing
The treatment incites the body’s natural defenses to destroy only cancer cells by identifying mutations unique to each tumor.
Scientists are on the cusp of commercializing the first personalized cancer vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumor by its unique mutations, could effectively shut down many types of cancers.
By using the body’s natural defenses to selectively destroy only tumor cells, the vaccine, unlike conventional chemotherapies, limits damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer cells after the initial treatment.
The possibility of such vaccines began to take shape in 2008, five years after the Human Genome Project was completed, when geneticists published the first sequence of a cancerous tumor cell.
Soon after, investigators began to compare the DNA of tumor cells with that of healthy cells—and other tumor cells. These studies confirmed that all cancer cells contain hundreds if not thousands of specific mutations, most of which are unique to each tumor.
A few years later, a German startup called BioNTech provided compelling evidence that a vaccine containing copies of these mutations could catalyze the body’s immune system to produce T cells primed to seek out, attack, and destroy all cancer cells harboring them.
In December 2017, BioNTech began a large test of the vaccine in cancer patients, in collaboration with the biotech giant Genentech. The ongoing trial is targeting at least 10 solid cancers and aims to enroll upwards of 560 patients at sites around the globe.
The two companies are designing new manufacturing techniques to produce thousands of personally customized vaccines cheaply and quickly. That will be tricky because creating the vaccine involves performing a biopsy on the patient’s tumor, sequencing and analyzing its DNA, and rushing that information to the production site. Once produced, the vaccine needs to be promptly delivered to the hospital; delays could be deadly. —Adam Piore
The cow-free burger
Why it matters Livestock production causes catastrophic deforestation, water pollution, and greenhouse-gas emissions
Key players Beyond Meat
Availability Plant-based now; lab-grown around 2020
Both lab-grown and plant-based alternatives approximate the taste and nutritional value of real meat without the environmental devastation.
The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.
By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.
Depending on the animal, producing a pound of meat protein with Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.
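The quoted multipliers make for a quick back-of-the-envelope calculator. A minimal Python sketch using only the ranges stated above; the `meat_footprint` helper and the 100-gallon baseline are illustrative assumptions, not figures from the article:

```python
# Resource multipliers quoted in the article: producing a pound of meat
# protein takes 4-25x more water, 6-17x more land, and 6-20x more
# fossil fuels than a pound of plant protein.
multipliers = {
    "water": (4, 25),
    "land": (6, 17),
    "fossil fuels": (6, 20),
}

def meat_footprint(plant_footprint: float, resource: str) -> tuple[float, float]:
    """Implied low/high footprint for a pound of meat protein,
    given a plant-protein baseline in the same units."""
    low, high = multipliers[resource]
    return plant_footprint * low, plant_footprint * high

# If a pound of plant protein needed 100 gallons of water, a pound of
# meat protein would need roughly 400 to 2,500 gallons.
print(meat_footprint(100, "water"))  # (400, 2500)
```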
The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.
Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste. Researchers at Maastricht University in the Netherlands, who are working to produce lab-grown meat at scale, believe they’ll have a lab-grown burger available by next year. One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.
The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat.
Beyond Meat has a new 26,000-square-foot (2,400-square-meter) plant in California and has already sold upwards of 25 million burgers from 30,000 stores and restaurants. According to an analysis by the Center for Sustainable Systems at the University of Michigan, a Beyond Meat patty would probably generate 90% less greenhouse-gas emissions than a conventional burger made from a cow. —Markkus Rovito
Carbon dioxide catcher
Why it matters Removing CO2 from the atmosphere might be one of the last viable ways to stop catastrophic climate change
Key players Carbon Engineering
Availability 5-10 years
Practical and affordable ways to capture carbon dioxide from the air can soak up excess greenhouse-gas emissions.
Even if we slow carbon dioxide emissions, the warming effect of the greenhouse gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the atmosphere this century.
In a surprise finding last summer, Harvard climate scientist David Keith calculated that machines could, in theory, pull this off for less than $100 a ton, through an approach known as direct air capture. That’s an order of magnitude cheaper than earlier estimates that led many scientists to dismiss the technology as far too expensive—though it will still take years for costs to fall to anywhere near that level.
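At those figures, the scale of the bill is easy to check. A back-of-the-envelope calculation in Python, taking the UN panel's 1 trillion tons and Keith's $100-a-ton estimate at face value:

```python
# Rough cost of direct air capture at the scale the UN climate panel
# describes: up to 1 trillion tons of CO2 removed this century, priced
# at Keith's estimated floor of $100 per ton.
TONS_TO_REMOVE = 1e12   # 1 trillion tons of CO2
COST_PER_TON = 100      # dollars per ton, Keith's "less than $100" figure

total_cost = TONS_TO_REMOVE * COST_PER_TON
print(f"${total_cost:,.0f}")  # $100,000,000,000,000 -- i.e. $100 trillion
```

Spread over the rest of the century, that is still on the order of a trillion dollars a year, which is why the cost per ton matters so much.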
But once you capture the carbon, you still need to figure out what to do with it.
Carbon Engineering, the Canadian startup Keith cofounded in 2009, plans to expand its pilot plant to ramp up production of its synthetic fuels, using the captured carbon dioxide as a key ingredient. (Bill Gates is an investor in Carbon Engineering.)
Zurich-based Climeworks’s direct air capture plant in Italy will produce methane from captured carbon dioxide and hydrogen, while a second plant in Switzerland will sell carbon dioxide to the soft-drinks industry. So will Global Thermostat of New York, which finished constructing its first commercial plant in Alabama last year.
Still, if it’s used in synthetic fuels or sodas, the carbon dioxide will mostly end up back in the atmosphere. The ultimate goal is to lock greenhouse gases away forever. Some could be nested within products like carbon fiber, polymers, or concrete, but far more will simply need to be buried underground, a costly job that no business model seems likely to support.
In fact, pulling CO2 out of the air is, from an engineering perspective, one of the most difficult and expensive ways of dealing with climate change. But given how slowly we’re reducing emissions, there are no good options left. —James Temple
An ECG on your wrist
Why it matters ECG-enabled watches can catch dangerous heart conditions such as atrial fibrillation before they cause a stroke or heart attack
Key players Apple
AliveCor
Withings
Availability Now
Regulatory approval and technological advances are making it easier for people to continuously monitor their hearts with wearable devices.
Fitness trackers aren’t serious medical devices. An intense workout or loose band can mess with the sensors that read your pulse. But an electrocardiogram—the kind doctors use to diagnose abnormalities before they cause a stroke or heart attack—requires a visit to a clinic, and people often fail to take the test in time.
ECG-enabled smart watches, made possible by new regulations and innovations in hardware and software, offer the convenience of a wearable device with something closer to the precision of a medical one.
An Apple Watch–compatible band from Silicon Valley startup AliveCor that can detect atrial fibrillation, a frequent cause of blood clots and stroke, received clearance from the FDA in 2017. Last year, Apple released its own FDA-cleared ECG feature, embedded in the watch itself.
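One widely used signal for single-lead atrial-fibrillation detection is an "irregularly irregular" pulse. The toy Python sketch below flags unusually high beat-to-beat variability; it is not AliveCor's or Apple's actual algorithm, and the 0.15 threshold and sample intervals are invented for the demo:

```python
# Toy sketch of one idea behind single-lead AF detection: atrial
# fibrillation produces chaotic spacing between heartbeats, so the
# variability of R-R intervals (milliseconds between beats) is high.
from statistics import mean, stdev

def rr_irregularity(rr_intervals_ms: list[float]) -> float:
    """Coefficient of variation of the R-R intervals."""
    return stdev(rr_intervals_ms) / mean(rr_intervals_ms)

def flag_possible_afib(rr_intervals_ms: list[float], threshold: float = 0.15) -> bool:
    """Flag a rhythm whose beat-to-beat variability exceeds the threshold."""
    return rr_irregularity(rr_intervals_ms) > threshold

regular = [800, 810, 795, 805, 800, 798]     # steady, roughly 75 bpm
irregular = [620, 950, 710, 1100, 580, 890]  # chaotic spacing

print(flag_possible_afib(regular))    # False
print(flag_possible_afib(irregular))  # True
```

A real device must also reject motion artifacts and loose-band noise, which is much of what the FDA clearance process scrutinizes.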
The health-device company Withings also announced plans for an ECG-equipped watch shortly after.
Current wearables still employ only a single sensor, whereas a real ECG has 12. And no wearable can yet detect a heart attack as it’s happening.
But this might change soon. Last fall, AliveCor presented preliminary results to the American Heart Association on an app and two-sensor system that can detect a certain type of heart attack. —Karen Hao
Sanitation without sewers
Why it matters 2.3 billion people lack safe sanitation, and many die as a result
Key players Duke University
University of South Florida
California Institute of Technology
Availability 1-2 years
Energy-efficient toilets can operate without a sewer system and treat waste on the spot.
About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.
Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well.
In 2011 Bill Gates created what was essentially the X Prize in this area—the Reinvent the Toilet Challenge. Since the contest’s launch, several teams have put prototypes in the field. All process the waste locally, so there’s no need for large amounts of water to carry it to a distant treatment plant.
Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil.
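The membrane's filtering principle is simple size exclusion: anything larger than the pores is physically blocked. A rough Python illustration using commonly cited order-of-magnitude sizes; the 30 nm pore size and the listed organism sizes are assumptions for the demo, not the NEWgenerator's actual specifications:

```python
# Size-exclusion sketch: a membrane pore passes only particles smaller
# than the pore itself, so water gets through while microbes do not.
PORE_NM = 30  # assumed ultrafiltration pore size, in nanometers

typical_sizes_nm = {
    "bacterium (E. coli)": 1000,
    "virus (rotavirus)": 75,
    "water molecule": 0.3,
}

for name, size in typical_sizes_nm.items():
    status = "blocked" if size > PORE_NM else "passes"
    print(f"{name}: {status}")
```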
One drawback is that the toilets don’t work at every scale. The Biomass Controls product, for example, is designed primarily for tens of thousands of users per day, which makes it less well suited for smaller villages. Another system, developed at Duke University, is meant to be used only by a few nearby homes.
So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes. “It’s great to build one or two units,” says Daniel Yeh, an associate professor at the University of South Florida, who led the NEWgenerator team. “But to really have the technology impact the world, the only way to do that is mass-produce the units.” —Erin Winick
Smooth-talking AI assistants
Why it matters AI assistants can now perform conversation-based tasks like booking a restaurant reservation or coordinating a package drop-off rather than just obey simple commands
Key players Google
Availability 1-2 years
New techniques that capture semantic relationships between words are making machines better at understanding natural language.
We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.
But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
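BERT's training signal, predicting a masked word from its surrounding context, can be illustrated at toy scale. The Python sketch below stands in with simple co-occurrence counts over an invented four-sentence corpus; real BERT trains a deep transformer over millions of sentences, but the objective is the same:

```python
# Toy masked-word prediction: count which word appears between each
# (left, right) context pair, then fill a gap with the most frequent one.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the dog chased the ball",
]

# Map (left_word, right_word) -> counts of the word seen between them.
context_counts: dict[tuple[str, str], Counter] = {}
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        key = (words[i - 1], words[i + 1])
        context_counts.setdefault(key, Counter())[words[i]] += 1

def fill_mask(left: str, right: str) -> str:
    """Most frequent word seen between `left` and `right` in the corpus."""
    return context_counts[(left, right)].most_common(1)[0][0]

print(fill_mask("cat", "the"))  # chased
```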
These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.
Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.
In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.
But while AI programs have gotten better at figuring out what you want, they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding. Once we cross that hurdle, we’ll see yet another evolution, perhaps from logistics coordinator to babysitter, teacher—or even friend? —Karen Hao