MIT: An Experimental Peptide Could Block Covid-19


MIT chemists are testing a protein fragment that may inhibit coronaviruses’ ability to enter human lung cells.

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

In hopes of developing a possible treatment for Covid-19, a team of MIT chemists has designed a drug candidate that they believe may block coronaviruses’ ability to enter human cells. The potential drug is a short protein fragment, or peptide, that mimics a protein found on the surface of human cells.

The researchers have shown that their new peptide can bind to the viral protein that coronaviruses use to enter human cells, potentially disarming it.

“We have a lead compound that we really want to explore, because it does, in fact, interact with a viral protein in the way that we predicted it to interact, so it has a chance of inhibiting viral entry into a host cell,” says Brad Pentelute, an MIT associate professor of chemistry, who is leading the research team.

The MIT team reported its initial findings in a preprint posted on bioRxiv, an online preprint server, on March 20. They have sent samples of the peptide to collaborators who plan to carry out tests in human cells.

Molecular targeting

Pentelute’s lab began working on this project in early March, after the Cryo-EM structure of the coronavirus spike protein, along with the human cell receptor that it binds to, was published by a research group in China. Coronaviruses, including SARS-CoV-2, which is causing the current Covid-19 outbreak, have many protein spikes protruding from their viral envelope.

Studies of SARS-CoV-2 have also shown that a specific region of the spike protein, known as the receptor binding domain, binds to a receptor called angiotensin-converting enzyme 2 (ACE2). This receptor is found on the surface of many human cells, including those in the lungs. The ACE2 receptor is also the entry point used by the coronavirus that caused the 2002-03 SARS outbreak.

In hopes of developing drugs that could block viral entry, Genwei Zhang, a postdoc in Pentelute’s lab, performed computational simulations of the interactions between the ACE2 receptor and the receptor binding domain of the coronavirus spike protein. These simulations revealed the location where the receptor binding domain attaches to the ACE2 receptor — a stretch of the ACE2 protein that forms a structure called an alpha helix.

“This kind of simulation can give us views of how atoms and biomolecules interact with each other, and which parts are essential for this interaction,” Zhang says. “Molecular dynamics helps us narrow down particular regions that we want to focus on to develop therapeutics.”

The MIT team then used peptide synthesis technology that Pentelute’s lab has previously developed to rapidly generate a 23-amino acid peptide with the same sequence as the alpha helix of the ACE2 receptor. Their benchtop flow-based peptide synthesis machine can form linkages between amino acids, the building blocks of proteins, in about 37 seconds, and it takes less than an hour to generate complete peptide molecules containing up to 50 amino acids.
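
For a sense of the turnaround those numbers imply, here is a rough back-of-envelope check (a minimal sketch: the 37-second coupling time and the 50-residue figure come from the article, while the per-run overhead for washes and setup is an assumed placeholder):

```python
# Rough timing check for flow-based peptide synthesis (figures from the article).
COUPLING_TIME_S = 37  # reported time to form one amide bond between amino acids

def synthesis_time_minutes(num_residues, overhead_min=10):
    """Estimate total synthesis time; overhead_min is an assumed allowance
    for washes and setup, which the article does not break out."""
    couplings = num_residues - 1  # n residues require n-1 linkages
    return couplings * COUPLING_TIME_S / 60 + overhead_min

print(f"23-residue ACE2 mimic: ~{synthesis_time_minutes(23):.0f} minutes")
print(f"50-residue peptide:    ~{synthesis_time_minutes(50):.0f} minutes (under an hour)")
```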

“We’ve built these platforms for really rapid turnaround, so I think that’s why we’re at this point right now,” Pentelute says. “It’s because we have these tools we’ve built up at MIT over the years.”

They also synthesized a shorter sequence of only 12 amino acids found in the alpha helix, and then tested both of the peptides using equipment at MIT’s Biophysical Instrumentation Facility that can measure how strongly two molecules bind together. They found that the longer peptide showed strong binding to the receptor binding domain of the Covid-19 spike protein, while the shorter one showed negligible binding.

Many variants

Although MIT has been scaling back on-campus research since mid-March, Pentelute’s lab was granted special permission allowing a small group of researchers to continue to work on this project. They are now developing about 100 different variants of the peptide in hopes of increasing its binding strength and making it more stable in the body.

“We have confidence that we know exactly where this molecule is interacting, and we can use that information to further guide refinement, so that we can hopefully get a higher affinity and more potency to block viral entry in cells,” Pentelute says.

In the meantime, the researchers have already sent their original 23-amino acid peptide to a research lab at the Icahn School of Medicine at Mount Sinai for testing in human cells and potentially in animal models of Covid-19 infection.

While dozens of research groups around the world are using a variety of approaches to seek new treatments for Covid-19, Pentelute believes his lab is one of a few currently working on peptide drugs for this purpose. One advantage of such drugs is that they are relatively easy to manufacture in large quantities. They also have a larger surface area than small-molecule drugs.

“Peptides are larger molecules, so they can really grip onto the coronavirus and inhibit entry into cells, whereas if you used a small molecule, it’s difficult to block that entire area that the virus is using,” Pentelute says. “Antibodies also have a large surface area, so those might also prove useful. Those just take longer to manufacture and discover.”

One drawback of peptide drugs is that they typically can’t be taken orally, so they would have to be either administered intravenously or injected under the skin. They would also need to be modified so that they can stay in the bloodstream long enough to be effective, which Pentelute’s lab is also working on.

“It’s hard to project how long it will take to have something we can test in patients, but my aim is to have something within a matter of weeks. If it turns out to be more challenging, it may take months,” he says.

In addition to Pentelute and Zhang, other researchers listed as authors on the preprint are postdoc Sebastian Pomplun, grad student Alexander Loftis, and research scientist Andrei Loas.

MIT and University of Waterloo Lead the Way: Quantum Radar Reliably Demonstrated – Making it useful for Biomedical and Security (Stealth) Applications


A radar device that relies on entangled photons works at such low power that it can hide behind background noise, making it useful for biomedical and security (stealthy radar) applications.

One of the advantages of the quantum revolution is the ability to sense the world in a new way. The general idea is to use the special properties of quantum mechanics to make measurements or produce images that are otherwise impossible.

Much of this work is done with photons. But as far as the electromagnetic spectrum is concerned, the quantum revolution has been a little one-sided. Almost all the advances in quantum computing, cryptography, teleportation, and so on have involved visible or near-visible light.

Today that changes thanks to the work of Shabir Barzanjeh at the Institute of Science and Technology Austria and a few colleagues. This team has used entangled microwaves to create the world’s first quantum radar. Their device, which can detect objects at a distance using only a few photons, raises the prospect of stealthy radar systems that emit little detectable electromagnetic radiation.

The device is simple in essence. The researchers create pairs of entangled microwave photons using a superconducting device called a Josephson parametric converter. They beam the first photon, called the signal photon, toward the object of interest and listen for the reflection.

In the meantime, they store the second photon, called the idler photon. When the reflection arrives, it interferes with this idler photon, creating a signature that reveals how far the signal photon has traveled. Voila—quantum radar!

This technique has some important advantages over conventional radar. Ordinary radar works in a similar way but fails at low power levels that involve small numbers of microwave photons. That’s because hot objects in the environment emit microwaves of their own.

In a room temperature environment, this amounts to a background of around 1,000 microwave photons at any instant, and these overwhelm the returning echo. This is why radar systems use powerful transmitters.
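
That thousand-photon figure is roughly what the Bose-Einstein distribution predicts for a single microwave mode at room temperature. A quick check, assuming a carrier of a few gigahertz (typical for a Josephson parametric converter, though the exact frequency is not quoted in this summary):

```python
import math

H = 6.626e-34   # Planck constant, J*s
K = 1.381e-23   # Boltzmann constant, J/K

def thermal_photons(freq_hz, temp_k):
    """Mean thermal occupation of a single mode (Bose-Einstein distribution)."""
    return 1.0 / (math.exp(H * freq_hz / (K * temp_k)) - 1.0)

# At room temperature, a ~5 GHz microwave mode holds on the order of 10^3 thermal
# photons, which is why a faint radar echo is swamped without the idler correlation.
print(f"{thermal_photons(5e9, 300):.0f} thermal photons per mode at 5 GHz, 300 K")
# An optical mode, by contrast, is essentially empty at room temperature:
print(f"{thermal_photons(5e14, 300):.2e} thermal photons per mode at 500 THz, 300 K")
```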

Entangled photons overcome this problem. The signal and idler photons are so similar that it is easy to filter out the effects of other photons. So it becomes straightforward to detect the signal photon when it returns.

Of course, entanglement is a fragile property of the quantum world, and the process of reflection destroys it.  Nevertheless, the correlation between the signal and idler photons is still strong enough to distinguish them from background noise.

This allows Barzanjeh and co to detect a room-temperature object in a room-temperature environment with just a handful of photons, in a way that would be impossible with ordinary, unentangled microwave photons. “We generate entangled fields using a Josephson parametric converter at millikelvin temperatures to illuminate a room-temperature object at a distance of 1 meter in a proof of principle radar setup,” they say.

The researchers go on to compare their quantum radar with conventional systems operating with similarly low numbers of photons and say it significantly outperforms them, albeit only over relatively short distances.

That’s interesting work revealing the significant potential of quantum radar and a first application of microwave-based entanglement. But it also shows the potential application of quantum illumination more generally.

A big advantage is the low levels of electromagnetic radiation required. “Our experiment shows the potential as a non-invasive scanning method for biomedical applications, e.g., for imaging of human tissues or non-destructive rotational spectroscopy of proteins,” say Barzanjeh and co.

Then there is the obvious application as a stealthy radar that is difficult for adversaries to detect over background noise. The researchers say it could be useful for short-range low-power radar for security applications in closed and populated environments.

Ref: arxiv.org/abs/1908.03058: Experimental Microwave Quantum Illumination

University of Waterloo Leads The Way in Canada

Waterloo Institute for Nanotechnology (WIN) members, in partnership with the Institute for Quantum Computing (IQC), are developing the next generation of radar: quantum radar. Professor Zbig Wasilewski, from the Department of Electrical and Computer Engineering, is fabricating the materials for quantum radar.

The two other co-PIs on this project are Professor Jonathan Baugh and Professor Mike Reimer. Professor Baugh is a member of both WIN and IQC, while Professor Reimer’s main focus is on quantum photonics as a member of IQC.

In April 2018, the Government of Canada announced it would invest $2.7 million in the joint quantum radar project. The state-of-the-art facilities in the Lazaridis Centre make this project possible. Professor Wasilewski’s Molecular Beam Epitaxy (MBE) lab will grow the quantum material to the level of perfection needed to meet the challenge, while IQC houses the necessary quantum device processing and photonic labs. This ambitious project would not be possible at many research institutions in the world. The MBE lab allows Wasilewski to create quantum structures with atomic precision, and these materials will in turn form the foundation of the quantum radar.

“Many challenges lie ahead,” said Professor Wasilewski. “Building up quantum illumination sources to the scale needed for quantum radar calls for the very best in material growth, nanofabrication and quantum engineering. We have an excellent interdisciplinary team with the diverse expertise needed to tackle all these challenges. It would be hard to assemble a better one in Canada or internationally.”

“We have an excellent interdisciplinary team with the diverse expertise needed to tackle all these challenges. It would be hard to assemble a better one in Canada or internationally.”

– Professor Zbig Wasilewski, Department of Electrical and Computer Engineering, University of Waterloo

Professor Jonathan Baugh said, “By developing a fast, on-demand source of quantum light, we hope to bring techniques like quantum illumination from the lab to the real world. This project would not be possible without the right team, and we are fortunate to have a uniquely strong multidisciplinary collaboration based entirely at Waterloo, one which strengthens ties between WIN and IQC.”

The proposed quantum radar will help operators cut through heavy background noise and isolate objects in Canada’s far north. Standard radar systems struggle to detect stealth aircraft in the high Arctic because of the aurora borealis, a natural phenomenon that sends electromagnetic energy at varying wavelengths down to Earth.

It is hypothesized that quantum radar works by separating two entangled light particles: you keep one on Earth and send its entangled partner into the sky. If the transmitted particle bounces off an object, such as a stealth aircraft, and returns to your detector, you have located it.

Quantum radar’s viability outside of a lab still needs to be determined. The goal of this project is to demonstrate its capability in the field.

The $2.7 million is being invested under the Department of National Defence’s All Domain Situational Awareness (ADSA) Science and Technology program.

MIT – Researchers develop a roadmap for growth of new solar cells – Could become Competitive with Silicon


Perovskites, a family of materials defined by a particular kind of molecular structure as illustrated here, have great potential for new kinds of solar cells. A new study from MIT shows how these materials could gain a foothold in the solar marketplace. Image: Christine Daniloff, MIT

Starting with higher-value niche markets and then expanding could help perovskite-based solar panels become competitive with silicon.

Materials called perovskites show strong potential for a new generation of solar cells, but they’ve had trouble gaining traction in a market dominated by silicon-based solar cells. Now, a study by researchers at MIT and elsewhere outlines a roadmap for how this promising technology could move from the laboratory to a significant place in the global solar market.

The “technoeconomic” analysis shows that by starting with higher-value niche markets and gradually expanding, solar panel manufacturers could avoid the very steep initial capital costs that would be required to make perovskite-based panels directly competitive with silicon for large utility-scale installations at the outset. Rather than making a prohibitively expensive initial investment of hundreds of millions or even billions of dollars to build a plant for utility-scale production, the team found that entry through more specialized applications could be accomplished with a more realistic initial capital investment on the order of $40 million.

The results are described in a paper in the journal Joule by MIT postdoc Ian Mathews, research scientist Marius Peters, professor of mechanical engineering Tonio Buonassisi, and five others at MIT, Wellesley College, and Swift Solar Inc.

Solar cells based on perovskites — a broad category of compounds characterized by a certain arrangement of their molecular structure — could provide dramatic improvements in solar installations. Their constituent materials are inexpensive, and they could be manufactured in a roll-to-roll process, like printing a newspaper, onto lightweight and flexible backing material. This could greatly reduce costs associated with transportation and installation, although the cells still require further work to improve their durability. Other promising new solar cell materials are also under development in labs around the world, but none has yet made inroads in the marketplace.

“There have been a lot of new solar cell materials and companies launched over the years,” says Mathews, “and yet, despite that, silicon remains the dominant material in the industry and has been for decades.”

Why is that the case? “People have always said that one of the things that holds new technologies back is that the expense of constructing large factories to actually produce these systems at scale is just too much,” he says. “It’s difficult for a startup to cross what’s called ‘the valley of death,’ to raise the tens of millions of dollars required to get to the scale where this technology might be profitable in the wider solar energy industry.”

But there are a variety of more specialized solar cell applications where the special qualities of perovskite-based solar cells, such as their light weight, flexibility, and potential for transparency, would provide a significant advantage, Mathews says. By focusing on these markets initially, a startup solar company could build up to scale gradually, leveraging the profits from the premium products to expand its production capabilities over time.

Describing the literature on perovskite-based solar cells being developed in various labs, he says, “They’re claiming very low costs. But they’re claiming it once your factory reaches a certain scale. And I thought, we’ve seen this before — people claim a new photovoltaic material is going to be cheaper than all the rest and better than all the rest. That’s great, except we need to have a plan as to how we actually get the material and the technology to scale.”

As a starting point, he says, “We took the approach that I haven’t really seen anyone else take: Let’s actually model the cost to manufacture these modules as a function of scale. So if you just have 10 people in a small factory, how much do you need to sell your solar panels at in order to be profitable? And once you reach scale, how cheap will your product become?”
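
A toy version of that kind of scale-dependent cost model is sketched below. All of the numbers (plant capital cost, per-watt variable cost, margin) are illustrative placeholders, not inputs or results from the Joule paper; the point is only to show how the amortized capital contribution, and with it the minimum viable selling price, falls as annual capacity grows:

```python
# Toy cost-vs-scale model: fixed plant costs amortized over annual output,
# plus a per-watt variable cost. All numbers are illustrative placeholders.

def min_viable_price(annual_capacity_mw, capex_usd, variable_cost_per_w,
                     capex_lifetime_yr=10, margin=0.15):
    """Lowest selling price ($/W) at which a plant of a given scale breaks even
    with the stated margin. Purely illustrative, not the paper's model."""
    annual_watts = annual_capacity_mw * 1e6
    amortized_capex_per_w = capex_usd / (capex_lifetime_yr * annual_watts)
    return (amortized_capex_per_w + variable_cost_per_w) * (1 + margin)

for capacity_mw, capex in [(1, 4e6), (10, 2e7), (1000, 5e8)]:
    price = min_viable_price(capacity_mw, capex, variable_cost_per_w=0.30)
    print(f"{capacity_mw:>5} MW/yr plant -> minimum viable price ~${price:.2f}/W")
```

At small scale the break-even price sits well above utility-scale module prices, which is why premium niche markets matter; at large scale the amortized capital contribution becomes a minor part of the price.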

The analysis confirmed that trying to leap directly into the marketplace for rooftop solar or utility-scale solar installations would require very large upfront capital investment, he says. But “we looked at the prices people might get in the internet of things, or the market in building-integrated photovoltaics. People usually pay a higher price in these markets because they’re more of a specialized product. They’ll pay a little more if your product is flexible or if the module fits into a building envelope.” Other potential niche markets include self-powered microelectronics devices.

Such applications would make the entry into the market feasible without needing massive capital investments. “If you do that, the amount you need to invest in your company is much, much less, on the order of a few million dollars instead of tens or hundreds of millions of dollars, and that allows you to more quickly develop a profitable company,” he says.

“It’s a way for them to prove their technology, both technically and by actually building and selling a product and making sure it survives in the field,” Mathews says, “and also, just to prove that you can manufacture at a certain price point.”

Already, there are a handful of startup companies working to try to bring perovskite solar cells to market, he points out, although none of them yet has an actual product for sale. The companies have taken different approaches, and some seem to be embarking on the kind of step-by-step growth approach outlined by this research, he says. “Probably the company that’s raised the most money is a company called Oxford PV, and they’re looking at tandem cells,” which incorporate both silicon and perovskite cells to improve overall efficiency. Another company is one started by Joel Jean PhD ’17 (who is also a co-author of this paper) and others, called Swift Solar, which is working on flexible perovskites. And there’s a company called Saule Technologies, working on printable perovskites.

Mathews says the kind of technoeconomic analysis the team used in its study could be applied to a wide variety of other new energy-related technologies, including rechargeable batteries and other storage systems, or other types of new solar cell materials.

“There are many scientific papers and academic studies that look at how much it will cost to manufacture a technology once it’s at scale,” he says. “But very few people actually look at how much does it cost at very small scale, and what are the factors affecting economies of scale? And I think that can be done for many technologies, and it would help us accelerate how we get innovations from lab to market.”

The research team also included MIT alumni Sarah Sofia PhD ’19 and Sin Cheng Siah PhD ’15, Wellesley College student Erica Ma, and former MIT postdoc Hannu Laine. The work was supported by the European Union’s Horizon 2020 research and innovation program, the Martin Family Society of Fellows for Sustainability, the U.S. Department of Energy, Shell (through the MIT Energy Initiative), and the Singapore-MIT Alliance for Research and Technology.

MIT – A Simple, Solar-Powered Water Desalination System


Tests on an MIT building rooftop showed that a simple proof-of-concept desalination device could produce clean, drinkable water at a rate equivalent to more than 1.5 gallons per hour for each square meter of solar collecting area. Images courtesy of the researchers

System achieves new level of efficiency in harnessing sunlight to make fresh potable water from seawater.

A completely passive solar-powered desalination system developed by researchers at MIT and in China could provide more than 1.5 gallons of fresh drinking water per hour for every square meter of solar collecting area. Such systems could potentially serve off-grid arid coastal areas to provide an efficient, low-cost water source.

The system uses multiple layers of flat solar evaporators and condensers, lined up in a vertical array and topped with transparent aerogel insulation. It is described in a paper appearing today in the journal Energy and Environmental Science, authored by MIT doctoral students Lenan Zhang and Lin Zhao, postdoc Zhenyuan Xu, professor of mechanical engineering and department head Evelyn Wang, and eight others at MIT and at Shanghai Jiao Tong University in China.

The key to the system’s efficiency lies in the way it uses each of the multiple stages to desalinate the water. At each stage, heat released by the previous stage is harnessed instead of wasted. In this way, the team’s demonstration device can achieve an overall efficiency of 385 percent in converting the energy of sunlight into the energy of water evaporation.

The device is essentially a multilayer solar still, with a set of evaporating and condensing components like those used to distill liquor. It uses flat panels to absorb heat and then transfer that heat to a layer of water so that it begins to evaporate. The vapor then condenses on the next panel. That water gets collected, while the heat from the vapor condensation gets passed to the next layer.

Whenever vapor condenses on a surface, it releases heat; in typical condenser systems, that heat is simply lost to the environment. But in this multilayer evaporator the released heat flows to the next evaporating layer, recycling the solar heat and boosting the overall efficiency.

“When you condense water, you release energy as heat,” Wang says. “If you have more than one stage, you can take advantage of that heat.”

Adding more layers increases the conversion efficiency for producing potable water, but each layer also adds cost and bulk to the system. The team settled on a 10-stage system for their proof-of-concept device, which was tested on an MIT building rooftop. The system delivered pure water that exceeded city drinking water standards, at a rate of 5.78 liters per hour per square meter of solar collecting area (about 1.5 gallons per hour per square meter). This is more than twice the record amount previously produced by any such passive solar-powered desalination system, Wang says.

Theoretically, with more desalination stages and further optimization, such systems could reach overall efficiency levels as high as 700 or 800 percent, Zhang says.
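
The reported output follows almost directly from the efficiency figure. A quick sanity check, assuming roughly one sun of insolation (about 1 kW per square meter) and a latent heat of vaporization of about 2.4 MJ/kg (both are my assumptions rather than values quoted above):

```python
# Back-of-envelope check: multistage efficiency -> freshwater output per square meter.
SOLAR_FLUX_W_M2 = 1000.0   # assumed peak insolation (~1 sun)
LATENT_HEAT_J_KG = 2.4e6   # approx. latent heat of vaporization of water near ambient

def output_liters_per_hour(solar_to_vapor_efficiency):
    """Liters of condensate per hour per square meter of collector (1 kg ~ 1 L)."""
    evaporation_power = solar_to_vapor_efficiency * SOLAR_FLUX_W_M2   # W/m^2 into vapor
    return evaporation_power / LATENT_HEAT_J_KG * 3600                # kg/h ~ L/h

print(f"10-stage device (385%):   {output_liters_per_hour(3.85):.2f} L/h per m^2")  # ~5.8
print(f"Optimized target (~750%): {output_liters_per_hour(7.5):.2f} L/h per m^2")
```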

Unlike some desalination systems, there is no accumulation of salt or concentrated brines to be disposed of. In a free-floating configuration, any salt that accumulates during the day would simply be carried back out at night through the wicking material and back into the seawater, according to the researchers.

Their demonstration unit was built mostly from inexpensive, readily available materials such as a commercial black solar absorber and paper towels for a capillary wick to carry the water into contact with the solar absorber. In most other attempts to make passive solar desalination systems, the solar absorber material and the wicking material have been a single component, which requires specialized and expensive materials, Wang says. “We’ve been able to decouple these two.”

The most expensive component of the prototype is a layer of transparent aerogel used as an insulator at the top of the stack, but the team suggests other less expensive insulators could be used as an alternative. (The aerogel itself is made from dirt-cheap silica but requires specialized drying equipment for its manufacture.)

Wang emphasizes that the team’s key contribution is a framework for understanding how to optimize such multistage passive systems, which they call thermally localized multistage desalination. The formulas they developed could likely be applied to a variety of materials and device architectures, allowing for further optimization of systems based on different scales of operation or local conditions and materials.

One possible configuration would be floating panels on a body of saltwater such as an impoundment pond. These could constantly and passively deliver fresh water through pipes to the shore, as long as the sun shines each day. Other systems could be designed to serve a single household, perhaps using a flat panel on a large shallow tank of seawater that is pumped or carried in. The team estimates that a system with a roughly 1-square-meter solar collecting area could meet the daily drinking water needs of one person. In production, they think a system built to serve the needs of a family might be built for around $100.

The researchers plan further experiments to continue to optimize the choice of materials and configurations, and to test the durability of the system under realistic conditions. They also will work on translating the design of their lab-scale device into something that would be suitable for use by consumers. The hope is that it could ultimately play a role in alleviating water scarcity in parts of the developing world where reliable electricity is scarce but seawater and sunlight are abundant.

“This new approach is very significant,” says Ravi Prasher, an associate lab director at Lawrence Berkeley National Laboratory and adjunct professor of mechanical engineering at the University of California at Berkeley, who was not involved in this work. “One of the challenges in solar still-based desalination has been low efficiency due to the loss of significant energy in condensation. By efficiently harvesting the condensation energy, the overall solar to vapor efficiency is dramatically improved. … This increased efficiency will have an overall impact on reducing the cost of produced water.”

The research team included Bangjun Li, Chenxi Wang and Ruzhu Wang at the Shanghai Jiao Tong University, and Bikram Bhatia, Kyle Wilke, Youngsup Song, Omar Labban, and John Lienhard, who is the Abdul Latif Jameel Professor of Water at MIT. The research was supported by the National Natural Science Foundation of China, the Singapore-MIT Alliance for Research and Technology, and the MIT Tata Center for Technology and Design.

MIT – A new approach to making airplane parts, minus the massive infrastructure


Carbon nanotube film produces aerospace-grade composites with no need for huge ovens or autoclaves.

A modern airplane’s fuselage is made from multiple sheets of different composite materials, like so many layers in a phyllo-dough pastry. Once these layers are stacked and molded into the shape of a fuselage, the structures are wheeled into warehouse-sized ovens and autoclaves, where the layers fuse together to form a resilient, aerodynamic shell.

Now MIT engineers have developed a method to produce aerospace-grade composites without the enormous ovens and pressure vessels. The technique may help to speed up the manufacturing of airplanes and other large, high-performance composite structures, such as blades for wind turbines.

The researchers detail their new method in a paper published today in the journal Advanced Materials Interfaces.

“If you’re making a primary structure like a fuselage or wing, you need to build a pressure vessel, or autoclave, the size of a two- or three-story building, which itself requires time and money to pressurize,” says Brian Wardle, professor of aeronautics and astronautics at MIT. “These things are massive pieces of infrastructure. Now we can make primary structure materials without autoclave pressure, so we can get rid of all that infrastructure.”

Wardle’s co-authors on the paper are lead author and MIT postdoc Jeonyoon Lee, and Seth Kessler of Metis Design Corporation, an aerospace structural health monitoring company based in Boston.

Out of the oven, into a blanket

In 2015, Lee led the team, along with another member of Wardle’s lab, in creating a method to make aerospace-grade composites without requiring an oven to fuse the materials together. Instead of placing layers of material inside an oven to cure, the researchers essentially wrapped them in an ultrathin film of carbon nanotubes (CNTs). When they applied an electric current to the film, the CNTs, like a nanoscale electric blanket, quickly generated heat, causing the materials within to cure and fuse together.

With this out-of-oven, or OoO, technique, the team was able to produce composites as strong as the materials made in conventional airplane manufacturing ovens, using only 1 percent of the energy.

The researchers next looked for ways to make high-performance composites without the use of large, high-pressure autoclaves — building-sized vessels that generate high enough pressures to press materials together, squeezing out any voids, or air pockets, at their interface.

“There’s microscopic surface roughness on each ply of a material, and when you put two plies together, air gets trapped between the rough areas, which is the primary source of voids and weakness in a composite,” Wardle says. “An autoclave can push those voids to the edges and get rid of them.”

Researchers, including Wardle’s group, have explored “out-of-autoclave,” or OoA, techniques to manufacture composites without using the huge machines. But most of these techniques have produced composites in which nearly 1 percent of the material contains voids, which can compromise a material’s strength and lifetime. In comparison, aerospace-grade composites made in autoclaves are of such high quality that any voids they contain are negligible and not easily measured.

“The problem with these OoA approaches is also that the materials have been specially formulated, and none are qualified for primary structures such as wings and fuselages,” Wardle says. “They’re making some inroads in secondary structures, such as flaps and doors, but they still get voids.”

Straw pressure

Part of Wardle’s work focuses on developing nanoporous networks — ultrathin films made from aligned, microscopic materials such as carbon nanotubes — that can be engineered with exceptional properties, including color, strength, and electrical capacity. The researchers wondered whether these nanoporous films could be used in place of giant autoclaves to squeeze out voids between two material layers, as unlikely as that may seem.

A thin film of carbon nanotubes is somewhat like a dense forest of trees, and the spaces between the trees can function like thin nanoscale tubes, or capillaries. A capillary such as a straw can generate pressure based on its geometry and its surface energy, or the material’s ability to attract liquids or other materials.

The researchers proposed that if a thin film of carbon nanotubes were sandwiched between two materials, then, as the materials were heated and softened, the capillaries between the carbon nanotubes should have a surface energy and geometry such that they would draw the materials in toward each other, rather than leaving a void between them. Lee calculated that the capillary pressure should be larger than the pressure applied by the autoclaves.
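
The Young-Laplace relation gives a feel for why nanoscale capillaries can rival an autoclave. The estimate below uses assumed, textbook-style values for the resin surface tension, wetting angle, and effective pore radius; none of these numbers come from the paper:

```python
import math

def capillary_pressure_pa(surface_tension_n_m, contact_angle_deg, pore_radius_m):
    """Young-Laplace capillary pressure for a cylindrical pore: dP = 2*gamma*cos(theta)/r."""
    return 2 * surface_tension_n_m * math.cos(math.radians(contact_angle_deg)) / pore_radius_m

# Assumed values: epoxy-like resin (~0.04 N/m), good wetting (~20 degrees),
# effective inter-nanotube pore radius of ~100 nm.
p_capillary = capillary_pressure_pa(0.04, 20.0, 100e-9)
p_autoclave = 7e5   # typical aerospace autoclave pressure, ~7 bar

print(f"Estimated capillary pressure: {p_capillary / 1e5:.1f} bar")
print(f"Typical autoclave pressure:   {p_autoclave / 1e5:.1f} bar")
```

With tighter nanotube spacing (a smaller effective pore radius), the capillary pressure only grows.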

The researchers tested their idea in the lab by growing films of vertically aligned carbon nanotubes using a technique they had previously developed, then laying the films between layers of materials that are typically used in the autoclave-based manufacturing of primary aircraft structures. They wrapped the layers in a second film of carbon nanotubes, to which they applied an electric current to heat it up. They observed that as the materials heated and softened in response, they were pulled into the capillaries of the intermediate CNT film.

The resulting composite lacked voids, similar to aerospace-grade composites that are produced in an autoclave. The researchers subjected the composites to strength tests, attempting to push the layers apart, the idea being that voids, if present, would allow the layers to separate more easily.

“In these tests, we found that our out-of-autoclave composite was just as strong as the gold-standard autoclave process composite used for primary aerospace structures,” Wardle says.

The team will next look for ways to scale up the pressure-generating CNT film. In their experiments, they worked with samples measuring several centimeters wide — large enough to demonstrate that nanoporous networks can pressurize materials and prevent voids from forming. To make this process viable for manufacturing entire wings and fuselages, researchers will have to find ways to manufacture CNT and other nanoporous films at a much larger scale.

“There are ways to make really large blankets of this stuff, and there’s continuous production of sheets, yarns, and rolls of material that can be incorporated in the process,” Wardle says.

He plans also to explore different formulations of nanoporous films, engineering capillaries of varying surface energies and geometries, to be able to pressurize and bond other high-performance materials.

“Now we have this new material solution that can provide on-demand pressure where you need it,” Wardle says. “Beyond airplanes, most of the composite production in the world is composite pipes, for water, gas, oil, all the things that go in and out of our lives. This could make all of those things, without the oven and autoclave infrastructure.”

This research was supported, in part, by Airbus, ANSYS, Embraer, Lockheed Martin, Saab AB, Saertex, and Teijin Carbon America through MIT’s Nano-Engineered Composite aerospace Structures (NECST) Consortium.

MIT – A new way to deliver drugs with pinpoint targeting


Diagram illustrates the structure of the tiny bubbles, called liposomes, used to deliver drugs. The blue spheres represent lipids, a kind of fat molecule, surrounding a central cavity containing magnetic nanoparticles (black) and the drug to be delivered (red). When the nanoparticles are heated, the drug can escape into the body. Image courtesy of the researchers

Magnetic particles allow drugs to be released at precise times and in specific areas.

Most pharmaceuticals must either be ingested or injected into the body to do their work. Either way, it takes some time for them to reach their intended targets, and they also tend to spread out to other areas of the body. Now, researchers at MIT and elsewhere have developed a minimally invasive system for delivering medical treatments that can be released at precise times and that ultimately could also deliver those drugs to specifically targeted areas, such as a specific group of neurons in the brain.

The new approach is based on the use of tiny magnetic particles enclosed within a tiny hollow bubble of lipids (fatty molecules) filled with water, known as a liposome. The drug of choice is encapsulated within these bubbles, and can be released by applying a magnetic field to heat up the particles, allowing the drug to escape from the liposome and into the surrounding tissue.

The findings are reported today in the journal Nature Nanotechnology in a paper by MIT postdoc Siyuan Rao, Associate Professor Polina Anikeeva, and 14 others at MIT, Stanford University, Harvard University, and the Swiss Federal Institute of Technology in Zurich.

“We wanted a system that could deliver a drug with temporal precision, and could eventually target a particular location,” Anikeeva explains. “And if we don’t want it to be invasive, we need to find a non-invasive way to trigger the release.”

Magnetic fields, which can easily penetrate through the body — as demonstrated by detailed internal images produced by magnetic resonance imaging, or MRI — were a natural choice. The hard part was finding materials that could be triggered to heat up by using a very weak magnetic field (about one-hundredth the strength of that used for MRI), in order to prevent damage to the drug or surrounding tissues, Rao says.

Rao came up with the idea of taking magnetic nanoparticles, which had already been shown to be capable of being heated by placing them in a magnetic field, and packing them into these spheres called liposomes. These are like little bubbles of lipids, which naturally form a spherical double layer surrounding a water droplet.

When placed inside a high-frequency but low-strength magnetic field, the nanoparticles heat up, warming the lipids and making them undergo a transition from solid to liquid, which makes the layer more porous — just enough to let some of the drug molecules escape into the surrounding areas. When the magnetic field is switched off, the lipids re-solidify, preventing further releases. Over time, this process can be repeated, thus releasing doses of the enclosed drug at precisely controlled intervals.

The drug carriers were engineered to be stable inside the body at the normal body temperature of 37 degrees Celsius, but able to release their payload of drugs at a temperature of 42 degrees. “So we have a magnetic switch for drug delivery,” and that amount of heat is small enough “so that you don’t cause thermal damage to tissues,” says Anikeeva, who holds appointments in the departments of Materials Science and Engineering and the Brain and Cognitive Sciences.

In principle, this technique could also be used to guide the particles to specific, pinpoint locations in the body, using gradients of magnetic fields to push them along, but that aspect of the work is an ongoing project. For now, the researchers have been injecting the particles directly into the target locations, and using the magnetic fields to control the timing of drug releases. “The technology will allow us to address the spatial aspect,” Anikeeva says, but that has not yet been demonstrated.

This could enable very precise treatments for a wide variety of conditions, she says. “Many brain disorders are characterized by erroneous activity of certain cells. When neurons are too active or not active enough, that manifests as a disorder, such as Parkinson’s, or depression, or epilepsy.” If a medical team wanted to deliver a drug to a specific patch of neurons and at a particular time, such as when an onset of symptoms is detected, without subjecting the rest of the brain to that drug, this system “could give us a very precise way to treat those conditions,” she says.

Rao says that making these nanoparticle-activated liposomes is actually quite a simple process. “We can prepare the liposomes with the particles within minutes in the lab,” she says, and the process should be “very easy to scale up” for manufacturing. And the system is broadly applicable for drug delivery: “we can encapsulate any water-soluble drug,” and with some adaptations, other drugs as well, she says.

One key to developing this system was perfecting and calibrating a way of making liposomes of a highly uniform size and composition. This involves mixing a water base with the fatty acid lipid molecules and magnetic nanoparticles and homogenizing them under precisely controlled conditions. Anikeeva compares it to shaking a bottle of salad dressing to get the oil and vinegar mixed, but controlling the timing, direction and strength of the shaking to ensure a precise mixing.

Anikeeva says that while her team has focused on neurological disorders, as that is their specialty, the drug delivery system is actually quite general and could be applied to almost any part of the body, for example to deliver cancer drugs, or even to deliver painkillers directly to an affected area instead of delivering them systemically and affecting the whole body. “This could deliver it to where it’s needed, and not deliver it continuously,” but only as needed.

Because the magnetic particles themselves are similar to those already in widespread use as contrast agents for MRI scans, the regulatory approval process for their use may be simplified, as their biological compatibility has largely been proven.

The team included researchers in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, as well as the McGovern Institute for Brain Research, the Simons Center for the Social Brain, and the Research Laboratory of Electronics; the Harvard University Department of Chemistry and Chemical Biology and the John A. Paulson School of Engineering and Applied Sciences; Stanford University; and the Swiss Federal Institute of Technology in Zurich. The work was supported by the Simons Postdoctoral Fellowship, the U.S. Defense Advanced Research Projects Agency, the Bose Research Grant, and the National Institutes of Health.

MIT – New ‘battery’ aims to spark a carbon capture revolution


Smoke and steam billow from Belchatow Power Station, Europe’s largest coal-fired power plant, near Belchatow, Poland, on November 28, 2018. Inventors claim a new carbon capture “battery” could be retrofitted not only to industrial plants but also to mobile sources of CO2 emissions like cars and airplanes. Photo by REUTERS/Kacper Pempel

Renewable energy alone is not enough to turn the tide of the climate crisis. Despite the rapid expansion of wind, solar and other clean energy technologies, human behavior and consumption are flooding our skies with too much carbon, and simply supplanting fossil fuels won’t stop global warming.

To make some realistic attempt at preventing a grim future, humans need to be able to physically remove carbon from the air. 

That’s why carbon capture technology is slowly being integrated into energy and industrial facilities across the globe. Typically set up to collect carbon from an exhaust stream, this technology sops up greenhouse gases before they spread into Earth’s airways.

But those industrial practices work because these factories produce gas pollutants like carbon dioxide and methane at high concentrations. Carbon capture can’t draw CO2 from regular open air, where the concentration of this prominent pollutant is too diffuse. 

Moreover, the energy sector’s transition toward decarbonization is moving too slowly. It will take years — likely decades — before the world’s hundreds of CO2-emitting industrial plants adopt capture technology.

Humans have pumped about 2,000 gigatonnes — billions of metric tons — of carbon dioxide into the air since industrialization, and there will be more. 

But what if you could have a personal-sized carbon capture machine on your car, commercial airplane or solar-powered home?

Chemical engineers at the Massachusetts Institute of Technology have created a new device that can remove carbon dioxide from the air at any concentration.

Published in October in the journal Energy & Environmental Science, the project is the latest bid to directly capture CO2 emissions and keep them from accelerating and worsening future climate disasters. 

Think of the invention as a quasi-battery, in terms of its shape, its construction and how it works to collect carbon dioxide. You pump electricity into the battery, and while the device stores this charge, a chemical reaction occurs that absorbs CO2 from the surrounding atmosphere — a process known as direct air capture. The CO2 can be extracted by discharging the battery, releasing the gas, so the CO2 then can be pumped into the ground. The researchers describe this back-and-forth as electroswing adsorption.

“I realized there was a gap in the spectrum of solutions,” said Sahag Voskian, who co-led the project with fellow MIT chemical engineer T. Alan Hatton. “Many current systems, for instance, are very bulky and can only be used for large-scale power plants or industrial applications.”

Relative to current technology, this electroswing adsorber could be retrofitted onto smaller, mobile sources of emissions like autos and planes, the study states.

Voskian also pictures the battery being scaled to plug into power plants powered by renewables, such as wind farms and solar fields, which are known to create more energy than they can store. Rather than lose this power, these renewable plants could set up a side hustle where their excess energy is used to capture carbon. 

“That’s one of the nice aspects of this technology — is that direct linkage with renewables,” said Jennifer Wilcox, a chemical engineer at Worcester Polytechnic Institute, who was not involved in the study. 

The advantage of an electricity-based system for carbon capture is that it scales linearly. If you need 10 times more capacity, you simply build 10 times more of these “electroswing batteries” and stack them, Voskian said. 

He estimates that if you cover a football field with these devices in stacks that are tens of feet high, they could remove about 200,000 to 400,000 metric tons of CO2 a year. Build another 100,000 of these fields, and they could bring carbon dioxide in the atmosphere back to preindustrial levels within 40 years. 

One hundred thousand installations sounds like a lot, but keep in mind that these devices can be built to any size and run off the excess electricity created by renewables like wind and solar, which at the moment cannot be easily stored. Imagine turning the more than 2 million U.S. homes with rooftop solar into mini-carbon capture plants. 
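
As an order-of-magnitude check on that claim (taking the midpoint of the 200,000-400,000 ton range, and using the common conversion of roughly 7.8 billion metric tons of CO2 per ppm of atmospheric concentration, which is my assumption rather than a figure from the article):

```python
# Order-of-magnitude check on the scale-up claim.
fields = 100_000
tons_per_field_per_year = 300_000   # midpoint of the 200k-400k estimate
years = 40

total_removed_gt = fields * tons_per_field_per_year * years / 1e9
print(f"Total removed over {years} years: ~{total_removed_gt:,.0f} Gt CO2")

# For comparison: the atmospheric excess above preindustrial levels.
GT_PER_PPM = 7.8              # assumed conversion, ~7.8 Gt CO2 per ppm
excess_ppm = 415 - 280        # today's concentration vs. preindustrial, roughly
print(f"Atmospheric excess:        ~{excess_ppm * GT_PER_PPM:,.0f} Gt CO2")
```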

On paper, this invention sounds like a game changer. But it has a number of feasibility hurdles to surmount before it leaves the laboratory. 

How the electroswing battery works

The idea of using electricity to trigger a chemical reaction — electrochemistry — as a means for capturing carbon dioxide isn’t new. It has been around for nearly 25 years, in fact. 

But Voskian and Hatton have now added two special materials into the equation: quinone and carbon nanotubes. 

A carbon nanotube is a human-made cylinder just nanometers across: a sheet of carbon atoms, a single layer thick, rolled up into a tube. Aside from being more than 100 times stronger than stainless steel or titanium, carbon nanotubes are excellent conductors of electricity, making them sturdy building blocks for electrified equipment.

Much like a regular battery, Voskian and Hatton’s device has a positive electrode and a negative electrode — “plus” and “minus” sides. But the minus side — the negative electrode — is infused with quinone, a chemical that, after being electrically charged, reacts and sticks to CO2.

“You can think of it like the charge and discharge of a battery,” Voskian said. “When you charge the battery, you have carbon capture. When you discharge it, you release the carbon that you captured.” 

Their approach is unique because all the energy required for their direct air capture comes from electricity. The three major startups in this emerging space — Climeworks, Global Thermostat and Carbon Engineering — rely on a mixture of electric and thermal (heat) energy, Wilcox said, with thermal energy being the dominant factor. 

For power plants and industrial facilities, that excess heat, or waste heat, a byproduct of their everyday work, isn’t a perfect fit for carbon capture. Waste heat isn’t very consistent. Imagine standing next to a fire — its warmth changes as the flames flit about.

This heat can come from carbon-friendly options — such as a hydrothermal plant — but some current startups are preparing their capture systems to run on thermal energy from fossil-fuel-burning facilities. So they may capture 1.5 tons of CO2, but they also generate about a half ton in the process.

In Voskian’s operation, “We don’t have any of that. We have full control over the energetics of our process,” he said.

Will it work?

Voskian and Hatton, who have launched a startup called Verdox, write in their study that operating electroswing carbon capture would cost between $50 and $100 per metric ton of CO2.

“If it’s true, that’s a great breakthrough,” said Richard Newell, president and CEO of Resources for the Future, a nonprofit research organization that develops energy and environmental policy on carbon capture. But, he cautioned, “the distance between showing something in the laboratory and then demonstrating it at a commercial scale is very big.” 

MIT engineers develop a new way to remove carbon dioxide from air


In this diagram of the new system, air entering from top right passes to one of two chambers (the gray rectangular structures) containing battery electrodes that attract the carbon dioxide. Then the airflow is switched to the other chamber, while the accumulated carbon dioxide in the first chamber is flushed into a separate storage tank (at right). These alternating flows allow for continuous operation of the two-step process. Image courtesy of the researchers

The process could work on the gas at any concentration, from power plant emissions to open air.

A new way of removing carbon dioxide from a stream of air could provide a significant tool in the battle against climate change. The new system can work on the gas at virtually any concentration level, even down to the roughly 400 parts per million currently found in the atmosphere.

Most methods of removing carbon dioxide from a stream of gas require higher concentrations, such as those found in the flue emissions from fossil fuel-based power plants. A few variations have been developed that can work with the low concentrations found in air, but the new method is significantly less energy-intensive and expensive, the researchers say.

The technique, based on passing air through a stack of charged electrochemical plates, is described in a new paper in the journal Energy and Environmental Science, by MIT postdoc Sahag Voskian, who developed the work during his PhD, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.

The device is essentially a large, specialized battery that absorbs carbon dioxide from the air (or other gas stream) passing over its electrodes as it is being charged up, and then releases the gas as it is being discharged. In operation, the device would simply alternate between charging and discharging, with fresh air or feed gas being blown through the system during the charging cycle, and then the pure, concentrated carbon dioxide being blown out during the discharging.

As the battery charges, an electrochemical reaction takes place at the surface of each of a stack of electrodes. These are coated with a compound called poly-anthraquinone, which is composited with carbon nanotubes. The electrodes have a natural affinity for carbon dioxide and readily react with its molecules in the airstream or feed gas, even when it is present at very low concentrations. The reverse reaction takes place when the battery is discharged — during which the device can provide part of the power needed for the whole system — and in the process ejects a stream of pure carbon dioxide. The whole system operates at room temperature and normal air pressure.

“The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent’s affinity to carbon dioxide,” explains Voskian. In other words, the electrode material, by its nature, “has either a high affinity or no affinity whatsoever,” depending on the battery’s state of charging or discharging. Other reactions used for carbon capture require intermediate chemical processing steps or the input of significant energy such as heat, or pressure differences.

“This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO2,” Voskian says. That is, as any gas flows through the stack of these flat electrochemical cells, during the release step the captured carbon dioxide will be carried along with it. For example, if the desired end-product is pure carbon dioxide to be used in the carbonation of beverages, then a stream of the pure gas can be blown through the plates. The captured gas is then released from the plates and joins the stream.

In some soft-drink bottling plants, fossil fuel is burned to generate the carbon dioxide needed to give the drinks their fizz. Similarly, some farmers burn natural gas to produce carbon dioxide to feed their plants in greenhouses. The new system could eliminate that need for fossil fuels in these applications, and in the process actually be taking the greenhouse gas right out of the air, Voskian says. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.

The process this system uses for capturing and releasing carbon dioxide “is revolutionary,” he says. “All of this is at ambient conditions — there’s no need for thermal, pressure, or chemical input. It’s just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity.”

“In my laboratories, we have been striving to develop new technologies to tackle a range of environmental issues that avoid the need for thermal energy sources, changes in system pressure, or addition of chemicals to complete the separation and release cycles,” Hatton says. “This carbon dioxide capture technology is a clear demonstration of the power of electrochemical approaches that require only small swings in voltage to drive the separations.”

In a working plant — for example, in a power plant where exhaust gas is being produced continuously — two sets of such stacks of the electrochemical cells could be set up side by side to operate in parallel, with flue gas being directed first at one set for carbon capture, then diverted to the second set while the first set goes into its discharge cycle. By alternating back and forth, the system could always be both capturing and discharging the gas. In the lab, the team has proven the system can withstand at least 7,000 charging-discharging cycles, with a 30 percent loss in efficiency over that time. The researchers estimate that they can readily improve that to 20,000 to 50,000 cycles.
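
A minimal sketch of that alternating schedule, written as a toy control loop (the "Stack A"/"Stack B" names and the cycle length are illustrative assumptions; the paper describes the parallel-stack concept, not this particular controller):

```python
import itertools

# Illustrative controller for two electroswing stacks run in opposition:
# while one stack charges (captures CO2 from the feed gas), the other
# discharges (releases a concentrated CO2 stream), then they swap.

def electroswing_schedule(cycle_minutes=30):
    """Yield (capturing_stack, releasing_stack, duration_minutes) tuples indefinitely."""
    for step in itertools.count():
        capturing, releasing = ("A", "B") if step % 2 == 0 else ("B", "A")
        yield capturing, releasing, cycle_minutes

schedule = electroswing_schedule()
for _ in range(4):
    cap, rel, minutes = next(schedule)
    print(f"Stack {cap}: charge & capture from feed gas | "
          f"Stack {rel}: discharge & vent pure CO2 | {minutes} min")
```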

The electrodes themselves can be manufactured by standard chemical processing methods. While today this is done in a laboratory setting, it can be adapted so that ultimately they could be made in large quantities through a roll-to-roll manufacturing process similar to a newspaper printing press, Voskian says. “We have developed very cost-effective techniques,” he says, estimating that it could be produced for something like tens of dollars per square meter of electrode.

Compared to other existing carbon capture technologies, this system is quite energy efficient, consistently using about one gigajoule of energy per ton of carbon dioxide captured. Other existing methods have energy consumption that varies between 1 and 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.

The researchers have set up a company called Verdox to commercialize the process, and hope to develop a pilot-scale plant within the next few years, he says. And the system is very easy to scale up, he says: “If you want more capacity, you just need to make more electrodes.”

This work was supported by an MIT Energy Initiative Seed Fund grant and by Eni S.p.A.

MIT: New approach suggests path to Emissions-Free Cement



In a demonstration of the basic chemical reactions used in the new process, electrolysis takes place in neutral water. Dyes show how acid (pink) and base (purple) are produced at the positive and negative electrodes. A variation of this process can be used to convert calcium carbonate (CaCO3) into calcium hydroxide (Ca(OH)2), which can then be used to make Portland cement without producing any greenhouse gas emissions. Cement production currently causes 8 percent of global carbon emissions. Image: Felice Frankel

MIT researchers find a way to eliminate carbon emissions from cement production — a major global source of greenhouse gases.

It’s well known that the production of cement — the world’s leading construction material — is a major source of greenhouse gas emissions, accounting for about 8 percent of all such releases. If cement production were a country, it would be the world’s third-largest emitter.

A team of researchers at MIT has come up with a new way of manufacturing the material that could eliminate these emissions altogether, and could even make some other useful products in the process.

The findings are being reported today in the journal PNAS in a paper by Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering at MIT, with postdoc Leah Ellis, graduate student Andres Badel, and others.

“About 1 kilogram of carbon dioxide is released for every kilogram of cement made today,” Chiang says. That adds up to 3 to 4 gigatons (billions of tons) of cement, and of carbon dioxide emissions, produced annually today, and that amount is projected to grow. The number of buildings worldwide is expected to double by 2060, which is equivalent to “building one new New York City every 30 days,” he says. And the commodity is now very cheap to produce: It costs only about 13 cents per kilogram, which he says makes it cheaper than bottled water.
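
Those numbers can be combined into a quick, back-of-the-envelope illustration of the scale involved. The snippet below uses only the figures quoted above and is not taken from the paper:

```python
# Back-of-the-envelope scale estimate built from the figures quoted above
# (~3-4 gigatons of cement per year, ~1 kg CO2 per kg cement, ~13 cents/kg);
# illustrative only, not from the paper.
cement_gigatons_per_year = 4.0
co2_kg_per_kg_cement = 1.0
price_usd_per_kg = 0.13

KG_PER_GIGATON = 1e12
annual_cement_kg = cement_gigatons_per_year * KG_PER_GIGATON

annual_co2_gigatons = annual_cement_kg * co2_kg_per_kg_cement / KG_PER_GIGATON
annual_value_usd = annual_cement_kg * price_usd_per_kg

print(f"Annual CO2 from cement: ~{annual_co2_gigatons:.0f} gigatons")
print(f"Rough annual market value: ~${annual_value_usd / 1e9:.0f} billion")
```

Even at such a low unit price, the sheer volume means the industry moves hundreds of billions of dollars of material each year.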

So it’s a real challenge to find ways of reducing the material’s carbon emissions without making it too expensive. Chiang and his team have spent the last year searching for alternative approaches, and hit on the idea of using an electrochemical process to replace the current fossil-fuel-dependent system.

Ordinary Portland cement, the most widely used standard variety, is made by grinding up limestone and then cooking it with sand and clay at high heat, which is produced by burning coal. The process produces carbon dioxide in two ways: from the burning of the coal and from the gases released from the limestone during heating, with the two sources contributing roughly equally to the total emissions. The new process would eliminate or drastically reduce both sources, Chiang says. Though the team has demonstrated the basic electrochemical process in the lab, it will require more work to reach industrial scale.

First of all, the new approach could eliminate the use of fossil fuels for the heating process, substituting electricity generated from clean, renewable sources. “In many geographies renewable electricity is the lowest-cost electricity we have today, and its cost is still dropping,” Chiang says. In addition, the new process produces the same cement product. The team realized that trying to gain acceptance for a new type of cement — something that many research groups have pursued in different ways — would be an uphill battle, considering how widely used the material is around the world and how reluctant builders can be to try new, relatively untested materials.

The new process centers on the use of an electrolyzer, something that many people have encountered as part of high school chemistry classes, where a battery is hooked up to two electrodes in a glass of water, producing bubbles of oxygen from one electrode and bubbles of hydrogen from the other as the electricity splits the water molecules into their constituent atoms. Importantly, the electrolyzer’s oxygen-evolving electrode produces acid, while the hydrogen-evolving electrode produces a base.

In the new process, the pulverized limestone is dissolved in the acid at one electrode and high-purity carbon dioxide is released, while calcium hydroxide, commonly known as slaked lime, precipitates out as a solid at the other. The calcium hydroxide can then be processed in another step to produce the cement, which is mostly calcium silicate.
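
Written out as conventional textbook half-reactions consistent with the description above, the chemistry looks roughly like this; the equations are a sketch of the general scheme rather than a reproduction of the reactions as reported in the paper:

```latex
% Water electrolysis supplies protons at one electrode and hydroxide at the other;
% the limestone is consumed by the acid and calcium hydroxide precipitates in the base.
\begin{align*}
\text{oxygen-evolving (acidic) electrode:} \quad & 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{hydrogen-evolving (basic) electrode:} \quad & 4\,\mathrm{H_2O} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \\
\text{limestone dissolving in the acid:} \quad & \mathrm{CaCO_3} + 2\,\mathrm{H^+} \;\rightarrow\; \mathrm{Ca^{2+}} + \mathrm{H_2O} + \mathrm{CO_2}\!\uparrow \\
\text{precipitation in the base:} \quad & \mathrm{Ca^{2+}} + 2\,\mathrm{OH^-} \;\rightarrow\; \mathrm{Ca(OH)_2}\!\downarrow
\end{align*}
```

The net effect is that the carbonate’s carbon leaves as a concentrated carbon dioxide stream while the calcium ends up as calcium hydroxide, with hydrogen and oxygen as by-products.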

The carbon dioxide, in the form of a pure, concentrated stream, can then be easily sequestered, harnessed to produce value-added products such as a liquid fuel to replace gasoline, or used for applications such as oil recovery or even in carbonated beverages and dry ice. The result is that no carbon dioxide is released to the environment from the entire process, Chiang says. By contrast, the carbon dioxide emitted from conventional cement plants is highly contaminated with nitrogen oxides, sulfur oxides, carbon monoxide, and other materials that make it impractical to “scrub” to make the carbon dioxide usable.

Calculations show that the hydrogen and oxygen also emitted in the process could be recombined, for example in a fuel cell, or burned to produce enough energy to fuel the whole rest of the process, Ellis says, producing nothing but water vapor.
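
The underlying reaction is the familiar recombination of hydrogen and oxygen; its standard enthalpy (a textbook value quoted here for context, not a figure from the paper) is:

```latex
2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O},
\qquad \Delta H^{\circ} \approx -286\ \mathrm{kJ\ per\ mole\ of\ H_2}
```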


In their laboratory demonstration, the team carried out the key electrochemical steps required, producing lime from the calcium carbonate, but on a small scale. The process looks a bit like shaking a snow globe: it produces a flurry of suspended white particles inside the glass container as the lime precipitates out of the solution.

While the technology is simple and could, in principle, be easily scaled up, a typical cement plant today produces about 700,000 tons of the material per year. “How do you penetrate an industry like that and get a foot in the door?” asks Ellis, the paper’s lead author. One approach, she says, is to try to replace just one part of the process at a time, rather than the whole system at once, and “in a stepwise fashion” gradually add other parts.

The team proposed this initial system, Chiang says, “not because we necessarily think we have the exact strategy” for the best possible approach, “but to get people in the electrochemical sector to start thinking more about this” and to come up with new ideas. “It’s an important first step, but not yet a fully developed solution.”

The research was partly supported by the Skolkovo Institute of Science and Technology.

 

MIT: Study Furthers Radically New View of Gene Control


MIT researchers have developed a new model of gene control, in which the cellular machinery that transcribes DNA into RNA forms specialized droplets called condensates. Image: Steven H. Lee

Along the genome, proteins form liquid-like droplets that appear to boost the expression of particular genes.

In recent years, MIT scientists have developed a new model for how key genes are controlled that suggests the cellular machinery that transcribes DNA into RNA forms specialized droplets called condensates. These droplets occur only at certain sites on the genome, helping to determine which genes are expressed in different types of cells.

In a new study that supports that model, researchers at MIT and the Whitehead Institute for Biomedical Research have discovered physical interactions among these proteins, and between the proteins and DNA, that help explain why these droplets, which stimulate the transcription of nearby genes, tend to cluster along specific stretches of DNA known as super enhancers. These enhancer regions do not encode proteins but instead regulate other genes.

“This study provides a fundamentally important new approach to deciphering how the ‘dark matter’ in our genome functions in gene control,” says Richard Young, an MIT professor of biology and member of the Whitehead Institute.

Young is one of the senior authors of the paper, along with Phillip Sharp, an MIT Institute Professor and member of MIT’s Koch Institute for Integrative Cancer Research; and Arup K. Chakraborty, the Robert T. Haslam Professor in Chemical Engineering, a professor of physics and chemistry, and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MGH, MIT, and Harvard.

Graduate student Krishna Shrinivas and postdoc Benjamin Sabari are the lead authors of the paper, which appears in Molecular Cell on Aug. 8.

“A biochemical factory”

Every cell in an organism has an identical genome, but cells such as neurons or heart cells express different subsets of those genes, allowing them to carry out their specialized functions. Previous research has shown that many of these genes are located near super enhancers, which bind to proteins called transcription factors that stimulate the copying of nearby genes into RNA.

About three years ago, Sharp, Young, and Chakraborty joined forces to try to model the interactions that occur at enhancers.

In a 2017 Cell paper, based on computational studies, they hypothesized that in these regions, transcription factors form droplets called phase-separated condensates. Similar to droplets of oil suspended in salad dressing, these condensates are collections of molecules that form distinct cellular compartments but have no membrane separating them from the rest of the cell.

In a 2018 Science paper, the researchers showed that these dynamic droplets do form at super enhancer locations. Made of clusters of transcription factors and other molecules, these droplets attract enzymes such as RNA polymerases that are needed to copy DNA into messenger RNA, keeping gene transcription active at specific sites.

“We had demonstrated that the transcription machinery forms liquid-like droplets at certain regulatory regions on our genome; however, we didn’t fully understand how or why these dewdrops of biological molecules only seemed to condense around specific points on our genome,” Shrinivas says.

As one possible explanation for that site specificity, the research team hypothesized that weak interactions between intrinsically disordered regions of transcription factors and other transcriptional molecules, along with specific interactions between transcription factors and particular DNA elements, might determine whether a condensate forms at a particular stretch of DNA. Biologists have traditionally focused on “lock-and-key” style interactions between rigidly structured protein segments to explain most cellular processes, but more recent evidence suggests that weak interactions between floppy protein regions also play an important role in cell activities.

In this study, computational modeling and experimentation revealed that the cumulative effect of these weak interactions, together with transcription factor-DNA interactions, determines whether a condensate of transcription factors will form at a particular site on the genome. Different cell types produce different transcription factors, which bind to different enhancers. When many transcription factors cluster around the same enhancers, weak interactions between the proteins are more likely to occur. Once a critical threshold concentration is reached, condensates form.
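
To see in generic terms why a sharp threshold appears at all, it helps to look at a minimal regular-solution (Flory-Huggins-style) toy model of phase separation. The sketch below is purely illustrative and is not the authors’ simulation; the interaction parameter chi and the concentration used are arbitrary stand-ins for the strength and density of those weak interactions:

```python
# Toy regular-solution (Flory-Huggins-like) sketch of threshold behavior:
# a uniform mixture becomes unstable and separates into a dense "droplet"
# phase only once the effective attraction chi crosses a critical value.
# Illustrative only; this is not the model used in the study.

def mixing_free_energy_curvature(phi: float, chi: float) -> float:
    """Second derivative (in units of kT) of the mixing free energy
    f(phi) = phi*ln(phi) + (1-phi)*ln(1-phi) + chi*phi*(1-phi).
    Negative curvature means the well-mixed state is unstable."""
    return 1.0 / phi + 1.0 / (1.0 - phi) - 2.0 * chi

def droplet_forms(phi: float, chi: float) -> bool:
    """True if a uniform mixture at volume fraction phi phase-separates."""
    return mixing_free_energy_curvature(phi, chi) < 0

# For this symmetric toy model the critical value at phi = 0.5 is chi = 2:
# below it nothing happens, above it a condensed phase appears abruptly.
for chi in (1.5, 2.0, 2.5, 3.0):
    print(f"chi = {chi:.1f}: droplet at phi = 0.5? {droplet_forms(0.5, chi)}")
```

In this cartoon, clustering many transcription factors at the same enhancer plays the role of pushing the effective attraction past its critical value, which is why condensates appear only once the local concentration crosses a threshold.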

“Creating these local high concentrations within the crowded environment of the cell enables the right material to be in the right place at the right time to carry out the multiple steps required to activate a gene,” Sabari says. “Our current study begins to tease apart how certain regions of the genome are capable of pulling off this trick.”

These droplets form on a timescale of seconds to minutes, and they blink in and out of existence depending on a cell’s needs.

“It’s an on-demand biochemical factory that cells can form and dissolve, as and when they need it,” Chakraborty says. “When certain signals happen at the right locus on a gene, the condensates form, which concentrates all of the transcription molecules. Transcription happens, and when the cells are done with that task, they get rid of them.”

“A functional condensate has to be more than the sum of its parts, and how the protein and DNA components work together is something we don’t fully understand,” says Rohit Pappu, director of the Center for Science and Engineering of Living Systems at Washington University, who was not involved in the research. “This work gets us on the road to thinking about the interplay among protein-protein, protein-DNA, and possibly DNA-DNA interactions as determinants of the outputs of condensates.”

A new view

Weak cooperative interactions between proteins may also play an important role in evolution, the researchers proposed in a 2018 Proceedings of the National Academy of Sciences paper.

The sequences of intrinsically disordered regions of transcription factors need to change only a little to evolve new types of specific functionality. In contrast, evolving new specific functions via “lock-and-key” interactions requires much more significant changes.

“If you think about how biological systems have evolved, they have been able to respond to different conditions without creating new genes. We don’t have any more genes than a fruit fly, yet we’re much more complex in many of our functions,” Sharp says. “The incremental expanding and contracting of these intrinsically disordered domains could explain a large part of how that evolution happens.”

Similar condensates appear to play a variety of other roles in biological systems, offering a new way to look at how the interior of a cell is organized.

Instead of floating through the cytoplasm and randomly bumping into other molecules, proteins involved in processes such as relaying molecular signals may transiently form droplets that help them interact with the right partners.

“This is a very exciting turn in the field of cell biology,” Sharp says. “It is a whole new way of looking at biological systems that is richer and more meaningful.”

Some of the MIT researchers, led by Young, have helped form a company called Dewpoint Therapeutics to develop potential treatments for a wide variety of diseases by exploiting cellular condensates.

There is emerging evidence that cancer cells use condensates to control sets of genes that promote cancer, and condensates have also been linked to neurodegenerative disorders such as amyotrophic lateral sclerosis (ALS) and Huntington’s disease.

The research was funded by the National Science Foundation, the National Institutes of Health, and the Koch Institute Support (core) Grant from the National Cancer Institute.