MIT and Harvard Update: Physicists Create a New Form of Light That Could Drive the Quantum Computing Revolution


The discovery that photons can interact could be harnessed for quantum computing. PHOTO: CHRISTINE DANILOFF/MIT

For the first time, scientists have watched groups of three photons interacting and effectively producing a new form of light.

In results published in Science, researchers suggest that this new light could be used to perform highly complex, incredibly fast quantum computations.

Photons are tiny particles that normally travel solo through beams of light, never interacting with each other. But in 2013 scientists made them clump together in pairs, creating a new state of matter. This discovery shows that interactions are possible on a greater scale.

“It was an open question,” Vladan Vuletic from the Massachusetts Institute of Technology (MIT), who led the team with Mikhail Lukin from Harvard University, said in a statement. “Can you add more photons to a molecule to make bigger and bigger things?”

The scientists cooled a cloud of rubidium atoms to an ultralow temperature to answer their question, slowing the atoms until they were almost still. They then sent a very faint laser beam, carrying just a few photons at a time, through the ultracold cloud.

The photons came out the other side as pairs and triplets, rather than just as individuals.

Photons flit between atoms like bees among flowers.

The researchers think the particles might flit from one nearby atom to another as they pass through the rubidium cloud—like bees in a field of flowers.

These passing photons could form “polaritons”—part photon, part atom hybrids. If more than one photon passes by the same atom at the same time, they might form polaritons that are linked.

As they leave the atom, they could stay together as a pair, or even a triplet.

“What’s neat about this is, when photons go through the medium, anything that happens in the medium, they ‘remember’ when they get out,” said co-author Sergio Cantu from MIT.

This whole process takes about a millionth of a second.


The future of computing

This research is the latest step toward a long-fabled quantum computer, an ultra-powerful machine that could solve problems beyond the realm of traditional computers. Your desktop PC would, for example, struggle to solve the question: “If a salesman has lots of places to visit, what is the quickest route?”

“[A traditional computer] could solve this for a certain number of cities, but if I wanted to add more cities, it would get much harder, very quickly,” Vuletic previously stated in a press release.
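The blow-up Vuletic describes is easy to see in code. Below is a minimal brute-force traveling-salesman sketch; the five city coordinates are invented for illustration, and the route count printed at the end shows the factorial growth that quickly overwhelms a classical machine as cities are added.

```python
# Brute-force traveling salesman: illustrative only; the cities are made up.
from itertools import permutations
from math import dist, factorial

cities = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (8, 5)}

def tour_length(order):
    # Length of the closed loop visiting the cities in the given order.
    pts = [cities[name] for name in order]
    return sum(dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

best = min(permutations(cities), key=tour_length)
print("best route:", " -> ".join(best), f"(length {tour_length(best):.2f})")

# Distinct closed tours grow as (n - 1)!/2 -- the reason brute force fails quickly.
for n in (5, 10, 15, 20):
    print(f"{n} cities: {factorial(n - 1) // 2:,} tours to check")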


Light, he said, is already used to transmit data very quickly over long distances via fiber optic cables. Being able to manipulate these photons could enable the distribution of data in much more powerful ways.

The team is now aiming to coerce photons in ways beyond attraction. The next stop is repulsion, where photons slam into each other and scatter.

“It’s completely novel in the sense that we don’t even know sometimes qualitatively what to expect,” Vuletic says. “With repulsion of photons, can they be such that they form a regular pattern, like a crystal of light? Or will something else happen? It’s very uncharted territory.”


MIT launches the “MIT Intelligence Quest” (MIT IQ)



At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. Courtesy of MIT IQ

New Institute-wide initiative will advance human and machine intelligence research

MIT today announced the launch of the MIT Intelligence Quest, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.

The announcement was first made in a letter MIT President L. Rafael Reif sent to the Institute community.

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. (continued below)

Watch and Read About: Scott Zoldi, Director of Analytics at FICO, has published a report stating that “we are just at the beginning of the golden age of analytics, in which the value and contributions of artificial intelligence (AI), machine learning (ML), and deep learning can only continue to expand as we accept and incorporate those tools into our businesses.” According to his predictions, the development and use of these technologies will continue to expand and strengthen in 2018.

 

(Continued)

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

“Today we set out to answer two big questions,” says President Reif. “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”

MIT IQ: The Core and The Bridge

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

Along with developing and advancing the technologies of intelligence, MIT IQ researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT IQ is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT IQ will connect and amplify existing excellence across labs and centers already engaged in intelligence research. It will also establish shared, central spaces conducive to group work, and its resources will directly support research.

“Our quest is meant to power world-changing possibilities,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. Chandrakasan, in collaboration with Provost Martin Schmidt and all four of MIT’s other school deans, has led the development and establishment of MIT IQ.

“We imagine preventing deaths from cancer by using deep learning for early detection and personalized treatment,” Chandrakasan continues. “We imagine artificial intelligence in sync with, complementing, and assisting our own intelligence. And we imagine every scientist and engineer having access to human-intelligence-inspired algorithms that open new avenues of discovery in their fields. Researchers across our campus want to push the boundaries of what’s possible.”

Engaging energetically with partners

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab, which was announced in September 2017. MIT researchers will collaborate with each other and with industry on challenges that range in scale from the very broad to the very specific.

“In the short time since we began our collaboration with IBM, the lab has garnered tremendous interest inside and outside MIT, and it will be a vital part of MIT IQ,” says President Reif.

John E. Kelly III, IBM senior vice president for cognitive solutions and research, says, “To take on the world’s greatest challenges and seize its biggest opportunities, we need to rapidly advance both AI technology and our understanding of human intelligence. Building on decades of collaboration — including our extensive joint MIT–IBM Watson AI Lab — IBM and MIT will together shape a new agenda for intelligence research and its applications. We are proud to be a cornerstone of this expanded initiative.”

MIT will seek to establish additional entities within MIT IQ, in partnership with corporate and philanthropic organizations.

Why MIT

MIT has been on the frontier of intelligence research since the 1950s, when pioneers Marvin Minsky and John McCarthy helped establish the field of artificial intelligence.

MIT now has over 200 principal investigators whose research bears directly on intelligence. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Brain and Cognitive Sciences (BCS) — along with the McGovern Institute for Brain Research and the Picower Institute for Learning and Memory — collaborate on a range of projects. MIT is also home to the National Science Foundation–funded Center for Brains, Minds and Machines (CBMM) — the only national center of its kind.

Four years ago, MIT launched the Institute for Data, Systems, and Society (IDSS) with a mission of promoting data science, particularly in the context of social systems. It is anticipated that faculty and students from IDSS will play a critical role in this initiative.

Faculty from across the Institute will participate in the initiative, including researchers in the Media Lab, the Operations Research Center, the Sloan School of Management, the School of Architecture and Planning, and the School of Humanities, Arts, and Social Sciences.

“Our quest will amount to a journey taken together by all five schools at MIT,” says Provost Schmidt. “Success will rest on a shared sense of purpose and a mix of contributions from a wide variety of disciplines. I’m excited by the new thinking we can help unlock.”

At the heart of MIT IQ will be collaboration among researchers in human and artificial intelligence.

“To revolutionize the field of artificial intelligence, we should continue to look to the roots of intelligence: the brain,” says James DiCarlo, department head and Peter de Florez Professor of Neuroscience in the Department of Brain and Cognitive Sciences. “By working with engineers and artificial intelligence researchers, human intelligence researchers can build models of the brain systems that produce intelligent behavior. The time is now, as model building at the scale of those brain systems is now possible. Discovering how the brain works in the language of engineers will not only lead to transformative AI — it will also illuminate entirely new ways to repair, educate, and augment our own minds.”

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and director of CSAIL, agrees. MIT researchers, she says, “have contributed pioneering and visionary solutions for intelligence since the beginning of the field, and are excited to make big leaps to understand human intelligence and to engineer significantly more capable intelligent machines. Understanding intelligence will give us the knowledge to understand ourselves and to create machines that will support us with cognitive and physical work.”

David Siegel, who earned a PhD in computer science at MIT in 1991 pursuing research at MIT’s Artificial Intelligence Laboratory, and who is a member of the MIT Corporation and an advisor to the MIT Center for Brains, Minds, and Machines, has been integral to the vision and formation of MIT IQ and will continue to help shape the effort. “Understanding human intelligence is one of the greatest scientific challenges,” he says, “one that helps us understand who we are while meaningfully advancing the field of artificial intelligence.” Siegel is co-chairman and a founder of Two Sigma Investments, LP.

The fruits of research

MIT IQ will thus provide a platform for long-term research, encouraging the foundational advances of the future. At the same time, MIT professors and researchers may develop technologies with near-term value, leading to new kinds of collaborations with existing companies — and to new companies.

Some such entrepreneurial efforts could be supported by The Engine, an Institute initiative launched in October 2016 to support startup companies pursuing particularly ambitious goals.

Other innovations stemming from MIT IQ could be absorbed into the innovation ecosystem surrounding the Institute — in Kendall Square, Cambridge, and the Boston metropolitan area. MIT is located in close proximity to a world-leading nexus of biotechnology and medical-device research and development, as well as a cluster of leading-edge technology firms that study and deploy machine intelligence.

MIT also has roots in centers of innovation elsewhere in the United States and around the world, through faculty research projects, institutional and industry collaborations, and the activities and leadership of its alumni. MIT IQ will seek to connect to innovative companies and individuals who share MIT’s passion for work in intelligence.

Eric Schmidt, former executive chairman of Alphabet, has helped MIT form the vision for MIT IQ. “Imagine the good that can be done by putting novel machine-learning tools in the hands of those who can make great use of them,” he says. “MIT IQ can become a fount of exciting new capabilities.”

“I am thrilled by today’s news,” says President Reif. “Drawing on MIT’s deep strengths and signature values, culture, and history, MIT IQ promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.”

“MIT is placing a bet,” he says, “on the central importance of intelligence research to meeting the needs of humanity.”

MIT: Optimizing carbon nanotube electrodes for energy storage and water desalination applications


Evelyn Wang (left) and Heena Mutha have developed a nondestructive method of quantifying the detailed characteristics of carbon nanotube (CNT) samples — a valuable tool for optimizing these materials for use as electrodes in a variety of practical devices. Photo: Stuart Darsch

New model measures characteristics of carbon nanotube structures for energy storage and water desalination applications.

Using electrodes made of carbon nanotubes (CNTs) can significantly improve the performance of devices ranging from capacitors and batteries to water desalination systems. But figuring out the physical characteristics of vertically aligned CNT arrays that yield the most benefit has been difficult.

Now an MIT team has developed a method that can help. By combining simple benchtop experiments with a model describing porous materials, the researchers have found they can quantify the morphology of a CNT sample, without destroying it in the process.

In a series of tests, the researchers confirmed that their adapted model can reproduce key measurements taken on CNT samples under varying conditions. They’re now using their approach to determine detailed parameters of their samples — including the spacing between the nanotubes — and to optimize the design of CNT electrodes for a device that rapidly desalinates brackish water.

A common challenge in developing energy storage devices and desalination systems is finding a way to transfer electrically charged particles onto a surface and store them there temporarily. In a capacitor, for example, ions in an electrolyte must be deposited as the device is being charged and later released when electricity is being delivered. During desalination, dissolved salt must be captured and held until the cleaned water has been withdrawn.

One way to achieve those goals is by immersing electrodes into the electrolyte or the saltwater and then imposing a voltage on the system. The electric field that’s created causes the charged particles to cling to the electrode surfaces. When the voltage is cut, the particles immediately let go.

“Whether salt or other charged particles, it’s all about adsorption and desorption,” says Heena Mutha PhD ’17, a senior member of technical staff at the Charles Stark Draper Laboratory. “So the electrodes in your device should have lots of surface area as well as open pathways that allow the electrolyte or saltwater carrying the particles to travel in and out easily.”

One way to increase the surface area is by using CNTs. In a conventional porous material, such as activated charcoal, interior pores provide extensive surface area, but they’re irregular in size and shape, so accessing them can be difficult. In contrast, a CNT “forest” is made up of aligned pillars that provide the needed surfaces and straight pathways, so the electrolyte or saltwater can easily reach them.

However, optimizing the design of CNT electrodes for use in devices has proven tricky. Experimental evidence suggests that the morphology of the material — in particular, how the CNTs are spaced out — has a direct impact on device performance. Increasing the carbon concentration when fabricating CNT electrodes produces a more tightly packed forest and more abundant surface area. But at a certain density, performance starts to decline, perhaps because the pillars are too close together for the electrolyte or saltwater to pass through easily.

Designing for device performance


“Much work has been devoted to determining how CNT morphology affects electrode performance in various applications,” says Evelyn Wang, the Gail E. Kendall Professor of Mechanical Engineering. “But an underlying question is, ‘How can we characterize these promising electrode materials in a quantitative way, so as to investigate the role played by such details as the nanometer-scale interspacing?'”

Inspecting a cut edge of a sample can be done using a scanning electron microscope (SEM). But quantifying features, such as spacing, is difficult, time-consuming, and not very precise. Analyzing data from gas adsorption experiments works well for some porous materials, but not for CNT forests. Moreover, such methods destroy the material being tested, so samples whose morphologies have been characterized can’t be used in tests of overall device performance.

For the past two years, Wang and Mutha have been working on a better option. “We wanted to develop a nondestructive method that combines simple electrochemical experiments with a mathematical model that would let us ‘back calculate’ the interspacing in a CNT forest,” Mutha says. “Then we could estimate the porosity of the CNT forest — without destroying it.”

Adapting the conventional model

One widely used method for studying porous electrodes is electrochemical impedance spectroscopy (EIS). It involves pulsing voltage across electrodes in an electrochemical cell at a set time interval (frequency) while monitoring “impedance,” a measure that depends on the available storage space and resistance to flow. The set of impedance measurements taken at different frequencies is called the “frequency response.”

The classic model describing porous media uses that frequency response to calculate how much open space there is in a porous material. “So we should be able to use [the model] to calculate the space between the carbon nanotubes in a CNT electrode,” Mutha says.

But there’s a problem: This model assumes that all pores are uniform, cylindrical voids. But that description doesn’t fit electrodes made of CNTs. Mutha modified the model to more accurately define the pores in CNT materials as the void spaces surrounding solid pillars. While others have similarly altered the classic model, Mutha took her alterations a step further. The nanotubes in a CNT material are unlikely to be packed uniformly, so she added to her equations the ability to account for variations in the spacing between the nanotubes. With this modified model, Mutha could analyze EIS data from real samples to calculate CNT spacings.
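For readers who want to see the shape of the underlying math, here is a minimal sketch of the classic porous-electrode (transmission-line) impedance that analyses of this kind start from; the parameter values are illustrative assumptions, not the numbers fitted in this work.

```python
# Classic de Levie transmission-line impedance for a uniform pore (sketch).
import numpy as np

def pore_impedance(freq_hz, r_ion, c_dl, depth):
    """r_ion: ionic resistance per unit length [ohm/m]
    c_dl : double-layer capacitance per unit length [F/m]
    depth: electrode (pore) depth [m]"""
    omega = 2 * np.pi * freq_hz
    gamma = np.sqrt(1j * omega * r_ion * c_dl)          # propagation constant
    return np.sqrt(r_ion / (1j * omega * c_dl)) / np.tanh(gamma * depth)

freqs = np.logspace(5, -2, 200)                          # 100 kHz down to 10 mHz
Z = pore_impedance(freqs, r_ion=1e6, c_dl=1e-3, depth=100e-6)

# At high frequency the response comes only from the pore mouth (the ~45-degree
# region of a Nyquist plot); at low frequency the whole depth charges and the
# impedance turns nearly vertical, as described in the measurements below.
print(Z[0], Z[-1])
```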

Using the model

To demonstrate her approach, Mutha first fabricated a series of laboratory samples and then measured their frequency response. In collaboration with Yuan “Jenny” Lu ’15, a materials science and engineering graduate, she deposited thin layers of aligned CNTs onto silicon wafers inside a furnace and then used water vapor to separate the CNTs from the silicon, producing free-standing forests of nanotubes. To vary the CNT spacing, she used a technique developed by MIT collaborators in the Department of Aeronautics and Astronautics, Professor Brian Wardle and postdoc associate Itai Stein PhD ’16. Using a custom plastic device, she mechanically squeezed her samples from four sides, thereby packing the nanotubes together more tightly and increasing the volume fraction — that is, the fraction of the total volume occupied by the solid CNTs.

To test the frequency response of the samples, she used a glass beaker containing three electrodes immersed in an electrolyte. One electrode is the CNT-coated sample, while the other two are used to monitor the voltage and to absorb and measure the current. Using that setup, she first measured the capacitance of each sample, meaning how much charge it could store in each square centimeter of surface area at a given constant voltage. She then ran EIS tests on the samples and analyzed results using her modified porous media model.

Results for the three volume fractions tested show the same trends. As the voltage pulses become less frequent, the curves initially rise at about a 45 degree slope. But at some point, each one shifts toward vertical, with resistance becoming constant and impedance continuing to rise.

As Mutha explains, those trends are typical of EIS analyses. “At high frequencies, the voltage changes so quickly that — because of resistance in the CNT forest — it doesn’t penetrate the depth of the entire electrode material, so the response comes only from the surface or partway in,” she says. “But eventually the frequency is low enough that there’s time between pulses for the voltage to penetrate and for the whole sample to respond.”

Resistance is no longer a noticeable factor, so the line becomes vertical, with the capacitance component causing impedance to rise as more charged particles attach to the CNTs. That switch to vertical occurs earlier with the lower-volume-fraction samples. In sparser forests, the spaces are larger, so the resistance is lower.

The most striking feature of Mutha’s results is the gradual transition from the high-frequency to the low-frequency regime. Calculations from a model based on uniform spacing — the usual assumption — show a sharp transition from partial to complete electrode response. Because Mutha’s model incorporates subtle variations in spacing, the transition is gradual rather than abrupt. Her experimental measurements and model results both exhibit that behavior, suggesting that the modified model is more accurate.
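A rough way to see why a spread of spacings smooths the transition is to average the single-pore response over a distribution, as in the hedged sketch below; the log-normal spread and the assumed scaling of ionic resistance with gap size are illustrative choices, not the authors’ fitted model.

```python
# Averaging pore responses over a distribution of spacings (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
freqs = np.logspace(5, -2, 200)
omega = 2 * np.pi * freqs

def pore_admittance(omega, r_ion, c_dl, depth=100e-6):
    gamma = np.sqrt(1j * omega * r_ion * c_dl)
    z = np.sqrt(r_ion / (1j * omega * c_dl)) / np.tanh(gamma * depth)
    return 1.0 / z

# Sample a spread of inter-CNT gaps; assume narrower gaps mean higher ionic resistance.
gaps = rng.lognormal(mean=np.log(30e-9), sigma=0.4, size=500)        # metres
total_admittance = sum(pore_admittance(omega, 1e6 * (30e-9 / g), 1e-3) for g in gaps)
Z_distribution = 1.0 / total_admittance

# A single uniform gap, for comparison: its Nyquist plot shows a sharp knee,
# while the distributed version bends gradually from ~45 degrees toward vertical.
Z_uniform = 1.0 / pore_admittance(omega, 1e6, 1e-3)
print(abs(Z_distribution[0]), abs(Z_uniform[0]))
```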

By combining their impedance spectroscopy results with their model, the MIT researchers inferred the CNT interspacing in their samples. Since the forest packing geometry is unknown, they performed the analyses based on three- and six-pillar configurations to establish upper and lower bounds. Their calculations showed that spacing can range from 100 nanometers in sparse forests to below 10 nanometers in densely packed forests.

Comparing approaches

Work in collaboration with Wardle and Stein has validated the two groups’ differing approaches to determining CNT morphology. In their studies, Wardle and Stein use an approach similar to Monte Carlo modeling, which is a statistical technique that involves simulating the behavior of an uncertain system thousands of times under varying assumptions to produce a range of plausible outcomes, some more likely than others. For this application, they assumed a random distribution of “seeds” for carbon nanotubes, simulated their growth, and then calculated characteristics, such as inter-CNT spacing with an associated variability. Along with other factors, they assigned some degree of waviness to the individual CNTs to test the impact on the calculated spacing.
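The flavor of that simulation approach can be captured in a few lines: scatter random CNT “seeds” in a small patch at a chosen volume fraction and look at the nearest-neighbour gaps. The tube diameter, the patch size, and the idealization that seeds may overlap are all assumptions made here for illustration, and no waviness is included.

```python
# Toy Monte Carlo estimate of inter-CNT spacing versus volume fraction.
import numpy as np

rng = np.random.default_rng(1)
D_CNT = 8e-9        # assumed CNT outer diameter [m]
PATCH = 300e-9      # side of the square patch considered [m]

def mean_gap(volume_fraction, trials=5):
    # Number of tube cross-sections needed to fill the requested area fraction.
    n = max(2, int(volume_fraction * PATCH**2 / (np.pi * (D_CNT / 2) ** 2)))
    gaps = []
    for _ in range(trials):
        pts = rng.uniform(0.0, PATCH, size=(n, 2))
        d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)
        nearest = np.sqrt(d2.min(axis=1))        # centre-to-centre distances
        gaps.append(nearest.mean() - D_CNT)      # rough wall-to-wall gap
    return float(np.mean(gaps))

for vf in (0.01, 0.05, 0.10, 0.26):
    print(f"volume fraction {vf:.2f}: mean gap ≈ {mean_gap(vf) * 1e9:5.1f} nm")
```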

To compare their approaches, the two MIT teams performed parallel analyses that determined average spacing at increasing volume fractions. The trends they exhibited matched well, with spacing decreasing as volume fraction increases. However, at a volume fraction of about 26 percent, the EIS spacing estimates suddenly go up — an outcome that Mutha believes may reflect packing irregularities caused by buckling of the CNTs as she was densifying them.

To investigate the role played by waviness, Mutha compared the variabilities in her results with those in Stein’s results from simulations assuming different degrees of waviness. At high volume fractions, the EIS variabilities were closest to those from the simulations assuming little or no waviness. But at low volume fractions, the closest match came from simulations assuming high waviness.

Based on those findings, Mutha concludes that waviness should be considered when performing EIS analyses — at least in some cases. “To accurately predict the performance of devices with sparse CNT electrodes, we may need to model the electrode as having a broad distribution of interspacings due to the waviness of the CNTs,” she says. “At higher volume fractions, waviness effects may be negligible, and the system can be modeled as simple pillars.”

The researchers’ nondestructive yet quantitative technique provides device designers with a valuable new tool for optimizing the morphology of porous electrodes for a wide range of applications. Already, Mutha and Wang have been using it to predict the performance of supercapacitors and desalination systems. Recent work has focused on designing a high-performance, portable device for the rapid desalination of brackish water. Results to date show that using their approach to optimize the design of CNT electrodes and the overall device simultaneously can as much as double the salt adsorption capacity of the system, while speeding up the rate at which clean water is produced.

This research was supported in part by the MIT Energy Initiative Seed Fund Program and by the King Fahd University of Petroleum and Minerals (KFUPM) in Dhahran, Saudi Arabia, through the Center for Clean Water and Clean Energy at MIT and KFUPM. Mutha’s work was supported by a National Science Foundation Graduate Research Fellowship and Stein’s work by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship Program.

MIT: A new approach to rechargeable batteries – metal-mesh membrane could solve longstanding problems – lead to inexpensive power storage


A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. Illustration modified from an original image by Felice Frankel

New metal-mesh membrane could solve longstanding problems and lead to inexpensive power storage.

A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. The battery, based on electrodes made of sodium and nickel chloride and using a new type of metal mesh membrane, could be used for grid-scale installations to make intermittent power sources such as wind and solar capable of delivering reliable baseload electricity.

The findings are being reported today in the journal Nature Energy, by a team led by MIT professor Donald Sadoway, postdocs Huayi Yin and Brice Chung, and four others.

Although the basic battery chemistry the team used, based on a liquid sodium electrode material, was first described in 1968, the concept never caught on as a practical approach because of one significant drawback: It required the use of a thin membrane to separate its molten components, and the only known material with the needed properties for that membrane was a brittle and fragile ceramic. These paper-thin membranes made the batteries too easily damaged in real-world operating conditions, so apart from a few specialized industrial applications, the system has never been widely implemented.

But Sadoway and his team took a different approach, realizing that the functions of that membrane could instead be performed by a specially coated metal mesh, a much stronger and more flexible material that could stand up to the rigors of use in industrial-scale storage systems.

“I consider this a breakthrough,” Sadoway says, because for the first time in five decades, this type of battery — whose advantages include cheap, abundant raw materials, very safe operational characteristics, and an ability to go through many charge-discharge cycles without degradation — could finally become practical.

While some companies have continued to make liquid-sodium batteries for specialized uses, “the cost was kept high because of the fragility of the ceramic membranes,” says Sadoway, the John F. Elliott Professor of Materials Chemistry. “Nobody’s really been able to make that process work,” including GE, which spent nearly 10 years working on the technology before abandoning the project.

As Sadoway and his team explored various options for the different components in a molten-metal-based battery, they were surprised by the results of one of their tests using lead compounds. “We opened the cell and found droplets” inside the test chamber, which “would have to have been droplets of molten lead,” he says. But instead of acting as a membrane, as expected, the compound material “was acting as an electrode,” actively taking part in the battery’s electrochemical reaction.

“That really opened our eyes to a completely different technology,” he says. The membrane had performed its role — selectively allowing certain molecules to pass through while blocking others — in an entirely different way, using its electrical properties rather than the typical mechanical sorting based on the sizes of pores in the material.

In the end, after experimenting with various compounds, the team found that an ordinary steel mesh coated with a solution of titanium nitride could perform all the functions of the previously used ceramic membranes, but without the brittleness and fragility. The results could make possible a whole family of inexpensive and durable materials practical for large-scale rechargeable batteries.

The use of the new type of membrane can be applied to a wide variety of molten-electrode battery chemistries, he says, and opens up new avenues for battery design. “The fact that you can build a sodium-sulfur type of battery, or a sodium/nickel-chloride type of battery, without resorting to the use of fragile, brittle ceramic — that changes everything,” he says.

The work could lead to inexpensive batteries large enough to make intermittent, renewable power sources practical for grid-scale storage, and the same underlying technology could have other applications as well, such as for some kinds of metal production, Sadoway says.

Sadoway cautions that such batteries would not be suitable for some major uses, such as cars or phones. Their strong point is in large, fixed installations where cost is paramount, but size and weight are not, such as utility-scale load leveling. In those applications, inexpensive battery technology could potentially enable a much greater percentage of intermittent renewable energy sources to take the place of baseload, always-available power sources, which are now dominated by fossil fuels.

The research team included Fei Chen, a visiting scientist from Wuhan University of Technology; Nobuyuki Tanaka, a visiting scientist from the Japan Atomic Energy Agency; MIT research scientist Takanari Ouchi; and postdocs Huayi Yin, Brice Chung, and Ji Zhao. The work was supported by the French oil company Total S.A. through the MIT Energy Initiative.

MIT: Novel methods of synthesizing quantum dot materials – promising materials for high performance in electronic and optical devices


These images show scanning electron micrographs of the researchers’ sample quantum dot films. The dark spots are the individual quantum dots, each about 5 nanometers in diameter.

For quantum dot (QD) materials to perform well in devices such as solar cells, the nanoscale crystals in them need to pack together tightly so that electrons can hop easily from one dot to the next and flow out as current. MIT researchers have now made QD films in which the dots vary by just one atom in diameter and are organized into solid lattices with unprecedented order. Subsequent processing pulls the QDs in the film closer together, further easing the electrons’ pathway. Tests using an ultrafast laser confirm that the energy levels of vacancies in adjacent QDs are so similar that hopping electrons don’t get stuck in low-energy dots along the way.

Taken together, the results suggest a new direction for ongoing efforts to develop these promising materials for high performance in electronic and optical devices.

In recent decades, much research attention has focused on electronic materials made of quantum dots, which are tiny crystals of semiconducting materials a few nanometers in diameter. After three decades of research, QDs are now being used in TV displays, where they emit bright light in vivid colors that can be fine-tuned by changing the sizes of the nanoparticles. But many opportunities remain for taking advantage of these remarkable materials.

“QDs are a really promising underlying materials technology for energy applications,” says William Tisdale, the ARCO Career Development Professor in Energy Studies and an associate professor of chemical engineering.

QD materials pique his interest for several reasons. QDs are easily synthesized in a solvent at low temperatures using standard procedures. The QD-bearing solvent can then be deposited on a surface—small or large, rigid or flexible—and as it dries, the QDs are left behind as a solid. Best of all, the electronic and optical properties of that solid can be controlled by tuning the QDs.

“With QDs, you have all these degrees of freedom,” says Tisdale. “You can change their composition, size, shape, and surface chemistry to fabricate a material that’s tailored for your application.”

The ability to adjust electron behavior to suit specific devices is of particular interest. For example, in solar photovoltaics (PVs), electrons should pick up energy from sunlight and then move rapidly through the material and out as current before they lose their excess energy. In light-emitting diodes (LEDs), high-energy “excited” electrons should relax on cue, emitting their extra energy as light.

With thermoelectric (TE) devices, QD materials could be a game-changer. When TE materials are hotter on one side than the other, they generate electricity. So TE devices could turn waste heat in car engines, industrial equipment, and other sources into power—without combustion or moving parts. The TE effect has been known for a century, but devices using TE materials have remained inefficient. The problem: While those materials conduct electricity well, they also conduct heat well, so the temperatures of the two ends of a device quickly equalize. In most materials, measures to decrease heat flow also decrease electron flow.

“With QDs, we can control those two properties separately,” says Tisdale. “So we can simultaneously engineer our material so it’s good at transferring electrical charge but bad at transporting heat.”
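The trade-off Tisdale describes is summarized by the thermoelectric figure of merit, ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the temperature. The quick sketch below uses assumed, illustration-only property values to show how cutting heat transport while preserving charge transport raises ZT.

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa (illustrative numbers).
def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temperature_k=300.0):
    return seebeck_v_per_k**2 * sigma_s_per_m * temperature_k / kappa_w_per_mk

# Hold the electrical properties fixed and lower only the thermal conductivity.
for kappa in (2.0, 1.0, 0.5):          # W/(m*K), assumed values
    zt = figure_of_merit(200e-6, 5e4, kappa)
    print(f"kappa = {kappa:>3} W/(m*K) -> ZT ≈ {zt:.2f}")
```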

Making good arrays

One challenge in working with QDs has been to make particles that are all the same size and shape. During QD synthesis, quadrillions of nanocrystals are deposited onto a surface, where they self-assemble in an orderly fashion as they dry. If the individual QDs aren’t all exactly the same, they can’t pack together tightly, and electrons won’t move easily from one nanocrystal to the next.

Three years ago, a team in Tisdale’s lab led by Mark Weidman Ph.D. ’16 demonstrated a way to reduce that structural disorder. In a series of experiments with lead-sulfide QDs, team members found that carefully selecting the ratio between the lead and sulfur in the starting materials would produce QDs of uniform size.

“As those nanocrystals dry, they self-assemble into a beautifully ordered arrangement we call a superlattice,” Tisdale says.

As shown in these schematics, at the center of a quantum dot is a core of a semiconducting material. Radiating outward from that core are arms, or ligands, of an organic material. The ligands keep the quantum dots in solution from sticking together.

Scanning electron microscope images of those superlattices taken from several angles show lined-up, 5-nanometer-diameter nanocrystals throughout the samples and confirm the long-range ordering of the QDs.

For a closer examination of their materials, Weidman performed a series of X-ray scattering experiments at the National Synchrotron Light Source at Brookhaven National Laboratory. Data from those experiments showed both how the QDs are positioned relative to one another and how they’re oriented, that is, whether they’re all facing the same way. The results confirmed that QDs in the superlattices are well ordered and essentially all the same.

“On average, the difference in diameter between one nanocrystal and another was less than the size of one more atom added to the surface,” says Tisdale. “So these QDs have unprecedented monodispersity, and they exhibit structural behavior that we hadn’t seen previously because no one could make QDs this monodisperse.”

Controlling electron hopping

The researchers next focused on how to tailor their monodisperse QD materials for efficient transfer of electrical current. “In a PV or TE device made of QDs, the electrons need to be able to hop effortlessly from one dot to the next and then do that many thousands of times as they make their way to the metal electrode,” Tisdale explains.

One way to influence hopping is by controlling the spacing from one QD to the next. A single QD consists of a core of semiconducting material—in this work, lead sulfide—with chemically bound arms, or ligands, made of organic (carbon-containing) molecules radiating outward. The ligands play a critical role—without them, as the QDs form in solution, they’d stick together and drop out as a solid clump. Once the QD layer is dry, the ligands end up as solid spacers that determine how far apart the nanocrystals are.

A standard ligand material used in QD synthesis is oleic acid. Given the length of an oleic acid ligand, the QDs in the dry superlattice end up about 2.6 nanometers apart—and that’s a problem.

“That may sound like a small distance, but it’s not,” says Tisdale. “It’s way too big for a hopping electron to get across.”

Using shorter ligands in the starting solution would reduce that distance, but they wouldn’t keep the QDs from sticking together when they’re in solution. “So we needed to swap out the long oleic acid ligands in our solid materials for something shorter” after the film formed, Tisdale says.

To achieve that replacement, the researchers use a process called ligand exchange. First, they prepare a mixture of a shorter ligand and an organic solvent that will dissolve oleic acid but not the lead sulfide QDs. They then submerge the QD film in that mixture for 24 hours. During that time, the oleic acid ligands dissolve, and the new, shorter ligands take their place, pulling the QDs closer together. The solvent and oleic acid are then rinsed off.

Tests with various ligands confirmed their impact on interparticle spacing. Depending on the length of the selected ligand, the researchers could reduce that spacing from the original 2.6 nanometers with oleic acid all the way down to 0.4 nanometers. However, while the resulting films have beautifully ordered regions—perfect for fundamental studies—inserting the shorter ligands tends to generate cracks as the overall volume of the QD sample shrinks.
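Why does shaving a couple of nanometers off the gap matter so much? In the simplest tunneling picture, the dot-to-dot coupling falls off exponentially with the gap, so the hopping rate scales roughly as exp(−βd). The decay constant used below is a typical order-of-magnitude value for organic barriers, assumed purely for illustration; it is not a number from this study.

```python
# Exponential sensitivity of tunneling-mediated hopping to interparticle gap (sketch).
import math

BETA_PER_NM = 10.0   # assumed decay constant, roughly 1 per angstrom

def relative_rate(gap_nm, reference_gap_nm=2.6):
    # Hopping rate relative to the as-deposited oleic-acid spacing.
    return math.exp(-BETA_PER_NM * (gap_nm - reference_gap_nm))

for gap in (2.6, 1.0, 0.4):
    print(f"gap {gap:>3} nm -> rate ≈ {relative_rate(gap):.1e} x the oleic-acid case")
```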

Energetic alignment of nanocrystals

One result of that work came as a surprise: Ligands known to yield high performance in lead-sulfide-based solar cells didn’t produce the shortest interparticle spacing in their tests.

These graphs show electron energy measurements in a standard quantum dot film (top) and in a film made from monodisperse quantum dots (bottom).

“Reducing that spacing to get good conductivity is necessary,” says Tisdale. “But there may be other aspects of our QD material that we need to optimize to facilitate electron transfer.”

One possibility is a mismatch between the energy levels of the electrons in adjacent QDs. In any material, electrons exist at only two energy levels—a low ground state and a high excited state. If an electron in a QD film receives extra energy—say, from incoming sunlight—it can jump up to its excited state and move through the material until it finds a low-energy opening left behind by another traveling electron. It then drops down to its ground state, releasing its excess energy as heat or light.

In solid crystals, those two energy levels are a fixed characteristic of the material itself. But in QDs, they vary with particle size. Make a QD smaller and the energy level of its excited electrons increases. Again, variability in QD size can create problems. Once excited, a high-energy electron in a small QD will hop from dot to dot—until it comes to a large, low-energy QD.
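The size dependence described here is often estimated with the textbook effective-mass (Brus) expression, E(R) ≈ Eg,bulk + (ħ²π²/2R²)(1/mₑ + 1/mₕ) − 1.8e²/(4πεε₀R). The sketch below uses generic, assumed material parameters (not the lead sulfide dots studied here, for which this simple formula is only a qualitative guide) just to show the trend: smaller dot, larger gap.

```python
# Brus effective-mass estimate of quantum-dot band gap versus size (trend only).
import numpy as np

HBAR = 1.054571817e-34    # J*s
M0   = 9.1093837015e-31   # kg (free-electron mass)
Q    = 1.602176634e-19    # C
EPS0 = 8.8541878128e-12   # F/m

def brus_gap_ev(radius_m, eg_bulk_ev=1.74, me=0.13, mh=0.45, eps_r=10.0):
    # Assumed generic II-VI-like parameters, for illustration only.
    confinement = (HBAR**2 * np.pi**2 / (2 * radius_m**2)) * (1 / (me * M0) + 1 / (mh * M0))
    coulomb = 1.786 * Q**2 / (4 * np.pi * EPS0 * eps_r * radius_m)
    return eg_bulk_ev + (confinement - coulomb) / Q

for diameter_nm in (3.0, 5.0, 8.0):
    gap = brus_gap_ev(diameter_nm * 1e-9 / 2)
    print(f"{diameter_nm:.0f} nm dot -> estimated gap ≈ {gap:.2f} eV")
```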

“Excited electrons like going downhill more than they like going uphill, so they tend to hang out on the low-energy dots,” says Tisdale. “If there’s then a high-energy dot in the way, it takes them a long time to get past that bottleneck.”

So the greater mismatch between energy levels—called energetic disorder—the worse the electron mobility. To measure the impact of energetic disorder on electron flow in their samples, Rachel Gilmore Ph.D. ’17 and her collaborators used a technique called pump-probe spectroscopy—as far as they know, the first time this method has been used to study electron hopping in QDs.

QDs in an excited state absorb light differently than do those in the ground state, so shining light through a material and taking an absorption spectrum provides a measure of the electronic states in it. But in QD materials, electron hopping events can occur within picoseconds—10⁻¹² of a second—which is faster than any electrical detector can measure.

The researchers therefore set up a special experiment using an ultrafast laser, whose beam is made up of quick pulses occurring at 100,000 per second. Their setup subdivides the laser beam such that a single pulse is split into a pump pulse that excites a sample and—after a delay measured in femtoseconds (10⁻¹⁵ seconds)—a corresponding probe pulse that measures the sample’s energy state after the delay. By gradually increasing the delay between the pump and probe pulses, they gather absorption spectra that show how much electron transfer has occurred and how quickly the excited electrons drop back to their ground state.
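The femtosecond delays themselves are set geometrically: the probe simply travels a slightly longer path, and the extra path length divided by the speed of light is the delay. The small conversion sketch below assumes a retroreflecting delay stage (so the optical path changes by twice the stage travel); that detail is an assumption, not a description of the group’s instrument.

```python
# Converting delay-stage travel to pump-probe time delay (unit-conversion sketch).
C = 299_792_458.0   # speed of light [m/s]

def delay_seconds(stage_travel_m, passes=2):
    # passes=2 models a retroreflector: the beam goes out and back.
    return passes * stage_travel_m / C

for travel_um in (0.15, 1.5, 15.0, 150.0):
    dt = delay_seconds(travel_um * 1e-6)
    print(f"stage moved {travel_um:>6.2f} um -> delay ≈ {dt * 1e15:7.1f} fs")
```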

Using this technique, they measured electron energy in a QD sample with standard dot-to-dot variability and in one of the monodisperse samples. In the sample with standard variability, the excited electrons lose much of their excess energy within 3 nanoseconds. In the monodisperse sample, little energy is lost in the same time period—an indication that the energy levels of the QDs are all about the same.

By combining their spectroscopy results with computer simulations of the electron transport process, the researchers extracted electron hopping times ranging from 80 picoseconds for their smallest quantum dots to over 1 nanosecond for the largest ones. And they concluded that their QD materials are at the theoretical limit of how little energetic disorder is possible. Indeed, any difference in energy between neighboring QDs isn’t a problem. At room temperature, energy levels are always vibrating a bit, and those fluctuations are larger than the small differences from one QD to the next.
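To give a feel for how a time constant is pulled out of such data, here is a hedged sketch that fits a single-exponential decay to a synthetic transient; the data are generated on the spot, and the actual analysis combined the spectra with transport simulations rather than a one-line fit.

```python
# Fitting a single-exponential decay to a synthetic transient-absorption trace.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t_ps = np.linspace(0.0, 1000.0, 400)                 # delay times [ps]
true_tau = 250.0                                     # assumed decay constant [ps]
signal = np.exp(-t_ps / true_tau) + 0.02 * rng.normal(size=t_ps.size)

def single_exp(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

popt, _ = curve_fit(single_exp, t_ps, signal, p0=(1.0, 100.0, 0.0))
print(f"fitted decay constant ≈ {popt[1]:.0f} ps (true value {true_tau:.0f} ps)")
```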

“So at some instant, random kicks in energy from the environment will cause the energy levels of the QDs to line up, and the electron will do a quick hop,” says Tisdale.

The way forward

With energetic disorder no longer a concern, Tisdale concludes that further progress in making commercially viable QD devices will require better ways of dealing with structural disorder. He and his team tested several methods of performing ligand exchange in solid samples, and none produced films with consistent QD size and spacing over large areas without cracks. As a result, he now believes that efforts to optimize that process “may not take us where we need to go.”

What’s needed instead is a way to put short ligands on the QDs when they’re in solution and then let them self-assemble into the desired structure.

“There are some emerging strategies for solution-phase ligand exchange,” he says. “If they’re successfully developed and combined with monodisperse QDs, we should be able to produce beautifully ordered, large-area structures well suited for devices such as solar cells, LEDs, and thermoelectric systems.”


More information: Rachel H. Gilmore et al. Charge Carrier Hopping Dynamics in Homogeneously Broadened PbS Quantum Dot Solids, Nano Letters (2017). DOI: 10.1021/acs.nanolett.6b04201

Mark C. Weidman et al. Monodisperse, Air-Stable PbS Nanocrystals via Precursor Stoichiometry Control, ACS Nano (2014). DOI: 10.1021/nn5018654

Mark C. Weidman et al. Interparticle Spacing and Structural Ordering in Superlattice PbS Nanocrystal Solids Undergoing Ligand Exchange, Chemistry of Materials (2014). DOI: 10.1021/cm503626s

 

MIT Engineers create Plants that “Glow” – Embedded Nanoparticles could Illuminate Workspace


Illumination of a book (“Paradise Lost,” by John Milton) with the nanobionic light-emitting plants (two 3.5-week-old watercress plants). The book and the light-emitting watercress plants were placed in front of reflective paper.

Imagine that instead of switching on a lamp when it gets dark, you could read by the light of a glowing plant on your desk.

MIT engineers have taken a critical first step toward making that vision a reality. By embedding specialized nanoparticles into the leaves of a watercress plant, they induced the plants to give off dim light for nearly four hours. They believe that, with further optimization, such plants will one day be bright enough to illuminate a workspace.

“The vision is to make a plant that will function as a desk lamp—a lamp that you don’t have to plug in. The light is ultimately powered by the energy metabolism of the plant itself,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study.

This technology could also be used to provide low-intensity indoor lighting, or to transform trees into self-powered streetlights, the researchers say.

MIT postdoc Seon-Yeong Kwak is the lead author of the study, which appears in the journal Nano Letters.

Nanobionic plants

Plant nanobionics, a new research area pioneered by Strano’s lab, aims to give plants novel features by embedding them with different types of nanoparticles. The group’s goal is to engineer plants to take over many of the functions now performed by electrical devices. The researchers have previously designed plants that can detect explosives and communicate that information to a smartphone, as well as plants that can monitor drought conditions.

Glowing MIT logo printed on the leaf of an arugula plant. The mixture of nanoparticles was infused into the leaf using lab-designed syringe termination adaptors. The image is a merge of the bright-field image and the light emission in the dark. Credit: Kwak Seonyeong

Lighting, which accounts for about 20 percent of worldwide energy consumption, seemed like a logical next target. “Plants can self-repair, they have their own energy, and they are already adapted to the outdoor environment,” Strano says. “We think this is an idea whose time has come. It’s a perfect problem for plant nanobionics.”

To create their glowing plants, the MIT team turned to luciferase, the enzyme that gives fireflies their glow. Luciferase acts on a molecule called luciferin, causing it to emit light. Another molecule called co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity.

The MIT team packaged each of these three components into a different type of nanoparticle carrier. The nanoparticles, which are all made of materials that the U.S. Food and Drug Administration classifies as “generally regarded as safe,” help each component get to the right part of the plant. They also prevent the components from reaching concentrations that could be toxic to the plants.

The researchers used silica nanoparticles about 10 nanometers in diameter to carry luciferase, and they used slightly larger particles of the polymers PLGA and chitosan to carry luciferin and coenzyme A, respectively. To get the particles into plant leaves, the researchers first suspended the particles in a solution. Plants were immersed in the solution and then exposed to high pressure, allowing the particles to enter the leaves through tiny pores called stomata.

Particles releasing luciferin and coenzyme A were designed to accumulate in the extracellular space of the mesophyll, an inner layer of the leaf, while the smaller particles carrying luciferase enter the cells that make up the mesophyll. The PLGA particles gradually release luciferin, which then enters the plant cells, where luciferase performs the chemical reaction that makes luciferin glow.

The researchers’ early efforts at the start of the project yielded plants that could glow for about 45 minutes, which they have since improved to 3.5 hours. The light generated by one 10-centimeter watercress seedling is currently about one-thousandth of the amount needed to read by, but the researchers believe they can boost the light emitted, as well as the duration of light, by further optimizing the concentration and release rates of the components.


Plant transformation

Previous efforts to create light-emitting plants have relied on genetically engineering plants to express the gene for luciferase, but this is a laborious process that yields extremely dim light. Those studies were performed on tobacco plants and Arabidopsis thaliana, which are commonly used for plant genetic studies. However, the method developed by Strano’s lab could be used on any type of plant. So far, they have demonstrated it with arugula, kale, and spinach, in addition to watercress.

For future versions of this technology, the researchers hope to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources.

“Our target is to perform one treatment when the plant is a seedling or a mature plant, and have it last for the lifetime of the plant,” Strano says. “Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes.”

The researchers have also demonstrated that they can turn the light off by adding nanoparticles carrying a luciferase inhibitor. This could enable them to eventually create plants that shut off their light emission in response to environmental conditions such as sunlight, the researchers say.


More information: Seon-Yeong Kwak et al. A Nanobionic Light-Emitting Plant, Nano Letters (2017). DOI: 10.1021/acs.nanolett.7b04369

 

MIT: Device makes power conversion more efficient – new design could dramatically cut energy waste in electric vehicles, data centers, and the power grid


MIT postdoc Yuhao Zhang handles a wafer with hundreds of vertical gallium nitride power devices fabricated on the Microsystems Technology Laboratories production line. Courtesy of Yuhao Zhang

 

Power electronics, which do things like modify voltages or convert between direct and alternating current, are everywhere. They’re in the power bricks we use to charge our portable devices; they’re in the battery packs of electric cars; and they’re in the power grid itself, where they mediate between high-voltage transmission lines and the lower voltages of household electrical sockets.

Power conversion is intrinsically inefficient: A power converter will never output quite as much power as it takes in. But recently, power converters made from gallium nitride have begun to reach the market, boasting higher efficiencies and smaller sizes than conventional, silicon-based power converters.

Commercial gallium nitride power devices can’t handle voltages above about 600 volts, however, which limits their use to household electronics.

At the Institute of Electrical and Electronics Engineers’ International Electron Devices Meeting this week, researchers from MIT, semiconductor company IQE, Columbia University, IBM, and the Singapore-MIT Alliance for Research and Technology presented a new design that, in tests, enabled gallium nitride power devices to handle voltages of 1,200 volts.

That’s already enough capacity for use in electric vehicles, but the researchers emphasize that their device is a first prototype manufactured in an academic lab. They believe that further work can boost its capacity to the 3,300-to-5,000-volt range, to bring the efficiencies of gallium nitride to the power electronics in the electrical grid itself.

That’s because the new device uses a fundamentally different design from existing gallium nitride power electronics.

“All the devices that are commercially available are what are called lateral devices,” says Tomás Palacios, who is an MIT professor of electrical engineering and computer science, a member of the Microsystems Technology Laboratories, and senior author on the new paper. “So the entire device is fabricated on the top surface of the gallium nitride wafer, which is good for low-power applications like the laptop charger. But for medium- and high-power applications, vertical devices are much better. These are devices where the current, instead of flowing through the surface of the semiconductor, flows through the wafer, across the semiconductor. Vertical devices are much better in terms of how much voltage they can manage and how much current they control.”

For one thing, Palacios explains, current flows into one surface of a vertical device and out the other. That means that there’s simply more space in which to attach input and output wires, which enables higher current loads.

For another, Palacios says, “when you have lateral devices, all the current flows through a very narrow slab of material close to the surface. We are talking about a slab of material that could be just 50 nanometers in thickness. So all the current goes through there, and all the heat is being generated in that very narrow region, so it gets really, really, really hot. In a vertical device, the current flows through the entire wafer, so the heat dissipation is much more uniform.”
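A back-of-the-envelope comparison makes the geometric point concrete: take the 50-nanometer conducting slab of a lateral device and compare its cross-section with a vertical device that conducts through the full die area. The die dimensions below are assumed illustration values, not those of the MIT prototype.

```python
# Conducting cross-section: thin lateral channel versus full-area vertical device.
die_edge_m = 2e-3              # assumed 2 mm x 2 mm die
channel_thickness_m = 50e-9    # lateral conducting slab thickness from the text

lateral_area = channel_thickness_m * die_edge_m    # current confined near the surface
vertical_area = die_edge_m * die_edge_m            # current flows through the wafer

print(f"lateral cross-section : {lateral_area:.1e} m^2")
print(f"vertical cross-section: {vertical_area:.1e} m^2")
print(f"ratio ≈ {vertical_area / lateral_area:,.0f}x more area for current and heat")
```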

Narrowing the field

Although their advantages are well-known, vertical devices have been difficult to fabricate in gallium nitride. Power electronics depend on transistors, devices in which a charge applied to a “gate” switches a semiconductor material — such as silicon or gallium nitride — between a conductive and a nonconductive state.

For that switching to be efficient, the current flowing through the semiconductor needs to be confined to a relatively small area, where the gate’s electric field can exert an influence on it. In the past, researchers had attempted to build vertical transistors by embedding physical barriers in the gallium nitride to direct current into a channel beneath the gate.

But the barriers are built from a temperamental material that’s costly and difficult to produce, and integrating it with the surrounding gallium nitride in a way that doesn’t disrupt the transistor’s electronic properties has also proven challenging.

Palacios and his collaborators adopt a simple but effective alternative. The team includes first authors Yuhao Zhang, a postdoc in Palacios’s lab, and Min Sun, who received his MIT PhD in the Department of Electrical Engineering and Computer Science (EECS) last spring; Daniel Piedra and Yuxuan Lin, MIT graduate students in EECS; Jie Hu, a postdoc in Palacios’s group; Zhihong Liu of the Singapore-MIT Alliance for Research and Technology; Xiang Gao of IQE; and Columbia’s Ken Shepard.

Rather than using an internal barrier to route current into a narrow region of a larger device, they simply use a narrower device. Their vertical gallium nitride transistors have bladelike protrusions on top, known as “fins.” On both sides of each fin are electrical contacts that together act as a gate. Current enters the transistor through another contact, on top of the fin, and exits through the bottom of the device. The narrowness of the fin ensures that the gate electrode will be able to switch the transistor on and off.

“Yuhao and Min’s brilliant idea, I think, was to say, ‘Instead of confining the current by having multiple materials in the same wafer, let’s confine it geometrically by removing the material from those regions where we don’t want the current to flow,’” Palacios says. “Instead of doing the complicated zigzag path for the current in conventional vertical transistors, let’s change the geometry of the transistor completely.”

MIT: Making renewable power more viable for the grid



“Air-breathing” battery can store electricity for months, for about a fifth the cost of current technologies.

Wind and solar power are increasingly popular sources for renewable energy. But intermittency issues keep them from connecting widely to the U.S. grid: They require energy-storage systems that, at the cheapest, run about $100 per kilowatt hour and function only in certain locations.

Now MIT researchers have developed an “air-breathing” battery that could store electricity for very long durations for about one-fifth the cost of current technologies, with minimal location restraints and zero emissions. The battery could be used to make sporadic renewable power a more reliable source of electricity for the grid.

For its anode, the rechargeable flow battery uses cheap, abundant sulfur dissolved in water. An aerated liquid salt solution in the cathode continuously takes in and releases oxygen that balances charge as ions shuttle between the electrodes. Oxygen flowing into the cathode causes the anode to discharge electrons to an external circuit. Oxygen flowing out sends electrons back to the anode, recharging the battery.

“This battery literally inhales and exhales air, but it doesn’t exhale carbon dioxide, like humans — it exhales oxygen,” says Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering at MIT and co-author of a paper describing the battery.

The research appears today in the journal Joule.

The battery’s total chemical cost — the combined price of the cathode, anode, and electrolyte materials — is about 1/30th the cost of competing batteries, such as lithium-ion batteries. Scaled-up systems could be used to store electricity from wind or solar power, for multiple days to entire seasons, for about $20 to $30 per kilowatt hour.
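For a rough sense of what those per-kilowatt-hour figures imply, the sketch below prices out a hypothetical 100-megawatt-hour storage project using only the numbers quoted in this article; installation, power electronics, and lifetime effects are ignored, and the project size is an assumption for illustration.

```python
# Rough capital-cost comparison for a hypothetical storage project, using only
# the per-kWh figures quoted in the article. Purely illustrative.

CAPACITY_KWH = 100_000            # assumed 100 MWh project

conventional_per_kwh = 100.0      # "at the cheapest, run about $100 per kilowatt hour"
air_breathing_low = 20.0          # projected $20-$30 per kWh for scaled-up systems
air_breathing_high = 30.0

print(f"conventional storage:         ${CAPACITY_KWH * conventional_per_kwh:,.0f}")
print(f"air-breathing battery (low):  ${CAPACITY_KWH * air_breathing_low:,.0f}")
print(f"air-breathing battery (high): ${CAPACITY_KWH * air_breathing_high:,.0f}")
```

At the projected $20 to $30 per kilowatt hour, the same capacity would cost roughly a fifth to a third of a $100-per-kilowatt-hour system, consistent with the “one-fifth the cost” framing above.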

Co-authors with Chiang on the paper are: first author Zheng Li, who was a postdoc at MIT during the research and is now a professor at Virginia Tech; Fikile R. Brushett, the Raymond A. and Helen E. St. Laurent Career Development Professor of Chemical Engineering; research scientist Liang Su; graduate students Menghsuan Pan and Kai Xiang; and undergraduate students Andres Badel, Joseph M. Valle, and Stephanie L. Eiler.

Finding the right balance

Development of the battery began in 2012, when Chiang joined the Department of Energy’s Joint Center for Energy Storage Research, a five-year project that brought together about 180 researchers to collaborate on energy-storage technologies. Chiang, for his part, focused on developing an efficient battery that could reduce the cost of grid-scale energy storage.

A major issue with batteries over the past several decades, Chiang says, has been a focus on synthesizing materials that offer greater energy density but are very expensive. The most widely used materials in lithium-ion batteries for cellphones, for instance, have a cost of about $100 for each kilowatt hour of energy stored.

“This meant maybe we weren’t focusing on the right thing, with an ever-increasing chemical cost in pursuit of high energy-density,” Chiang says. He brought the issue to other MIT researchers. “We said, ‘If we want energy storage at the terawatt scale, we have to use truly abundant materials.’”

The researchers first decided the anode needed to be sulfur, a widely available byproduct of natural gas and petroleum refining that’s very energy dense, having the lowest cost per stored charge next to water and air. The challenge then was finding an inexpensive liquid cathode material that remained stable while producing a meaningful charge.

That seemed improbable — until a serendipitous discovery in the lab.

On a short list of candidates was a compound called potassium permanganate. If used as a cathode material, that compound is “reduced” — a reaction that draws ions from the anode to the cathode, discharging electricity. However, the reduction of the permanganate is normally impossible to reverse, meaning the battery wouldn’t be rechargeable.

Still, Li tried. As expected, the reversal failed. However, the battery was, in fact, recharging, due to an unexpected oxygen reaction in the cathode, which was running entirely on air. “I said, ‘Wait, you figured out a rechargeable chemistry using sulfur that does not require a cathode compound?’ That was the ah-ha moment,” Chiang says.

Using that concept, the team of researchers created a type of flow battery, where electrolytes are continuously pumped through electrodes and travel through a reaction cell to create charge or discharge.

The battery consists of a liquid anode (anolyte) of polysulfide that contains lithium or sodium ions, and a liquid cathode (catholyte) that consists of an oxygenated dissolved salt, separated by a membrane.

Upon discharging, the anolyte releases electrons into an external circuit and the lithium or sodium ions travel to the cathode.

At the same time, to maintain electroneutrality, the catholyte draws in oxygen, creating negatively charged hydroxide ions. When charging, the process is simply reversed. Oxygen is expelled from the catholyte, increasing hydrogen ions, which donate electrons back to the anolyte through the external circuit.

“What this does is create a charge balance by taking oxygen in and out of the system,” Chiang says.
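In a simplified textbook picture, the oxygen behavior Chiang describes corresponds to the standard oxygen half-reaction in an alkaline solution, shown below. This is a general electrochemistry relation offered as an illustration consistent with the description above, not a reaction scheme taken from the Joule paper.

```latex
% Simplified alkaline oxygen half-reaction (textbook form, for illustration):
%   discharge: runs left to right (oxygen drawn in, hydroxide ions formed)
%   charge:    runs right to left (oxygen expelled)
\mathrm{O_2 + 2\,H_2O + 4\,e^- \;\rightleftharpoons\; 4\,OH^-}
```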

Because the battery uses ultra-low-cost materials, its chemical cost is one of the lowest — if not the lowest — of any rechargeable battery to enable cost-effective long-duration discharge. Its energy density is slightly lower than today’s lithium-ion batteries.

“It’s a creative and interesting new concept that could potentially be an ultra-low-cost solution for grid storage,” says Venkat Viswanathan, an assistant professor of mechanical engineering at Carnegie Mellon University who studies energy-storage systems.

Lithium-sulfur and lithium-air batteries — where sulfur or oxygen are used in the cathode — exist today. But the key innovation of the MIT research, Viswanathan says, is combining the two concepts to create a lower-cost battery with comparable efficiency and energy density. The design could inspire new work in the field, he adds: “It’s something that immediately captures your imagination.”

Making renewables more reliable

The prototype is currently about the size of a coffee cup. But flow batteries are highly scalable, Chiang says, and cells can be combined into larger systems.

As the battery can discharge over months, the best use may be for storing electricity from notoriously unpredictable wind and solar power sources. “The intermittency for solar is daily, but for wind it’s longer-scale intermittency and not so predictable. When it’s not so predictable you need more reserve — the capability to discharge a battery over a longer period of time — because you don’t know when the wind is going to come back next,” Chiang says. Seasonal storage is important too, he adds, especially with increasing distance north of the equator, where the amount of sunlight varies more widely from summer to winter.

Chiang says this could be the first technology to compete, in cost and energy density, with pumped hydroelectric storage systems, which provide most of the energy storage for renewables around the world but are very restricted by location.

“The energy density of a flow battery like this is more than 500 times higher than pumped hydroelectric storage. It’s also so much more compact, so that you can imagine putting it anywhere you have renewable generation,” Chiang says.
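For context on that comparison, here is a back-of-envelope estimate of pumped hydro’s volumetric energy density. The 100-meter head between reservoirs is an assumed, illustrative value; real plants vary widely.

```python
# Back-of-envelope estimate of pumped hydro's volumetric energy density.
# The 100 m head is an assumption for illustration only.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2
HEAD_M = 100.0       # assumed height difference between upper and lower reservoirs

energy_j_per_m3 = RHO_WATER * G * HEAD_M                 # potential energy per cubic meter of water
energy_wh_per_liter = energy_j_per_m3 / 3600.0 / 1000.0  # convert J/m^3 to Wh/L

print(f"pumped hydro: ~{energy_wh_per_liter:.2f} Wh per liter of water")
print(f"500x that is ~{500 * energy_wh_per_liter:.0f} Wh/L, the scale the article "
      "attributes to the flow battery")
```

With an assumed 100-meter head, water stores only about 0.27 watt-hours per liter, which is why even a modest-density battery can be hundreds of times more compact.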

The research was supported by the Department of Energy.

MIT: Researchers Develop Nanoparticles that Deliver the CRISPR genome-editing system – Big Step Forward for Cancer Research


In a new study, MIT researchers have developed nanoparticles that can deliver the CRISPR genome-editing system and specifically modify genes in mice.

The team used nanoparticles to carry the CRISPR components, eliminating the need to use viruses for delivery.

Using the new delivery technique, the researchers were able to cut out certain genes in about 80 percent of liver cells, the best success rate ever achieved with CRISPR in adult animals.

“What’s really exciting here is that we’ve shown you can make a nanoparticle that can be used to permanently and specifically edit the DNA in the liver of an adult animal,” says Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES).

One of the genes targeted in this study, known as Pcsk9, regulates cholesterol levels. Mutations in the human version of the gene are associated with a rare disorder called dominant familial hypercholesterolemia, and the FDA recently approved two antibody drugs that inhibit Pcsk9.

However, these antibodies need to be taken regularly, and for the rest of the patient’s life, to provide therapy. The new nanoparticles permanently edit the gene following a single treatment, and the technique also offers promise for treating other liver disorders, according to the MIT team.

Anderson is the senior author of the study, which appears in the Nov. 13 issue of Nature Biotechnology. The paper’s lead author is Koch Institute research scientist Hao Yin.

Other authors include David H. Koch Institute Professor Robert Langer of MIT, professors Victor Koteliansky and Timofei Zatsepin of the Skolkovo Institute of Science and Technology, and Professor Wen Xue of the University of Massachusetts Medical School.

Targeting Disease

Many scientists are trying to develop safe and efficient ways to deliver the components needed for CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

In most cases, researchers rely on viruses to carry the gene for Cas9, as well as the RNA guide strand. In 2014, Anderson, Yin, and their colleagues developed a nonviral delivery system in the first-ever demonstration of curing a disease (the liver disorder tyrosinemia) with CRISPR in an adult animal. However, this type of delivery requires a high-pressure injection, a method that can also cause some damage to the liver.

Later, the researchers showed they could deliver the components without the high-pressure injection by packaging messenger RNA (mRNA) encoding Cas9 into a nanoparticle instead of a virus. Using this approach, in which the guide RNA was still delivered by a virus, the researchers were able to edit the target gene in about 6 percent of hepatocytes, which is enough to treat tyrosinemia.

While that delivery technique holds promise, in some situations it would be better to have a completely nonviral delivery system, Anderson says.

One consideration is that once a particular virus is used, the patient will develop antibodies to it, so it couldn’t be used again.

Also, some patients have pre-existing antibodies to the viruses being tested as CRISPR delivery vehicles.

In the new Nature Biotechnology paper, the researchers came up with a system that delivers both Cas9 and the RNA guide using nanoparticles, with no need for viruses.

To deliver the guide RNAs, they first had to chemically modify the RNA to protect it from enzymes in the body that would normally break it down before it could reach its destination.

The researchers analyzed the structure of the complex formed by Cas9 and the RNA guide, or sgRNA, to figure out which sections of the guide RNA strand could be chemically modified without interfering with the binding of the two molecules. Based on this analysis, they created and tested many possible combinations of modifications.

“We used the structure of the Cas9 and sgRNA complex as a guide and did tests to figure out we can modify as much as 70 percent of the guide RNA,” Yin says. “We could heavily modify it and not affect the binding of sgRNA and Cas9, and this enhanced modification really enhances activity.”

Reprogramming the Liver

The researchers packaged these modified RNA guides (which they call enhanced sgRNA) into lipid nanoparticles, which they had previously used to deliver other types of RNA to the liver, and injected them into mice along with nanoparticles containing mRNA that encodes Cas9.

They experimented with knocking out a few different genes expressed by hepatocytes, but focused most of their attention on the cholesterol-regulating Pcsk9 gene. The researchers were able to eliminate this gene in more than 80 percent of liver cells, and the Pcsk9 protein was undetectable in these mice. They also found a 35 percent drop in the total cholesterol levels of the treated mice.

The researchers are now working on identifying other liver diseases that might benefit from this approach, and advancing these approaches toward use in patients.

“I think having a fully synthetic nanoparticle that can specifically turn genes off could be a powerful tool not just for Pcsk9 but for other diseases as well,” Anderson says.

“The liver is a really important organ and also is a source of disease for many people. If you can reprogram the DNA of your liver while you’re still using it, we think there are many diseases that could be addressed.”

“We are very excited to see this new application of nanotechnology open new avenues for gene editing,” Langer adds.

Materials provided by MIT News. Note: Content may be edited for style and length.

Strong current of energy runs through MIT: Robust community focused on fueling the world’s future +Video


Top row (l-r): Tata Center spinoff Khethworks develops affordable irrigation for the developing world; students discuss utility research in Washington; thin, lightweight solar cell developed by Professor Vladimir Bulović and team. Bottom row (l-r): MIT’s record-setting Alcator tokamak fusion research reactor; a researcher in the MIT Energy Laboratory’s Combustion Research Facility; Professor Kripa Varanasi, whose research on slippery surfaces has led to a spinoff co-founded with Associate Provost Karen Gleason.

Photos: Tata Center for Technology and Design, MITEI, Joel Jean and Anna Osherov, Bob Mumgaard/PSFC, Energy Laboratory Archives, Bryce Vickmark

Research, education, and student activities help create a robust community focused on fueling the world’s future.

On any given day at MIT, undergraduates design hydro-powered desalination systems, graduate students test alternative fuels, and professors work to tap the huge energy-generating potential of nuclear fusion, biomaterials, and more. While some MIT researchers are modeling the impacts of policy on energy markets, others are experimenting with electrochemical forms of energy storage.

This is the robust energy community at MIT. Developed over the past 10 years with the guidance and support of the MIT Energy Initiative (MITEI) — and with roots extending back into the early days of the Institute — it has engaged more than 300 faculty members and spans more than 900 research projects across all five schools.

In addition, MIT offers a multidisciplinary energy minor and myriad energy-related events and activities throughout the year. Together, these efforts ensure that students who arrive on campus with an interest in energy have free rein to pursue their ambitions.

Opportunities for students

“The MIT energy ecosystem is an incredible system, and it’s built from the ground up,” says Robert C. Armstrong, a professor of chemical engineering and the director of MITEI, which is overseen at the Institute level by Vice President for Research Maria Zuber. “It begins with extensive student involvement in energy.”

Opportunities begin the moment undergraduates arrive on campus, with a freshman pre-orientation program offered through MITEI that includes such hands-on activities as building motors and visiting the Institute’s nuclear research reactor.

“I got accepted into the pre-orientation program and from there, I was just hooked. I learned about solar technology, wind technology, different types of alternative fuels, bio fuels, even wave power,” says graduate student Priyanka Chatterjee ’15, who minored in energy studies and majored in mechanical and ocean engineering.

Those who choose the minor take a core set of subjects encompassing energy science, technology, and social science. Those interested in a deep dive into research can participate in the Energy Undergraduate Research Opportunities Program (UROP), which provides full-time summer positions. UROP students are mentored by graduate students and postdocs, many of them members of the Society of Energy Fellows, who are also conducting their own energy research at MIT.

For extracurricular activities, students can join the MIT Energy Club, which is among the largest student-run organizations at MIT with more than 5,000 members. They can also compete for the MIT Clean Energy Prize, a student competition that awards more than $200,000 each year for energy innovation. And there are many other opportunities.

The Tata Center for Technology and Design, now in its sixth year, extends MIT’s reach abroad. It supports 65 graduate students every year who conduct research central to improving life in developing countries — including lowering costs of rural electrification and using solar energy in novel ways.

Students have other opportunities to conduct and share energy research internationally as well.

“Over the years, MITEI has made it possible for several of the students I’ve advised to engage more directly in global energy and climate policy negotiations,” says Valerie Karplus, an assistant professor of global economics and management. “In 2015, I joined them at the Paris climate conference, which was a tremendous educational and outreach experience for all of us.”

Holistic problem-solving

“What is important is to provide our students a holistic understanding of the energy challenges,” says MIT Associate Dean for Innovation Vladimir Bulović.

Adds Karplus: “There’s been an evolution in thinking from ‘How do we build a better mousetrap?’ to ‘How do we bring about change in society at a system level?’”

This kind of thinking is at the root of MIT’s multidisciplinary approach to addressing the global energy challenge — and it has been since MITEI was conceived and launched by then-MIT President Susan Hockfield, a professor of neuroscience. While energy research has been part of the Institute since its founding (MIT’s first president, William Barton Rogers, famously collapsed and died after uttering the words “bituminous coal” at the 1882 commencement), the concerted effort to connect researchers across the five schools for collaborative projects is a more recent development.

“The objective of MITEI was really to solve the big energy problems, which we feel needs all of the schools’ and departments’ contributions,” says Ernest J. Moniz, a professor emeritus of physics and special advisor to MIT’s president. Moniz was the founding director of MITEI before serving as U.S. Secretary of Energy during President Obama’s administration.

Hockfield says great technology by itself “can’t go anywhere without great policy.”

“It’s the economics, it’s the sociology, it’s the science and the engineering, it’s the architecture — it’s all of the pieces of MIT that had to come together if we were going to develop really impactful sustainable energy solutions,” she says.

This multidisciplinary approach is evident in much of MIT’s energy research — notably the series of comprehensive studies MITEI has conducted on such topics as the future of solar energy, natural gas, the electric grid, and more.

“To make a better world, it’s essential that we figure out how to take what we’ve learned at MIT in energy and get that out into the world,” Armstrong says.

Fostering collaborations

MITEI’s eight low-carbon energy research centers — focused on a range of topics from materials design to solar generation to carbon capture and storage — similarly address challenges on multiple technology and policy fronts. These centers are a core component of MIT’s five-year Plan for Action on Climate Change, announced by President L. Rafael Reif in October 2015. The centers employ a strategy that has been fundamental to MIT’s energy work since the founding of MITEI: broad, sustained collaboration with stakeholders from industry, government, and the philanthropic and non-governmental organization communities.

“It’s one thing to do research that’s interesting in a laboratory. It’s something very different to take that laboratory discovery into the world and deliver practical applications,” Hockfield says. “Our collaboration with industry allowed us to do that with a kind of alacrity that we could never have done on our own.”

For example, MITEI’s members have supported more than 160 energy-focused research projects, representing $21.4 million in funding over the past nine years, through the Seed Fund Program. Projects have led to follow-on federal and industry funding, startup companies, and pilot plants for solar desalination systems in India and Gaza, among other outcomes.

What has MIT’s energy community as a whole accomplished over the past decade? Hockfield says it’s raised the visibility of the world’s energy problems, contributed solutions — both technical and sociopolitical — and provided “an army of young people” to lead the way to a sustainable energy future.

“I couldn’t be prouder of what MIT has contributed,” she says. “We are in the midst of a reinvention of how we make energy and how we use energy. And we will develop sustainable energy practices for a larger population, a wealthier population, and a healthier planet.”