MIT Technology Review: Sustainable Energy: The daunting math of climate change means we’ll need carbon capture … eventually


 


 Net Power’s pilot natural gas plant with carbon capture, near Houston, Texas.

An Interview with Julio Friedmann

At current rates of greenhouse-gas emissions, the world could lock in 1.5 ˚C of warming as soon as 2021, an analysis by the website Carbon Brief has found. We’re on track to blow the carbon budget for 2 ˚C by 2036.

Amid this daunting climate math, many researchers argue that capturing carbon dioxide from power plants, factories, and the air will have to play a big part in any realistic efforts to limit the dangers of global warming.

If it can be done economically, carbon capture and storage (CCS) offers the world additional flexibility and time to make the leap to cleaner systems. It means we can retrofit, rather than replace, vast parts of the global energy infrastructure. And once we reach disastrous levels of warming, so-called direct air capture offers one of the only ways to dig our way out of trouble, since carbon dioxide otherwise stays in the atmosphere for thousands of years.

Julio Friedmann has emerged as one of the most ardent advocates of these technologies. He oversaw research and development efforts on clean coal and carbon capture at the US Department of Energy’s Office of Fossil Energy under the last administration. Among other roles, he’s now working with or advising the Global CCS Institute, the Energy Futures Initiative, and Climeworks, a Switzerland-based company already building pilot plants that pull carbon dioxide from the air.

In an interview with MIT Technology Review, Friedmann argues that the technology is approaching a tipping point: a growing number of projects demonstrate that it works in the real world, and that it is becoming more reliable and affordable. He adds that the boosted US tax credit for capturing and storing carbon, passed in the form of the Future Act as part of the federal budget earlier this year, will push forward many more projects and help create new markets for products derived from carbon dioxide (see “The carbon-capture era may finally be starting”).

But serious challenges remain. Even with the tax credit, companies will incur steep costs by adding carbon capture systems to existing power plants. And a widely cited 2011 study, coauthored by MIT researcher Howard Herzog, found that direct air capture will require vast amounts of energy and cost 10 times as much as scrubbing carbon from power plants.

(This interview has been edited for length and clarity.)

In late February, you wrote a Medium post saying that with the passage of the increased tax credit for carbon capture and storage, we’ve “launched the climate counter-strike.” Why is that a big deal?

It actually sets a price on carbon formally. It says you should get paid to not emit carbon dioxide, and you should get paid somewhere between $35 a ton and $50 a ton. So that is already a massive change. In addition to that, it says you can do one of three things: you can store CO2, you can use it for enhanced oil recovery, or you can turn it into stuff. Fundamentally, it says not emitting has value.

As I’ve said many times before, the lack of progress in deploying CCS up until this point is not a question of cost. It’s really been a question of finance.

The Future Act creates that financing.

I identified an additional provision which said not only can you consider a power plant a source or an industrial site a source, you can consider the air a source.

Even if we zeroed out all our emissions today, we still have a legacy of harm of two trillion tons of CO2 in the air, and we need to do something about that.

And this law says, yeah, we should. It says we can take carbon dioxide out of the air and turn it into stuff.

At the Petra Nova plant in Texas, my understanding is the carbon capture costs are something like $60 to $70 a ton, which is still going to outstrip the tax credit today. How are we going to close that gap?

There are many different ways to go about it. For example, the state of New Jersey today passed a 90 percent clean energy portfolio standard. Changing the policy from a renewable portfolio standard [which would exclude CCS technologies] to a clean energy standard [which would allow them] allowed higher ambition.

In that context, somebody who would build a CCS project and would get a contract to deliver that power, or deliver that emissions abatement, can actually again get staked, get financed, and get built. That can happen without any technology advancement.

The technology today is already cost competitive. CCS today, as a retrofit, is cheaper than a whole bunch of stuff. It’s cheaper than new-build nuclear, it’s cheaper than offshore wind. It’s cheaper than a whole bunch of things we like, and it’s cheaper than rooftop solar, almost everywhere. It’s cheaper than utility-scale concentrating solar pretty much everywhere, and it is cheaper than what solar and wind were 10 years ago.

What do you make of the critique that this is all just going to perpetuate the fossil-fuel industry?

The enemy is not fossil fuels; the enemy is emissions.

In a place like California that has terrific renewable resources and a good infrastructure for renewable energy, maybe you can get to zero [fossil fuels] someday.

If you’re in Saskatchewan, you really can’t do that. It is too cold for too much of the year, and they don’t have solar resources, and their wind resources are problematic because they’re so strong they tear up the turbines. Which is why they did the CCS project in Saskatchewan. For them it was the right solution.

Shifting gears to direct air capture, the basic math says that you’re moving 2,500 molecules to capture one of CO2. How good are we getting at this, and how cheaply can we do this at this point?

If you want to optimize the way that you would reduce carbon dioxide economy-wide, direct air capture is the last thing you would tackle. Turns out, though, that we don’t live in that society. We are not optimizing anything in any way.

So instead we realize we have this legacy of emissions in the atmosphere and we need tools to manage that. So there are companies like Climeworks, Carbon Engineering, and Global Thermostat. Those guys said, we know we’re going to need this technology, so we’re going to work on it now. They’ve got decent financing, and the costs are coming down (see “Can sucking CO2 out of the atmosphere really work?”).

The cost for all of these things now today, all-in costs, is somewhere between $300 and $600 a ton. I’ve looked inside all those companies and I believe all of them are on a glide path to get to below $200 a ton by somewhere between 2022 and 2025. And I believe that they’re going to get down to $100 a ton by 2030. At that point, these are real options.

At $200 a ton, we know today unambiguously that pulling CO2 out of the air is cheaper than trying to make a zero-carbon airplane, by a lot. So it becomes an option that you use to go after carbon in the hard-to-scrub parts of the economy.
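The 2,500-to-1 figure in the question above follows directly from how dilute CO2 is in air; a quick back-of-envelope check in Python (assuming a round current concentration of about 410 parts per million):

```python
# CO2 concentration in ambient air, in parts per million by molecule count.
# 410 ppm is an assumed round figure for the late 2010s; it creeps up yearly.
co2_ppm = 410

# Average number of air molecules a capture system must handle
# to encounter a single molecule of CO2.
molecules_per_co2 = 1_000_000 / co2_ppm
print(f"~{molecules_per_co2:,.0f} air molecules per captured CO2 molecule")
```

At 410 ppm that works out to roughly 2,400 molecules of air handled per molecule of CO2 captured, consistent with the 2,500 figure quoted in the question.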

Is it ever going to work as a business, or is it always going to be kind of a public-supported enterprise to buy ourselves out of climate catastrophes?

Direct air capture is not competitive today broadly, but there are places where the value proposition is real. So let me give you a couple of examples.

In many parts of the world there are no sources of CO2. If you’re running a Pepsi or a Coca-Cola plant in Sri Lanka, you literally burn diesel fuel and capture the CO2 from it to put into your cola, at a bonkers price. It can cost $300 to $800 a ton to get that CO2. So there are already going to be places in some people’s supply chain where direct air capture could be cheaper.

We talk to companies like Goodyear, Firestone, or Michelin. They make tires, and right now the way that they get their carbon black [a material used in tire production that’s derived from fossil fuel] is basically you pyrolyze bunker fuel in the Gulf Coast, which is a horrible, environmentally destructive process. And then you ship it by rail cars to wherever they’re making the tires.

If they can decouple from that market by gathering CO2 wherever they are and turning that into carbon black, they can actually avoid market shocks. So even if it costs a little more, the value to that company might be high enough to bring it into the market. That’s where I see direct air capture actually gaining real traction in the next few years.

It’s not going to be enough for climate. We know that we will have to do carbon storage, for sure, if we want to really manage the atmospheric emissions. But there’s a lot of ground to chase this, and we never know quite where technology goes.

In one of your earlier Medium posts you said that we’re ultimately going to have to pull 10 billion tons of CO2 out of the atmosphere every year. Climeworks is doing about 50 [at their pilot plant in Iceland]. So what does that scale-up look like?

You don’t have to get all 10 billion tons with direct air capture. So let’s say you just want one billion.

Right now, Royal Dutch Shell as a company moves 300 million tons of refined product every year. This means that to handle a billion tons of CO2, you need three to four companies the size of Royal Dutch Shell pulling it out of the atmosphere.
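The scale comparison here is straightforward division; a sketch of the arithmetic, using the 300-million-ton Shell throughput figure quoted in the answer:

```python
target_tons_per_year = 1_000_000_000   # one billion tons of CO2 removed per year
shell_tons_per_year = 300_000_000      # Royal Dutch Shell's annual refined-product volume

companies_needed = target_tons_per_year / shell_tons_per_year
print(f"{companies_needed:.1f} Shell-sized companies' worth of material throughput")
```

That ratio, about 3.3, is where the "three to four companies" estimate comes from.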

The good news is we don’t need that billion tons today. We have 10 or 20 or 30 years to get to a billion tons of direct air capture. But in fact we’ve seen that kind of scaling in other kinds of clean-tech markets. There’s nothing in the laws of physics or chemistry that stops that.


MIT Technology Review: This battery advance could make electric vehicles far cheaper


Sila Nanotechnologies has pulled off double-digit performance gains for lithium-ion batteries, promising to lower costs or add capabilities for cars and phones.

For the last seven years, a startup based in Alameda, California, has quietly worked on a novel anode material that promises to significantly boost the performance of lithium-ion batteries.

Sila Nanotechnologies emerged from stealth mode last month, partnering with BMW to put the company’s silicon-based anode materials in at least some of the German automaker’s electric vehicles by 2023.

A BMW spokesman told the Wall Street Journal the company expects that the deal will lead to a 10 to 15 percent increase in the amount of energy you can pack into a battery cell of a given volume. Sila’s CEO Gene Berdichevsky says the materials could eventually produce as much as a 40 percent improvement (see “35 Innovators Under 35: Gene Berdichevsky”).

For EVs, an increase in so-called energy density either significantly extends the mileage range possible on a single charge or decreases the cost of the batteries needed to reach standard ranges. For consumer gadgets, it could alleviate the frustration of cell phones that can’t make it through the day, or it might enable power-hungry next-generation features like bigger cameras or ultrafast 5G networks.
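To put rough numbers on that trade-off, here is an illustrative calculation; the pack capacity and vehicle efficiency are hypothetical figures chosen for the example, not numbers from Sila or BMW:

```python
pack_kwh = 60.0    # hypothetical EV battery pack capacity
mi_per_kwh = 4.0   # hypothetical vehicle efficiency

# Near-term gains cited by BMW (10-15%) and Sila's eventual 40% ceiling.
for gain in (0.10, 0.15, 0.40):
    base_range = pack_kwh * mi_per_kwh
    new_range = pack_kwh * (1 + gain) * mi_per_kwh
    print(f"+{gain:.0%} energy density: {base_range:.0f} mi -> {new_range:.0f} mi "
          f"from the same pack volume")
```

Equivalently, a manufacturer could hold the range fixed and shrink the pack, trimming cell cost roughly in proportion to the density gain.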

Researchers have spent decades working to advance the capabilities of lithium-ion batteries, but those gains usually only come a few percentage points at a time. So how did Sila Nanotechnologies make such a big leap?

Berdichevsky, who was employee number seven at Tesla, and CTO Gleb Yushin, a professor of materials science at the Georgia Institute of Technology, recently provided a deeper explanation of the battery technology in an interview with MIT Technology Review.

Sila co-founders (from left to right): Gleb Yushin, Gene Berdichevsky, and Alex Jacobs.

An anode is the battery’s negative electrode, which in this case stores lithium ions when a battery is charged. Engineers have long believed that silicon holds great potential as an anode material for a simple reason: it can bond with 25 times more lithium ions than graphite, the main material used in lithium-ion batteries today.
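The "25 times" figure is consistent with the textbook lithiation limits of the two materials; a quick check, assuming the classic fully lithiated phases LiC6 for graphite and Li22Si5 for silicon:

```python
li_per_carbon = 1 / 6    # graphite fully lithiates to LiC6: one Li per six host carbons
li_per_silicon = 22 / 5  # silicon can lithiate up to Li22Si5: 4.4 Li per host silicon

ratio = li_per_silicon / li_per_carbon
print(f"Silicon hosts ~{ratio:.0f}x more lithium per host atom than graphite")
```

That comes out to about 26, in line with the article's figure. (In practice, silicon at room temperature stops near the Li15Si4 phase, one reason real-world gains fall short of the theoretical limit.)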

But this comes with a big catch. When silicon accommodates that many lithium ions, its volume expands, stressing the material in a way that tends to make it crumble during charging. That swelling also triggers electrochemical side reactions that reduce battery performance.

In 2010, Yushin coauthored a scientific paper that identified a method for producing rigid silicon-based nanoparticles that are internally porous enough to accommodate significant volume changes. He teamed up with Berdichevsky and another former Tesla battery engineer, Alex Jacobs, to form Sila the following year.

The company has been working to commercialize that basic concept ever since, developing, producing, and testing tens of thousands of different varieties of increasingly sophisticated anode nanoparticles. It figured out ways to alter the internal structure to prevent the battery electrolyte from seeping into the particles, and it achieved dozens of incremental gains in energy density that ultimately added up to an improvement of about 20 percent over the best existing technology.

Ultimately, Sila created a robust, micrometer-size spherical particle with a porous core, which directs much of the swelling within the internal structure. The outside of the particle doesn’t change shape or size during charging, ensuring otherwise normal performance and cycle life.

The resulting composite anode powders work as a drop-in material for existing manufacturers of lithium-ion cells.

With any new battery technology, it takes at least five years to work through the automotive industry’s quality and safety assurance processes—hence the 2023 timeline with BMW. But Sila is on a faster track with consumer electronics, where it expects to see products carrying its battery materials on shelves early next year.

Venkat Viswanathan, a mechanical engineer at Carnegie Mellon, says Sila is “making great progress.” But he cautions that gains in one battery metric often come at the expense of others—like safety, charging time, or cycle life—and that what works in the lab doesn’t always translate perfectly into end products.

Companies including Enovix and Enevate are also developing silicon-dominant anode materials. Meanwhile, other businesses are pursuing entirely different routes to higher-capacity storage, notably including solid-state batteries. These use materials such as glass, ceramics, or polymers to replace liquid electrolytes, which help carry lithium ions between the cathode and anode.

BMW has also partnered with Solid Power, a spinout from the University of Colorado Boulder, which claims that its solid-state technology relying on lithium-metal anodes can store two to three times more energy than traditional lithium-ion batteries. Meanwhile, Ionic Materials, which recently raised $65 million from Dyson and others, has developed a solid polymer electrolyte that it claims will enable safer, cheaper batteries that can operate at room temperature and will also work with lithium metal.

Some battery experts believe that solid-state technology ultimately promises bigger gains in energy density, if researchers can surmount some large remaining technical obstacles.

But Berdichevsky stresses that Sila’s materials are ready for products now and, unlike solid-state lithium-metal batteries, don’t require any expensive equipment upgrades on the part of battery manufacturers.

As the company develops additional ways to limit volume change in the silicon-based particles, Berdichevsky and Yushin believe they’ll be able to extend energy density further, while also improving charging times and total cycle life.

This story was updated to clarify that Samsung didn’t invest in Ionic Materials’ most recent funding round.

Read and Watch More:

Tenka Energy, Inc. Building Ultra-Thin Energy Dense SuperCaps and NexGen Nano-Enabled Pouch & Cylindrical Batteries – Energy Storage Made Small and POWERFUL! YouTube Video:

3 Questions for Innovating the Clean Energy Economy (MIT Energy Initiative)


Daniel Kammen, professor of energy at the University of California at Berkeley, spoke on clean energy innovation and implementation in a talk at MIT. Photo: Francesca McCaffrey/MIT Energy Initiative

Daniel Kammen of the University of California at Berkeley discusses current efforts in clean energy innovation and implementation, and what’s coming next.

Daniel Kammen is a professor of energy at the University of California at Berkeley, with parallel appointments in the Energy and Resources Group (which he chairs), the Goldman School of Public Policy, and the Department of Nuclear Science and Engineering.

Recently, he gave a talk at MIT examining the current state of clean energy innovation and implementation, both in the U.S. and internationally. Using a combination of analytical and empirical approaches, he discussed the strengths and weaknesses of clean energy efforts on the household, city, and regional levels. The MIT Energy Initiative (MITEI) followed up with him on these topics.

Q: Your team has built energy transition models for several countries, including Chile, Nicaragua, China, and India. Can you describe how these models work and how they can inform global climate negotiations like the Paris Accords?


A: My laboratory has worked with three governments to build open-source models of the current state of their energy systems and possible opportunities for improvement. This model, SWITCH, is an exceptionally high-resolution platform for examining the costs, reliability, and carbon emissions of energy systems as small as Nicaragua’s and as large as China’s. The exciting recent developments in the cost and performance of solar, wind, energy storage, and electric vehicles permit the planning of dramatically decarbonized systems that have a wide range of ancillary benefits: increased reliability, improved air quality, and monetized energy efficiency, to name just a few. With the Paris Climate Accords placing 80 percent or greater decarbonization targets on all nations’ agendas (sadly, except for the U.S. federal government), the need for an “honest broker” for the costs and operational issues around power systems is key.

Q: At the end of your talk, you mentioned a carbon footprint calculator that you helped create. How much do individual behaviors matter in addressing climate change?

A: The carbon footprint, or CoolClimate project, is a visualization and behavioral economics tool that can be used to highlight the impacts of individual decisions at the household, school, and city level. We have used it to support city-city competitions for “California’s coolest city,” to explore the relative impacts of lifestyle choices (buying an electric vehicle versus, or along with, changes of diet), and more.

Q: You touched on the topic of the “high ambition coalition,” a 2015 United Nations Climate Change Conference goal of keeping warming under 1.5 degrees Celsius. Can you expand on this movement and the carbon negative strategies it would require?

A: As we look at paths to a sustainable global energy system, efforts to limit warming to 1.5 degrees Celsius will require not only zeroing out industrial and agricultural emissions, but also removing carbon from the atmosphere. This demands increasing natural carbon sinks by preserving or expanding forests, sustaining ocean systems, and making agriculture climate- and water-smart. One pathway, biomass energy with carbon capture and sequestration, has both supporters and detractors. It involves growing biomass, using it for energy, and then sequestering the emissions.

This talk was one in a series of MITEI seminars supported by IHS Markit.

MIT: Finding a New Way to Design and Analyze Better Battery Materials: Discoveries could accelerate the development of high-energy lithium batteries


Diagram illustrates the crystal lattice of a proposed battery electrolyte material called Li3PO4. The researchers found that measuring how vibrations of sound move through the lattice could reveal how well ions – electrically charged atoms or molecules – could travel through the solid material, and therefore how they would work in a real battery. In this diagram, the oxygen atoms are shown in red, the purple pyramid-like shapes are phosphate (PO4) molecules. The orange and green spheres are ions of lithium.
Image: Sokseiha Muy

Design principles could point to better electrolytes for next-generation lithium batteries.

A new approach to analyzing and designing new ion conductors — a key component of rechargeable batteries — could accelerate the development of high-energy lithium batteries and possibly other energy storage and delivery devices such as fuel cells, researchers say.

The new approach relies on understanding the way vibrations move through the crystal lattice of lithium ion conductors and correlating that with the way they inhibit ion migration. This provides a way to discover new materials with enhanced ion mobility, allowing rapid charging and discharging.

At the same time, the method can be used to reduce the material’s reactivity with the battery’s electrodes, which can shorten its useful life. These two characteristics — better ion mobility and low reactivity — have tended to be mutually exclusive.

The new concept was developed by a team led by W.M. Keck Professor of Energy Yang Shao-Horn, graduate student Sokseiha Muy, recent graduate John Bachman PhD ’17, and Research Scientist Livia Giordano, along with nine others at MIT, Oak Ridge National Laboratory, and institutions in Tokyo and Munich. Their findings were reported in the journal Energy & Environmental Science.

The new design principle has been about five years in the making, Shao-Horn says. The initial thinking started with the approach she and her group have used to understand and control catalysts for water splitting, which they then applied to ion conduction, the process that lies at the heart of not only rechargeable batteries, but also other key technologies such as fuel cells and desalination systems.

While electrons, with their negative charge, flow from one pole of the battery to the other (thus providing power for devices), positive ions flow the other way, through an electrolyte, or ion conductor, sandwiched between those poles, to complete the flow.

Typically, that electrolyte is a liquid. A lithium salt dissolved in an organic liquid is a common electrolyte in today’s lithium-ion batteries. But that substance is flammable and has sometimes caused these batteries to catch fire. The search has been on for a solid material to replace it, which would eliminate that issue.

A variety of promising solid ion conductors exist, but none is stable when in contact with both the positive and negative electrodes in lithium-ion batteries, Shao-Horn says.

Therefore, seeking new solid ion conductors that have both high ion conductivity and stability is critical. But sorting through the many different structural families and compositions to find the most promising ones is a classic needle-in-a-haystack problem. That’s where the new design principle comes in.

The idea is to find materials that have ion conductivity comparable to that of liquids, but with the long-term stability of solids. The team asked, “What is the fundamental principle? What are the design principles on a general structural level that govern the desired properties?” Shao-Horn says. A combination of theoretical analysis and experimental measurements has now yielded some answers, the researchers say.

“We realized that there are a lot of materials that could be discovered, but no understanding or common principle that allows us to rationalize the discovery process,” says Muy, the paper’s lead author. “We came up with an idea that could encapsulate our understanding and predict which materials would be among the best.”

The key was to look at the lattice properties of these solid materials’ crystalline structures. This governs how vibrations such as waves of heat and sound, known as phonons, pass through materials. This new way of looking at the structures turned out to allow accurate predictions of the materials’ actual properties. “Once you know [the vibrational frequency of a given material], you can use it to predict new chemistry or to explain experimental results,” Shao-Horn says.

The researchers observed a good correlation between the lattice properties determined using the model and the lithium ion conductor material’s conductivity. “We did some experiments to support this idea experimentally” and found the results matched well, she says.

They found, in particular, that the vibrational frequency of lithium itself can be fine-tuned by tweaking its lattice structure, using chemical substitution or dopants to subtly change the structural arrangement of atoms.

The new concept can now provide a powerful tool for developing new, better-performing materials that could lead to dramatic improvements in the amount of power that could be stored in a battery of a given size or weight, as well as improved safety, the researchers say.

Already, they used the method to find some promising candidates. And the techniques could also be adapted to analyze materials for other electrochemical processes such as solid-oxide fuel cells, membrane based desalination systems, or oxygen-generating reactions.

The team included Hao-Hsun Chang at MIT; Douglas Abernathy, Dipanshu Bansal, and Olivier Delaire at Oak Ridge; Satoshi Hori and Ryoji Kanno at Tokyo Institute of Technology; and Filippo Maglia, Saskia Lupart, and Peter Lamp at Research Battery Technology at BMW Group in Munich.

The work was supported by BMW, the National Science Foundation, and the U.S. Department of Energy.

Watch a YouTube Video on New Nano-Enabled Super Capacitors and Batteries

MIT and HARVARD Update: PHYSICISTS CREATE NEW FORM OF LIGHT THAT COULD DRIVE THE QUANTUM COMPUTING REVOLUTION


The discovery that photons can interact could be harnessed for quantum computing. PHOTO: CHRISTINE DANILOFF/MIT

For the first time, scientists have watched groups of three photons interacting and effectively producing a new form of light.

In results published in Science, researchers suggest that this new light could be used to perform highly complex, incredibly fast quantum computations.

Photons are tiny particles that normally travel solo through beams of light, never interacting with each other. But in 2013 scientists made them clump together in pairs, creating a new state of matter. This discovery shows that interactions are possible on a greater scale.

“It was an open question,” Vladan Vuletic from the Massachusetts Institute of Technology (MIT), who led the team with Mikhail Lukin from Harvard University, said in a statement. “Can you add more photons to a molecule to make bigger and bigger things?”

The scientists cooled a cloud of rubidium atoms to an ultralow temperature to answer their question. This slowed the atoms down till they were almost still. A very faint laser beam sent just a few photons through the freezing cloud at once.

The photons came out the other side as pairs and triplets, rather than just as individuals.


The researchers think the particles might flit from one nearby atom to another as they pass through the rubidium cloud—like bees in a field of flowers.

These passing photons could form “polaritons”—part photon, part atom hybrids. If more than one photon passes by the same atom at the same time, they might form polaritons that are linked.

As they leave the atom, they could stay together as a pair, or even a triplet.

“What’s neat about this is, when photons go through the medium, anything that happens in the medium, they ‘remember’ when they get out,” said co-author Sergio Cantu from MIT.

This whole process takes about a millionth of a second.

Read About: MIT Researchers Link Photons

The future of computing

This research is the latest step toward a long-fabled quantum computer, an ultra-powerful machine that could solve problems beyond the realm of traditional computers. Your desktop PC would, for example, struggle to solve the question: “If a salesman has lots of places to visit, what is the quickest route?”

“[A traditional computer] could solve this for a certain number of cities, but if I wanted to add more cities, it would get much harder, very quickly,” Vuletic previously stated in a press release.
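The sketch below illustrates why: a brute-force solver must examine every ordering of the cities, and the number of orderings grows factorially. The distances and city counts are made up for illustration.

```python
import math
from itertools import permutations

def shortest_route_length(dist):
    """Brute-force traveling salesman: try every ordering of cities 1..n-1,
    starting and ending at city 0, and keep the shortest round trip."""
    n = len(dist)
    best = float("inf")
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

# A toy symmetric distance matrix for four cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(shortest_route_length(dist))  # -> 18 for this instance

# The pain of adding cities: candidate routes grow as (n - 1)!.
for n in (5, 10, 15):
    print(f"{n} cities: {math.factorial(n - 1):,} routes to check")
```

Going from 5 to 15 cities takes the search from 24 candidate routes to over 87 billion, which is the "much harder, very quickly" that quantum approaches hope to sidestep.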

Read more: What did the Big Bang look like? The physics of light during the formation of the universe

Light, he said, is already used to transmit data very quickly over long distances via fiber optic cables. Being able to manipulate these photons could enable the distribution of data in much more powerful ways.

The team is now aiming to coerce photons in ways beyond attraction. The next stop is repulsion, where photons slam into each other and scatter.

“It’s completely novel in the sense that we don’t even know sometimes qualitatively what to expect,” Vuletic says. “With repulsion of photons, can they be such that they form a regular pattern, like a crystal of light? Or will something else happen? It’s very uncharted territory.”

MIT launches the “MIT Intelligence Quest … MIT IQ” (Video)



At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. Courtesy of MIT IQ

New Institute-wide initiative will advance human and machine intelligence research

MIT today announced the launch of the MIT Intelligence Quest, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.

The announcement was first made in a letter MIT President L. Rafael Reif sent to the Institute community.

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. (continued below)

Watch and Read About: Scott Zoldi, Director of Analytics at FICO, has published a report arguing that “we are just at the beginning of the golden age of analytics, in which the value and contributions of artificial intelligence (AI), machine learning (ML), and deep learning can only continue to expand as we accept and incorporate those tools into our businesses.” According to his predictions, the development and use of these technologies will continue to expand and strengthen in 2018 and beyond.

 

(Continued)

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

“Today we set out to answer two big questions,” says President Reif. “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”

MIT IQ: The Core and The Bridge

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

Along with developing and advancing the technologies of intelligence, MIT IQ researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT IQ is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT IQ will connect and amplify existing excellence across labs and centers already engaged in intelligence research. It will also establish shared, central spaces conducive to group work, and its resources will directly support research.

“Our quest is meant to power world-changing possibilities,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. Chandrakasan, in collaboration with Provost Martin Schmidt and all four of MIT’s other school deans, has led the development and establishment of MIT IQ.

“We imagine preventing deaths from cancer by using deep learning for early detection and personalized treatment,” Chandrakasan continues. “We imagine artificial intelligence in sync with, complementing, and assisting our own intelligence. And we imagine every scientist and engineer having access to human-intelligence-inspired algorithms that open new avenues of discovery in their fields. Researchers across our campus want to push the boundaries of what’s possible.”

Engaging energetically with partners

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab, which was announced in September 2017. MIT researchers will collaborate with each other and with industry on challenges that range in scale from the very broad to the very specific.

“In the short time since we began our collaboration with IBM, the lab has garnered tremendous interest inside and outside MIT, and it will be a vital part of MIT IQ,” says President Reif.

John E. Kelly III, IBM senior vice president for cognitive solutions and research, says, “To take on the world’s greatest challenges and seize its biggest opportunities, we need to rapidly advance both AI technology and our understanding of human intelligence. Building on decades of collaboration — including our extensive joint MIT–IBM Watson AI Lab — IBM and MIT will together shape a new agenda for intelligence research and its applications. We are proud to be a cornerstone of this expanded initiative.”

MIT will seek to establish additional entities within MIT IQ, in partnership with corporate and philanthropic organizations.

Why MIT

MIT has been on the frontier of intelligence research since the 1950s, when pioneers Marvin Minsky and John McCarthy helped establish the field of artificial intelligence.

MIT now has over 200 principal investigators whose research bears directly on intelligence. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Brain and Cognitive Sciences (BCS) — along with the McGovern Institute for Brain Research and the Picower Institute for Learning and Memory — collaborate on a range of projects. MIT is also home to the National Science Foundation–funded center for Brains, Minds and Machines (CBMM) — the only national center of its kind.

Four years ago, MIT launched the Institute for Data, Systems, and Society (IDSS) with a mission of advancing data science, particularly in the context of social systems. It is anticipated that faculty and students from IDSS will play a critical role in this initiative.

Faculty from across the Institute will participate in the initiative, including researchers in the Media Lab, the Operations Research Center, the Sloan School of Management, the School of Architecture and Planning, and the School of Humanities, Arts, and Social Sciences.

“Our quest will amount to a journey taken together by all five schools at MIT,” says Provost Schmidt. “Success will rest on a shared sense of purpose and a mix of contributions from a wide variety of disciplines. I’m excited by the new thinking we can help unlock.”

At the heart of MIT IQ will be collaboration among researchers in human and artificial intelligence.

“To revolutionize the field of artificial intelligence, we should continue to look to the roots of intelligence: the brain,” says James DiCarlo, department head and Peter de Florez Professor of Neuroscience in the Department of Brain and Cognitive Sciences. “By working with engineers and artificial intelligence researchers, human intelligence researchers can build models of the brain systems that produce intelligent behavior. The time is now, as model building at the scale of those brain systems is now possible. Discovering how the brain works in the language of engineers will not only lead to transformative AI — it will also illuminate entirely new ways to repair, educate, and augment our own minds.”

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and director of CSAIL, agrees. MIT researchers, she says, “have contributed pioneering and visionary solutions for intelligence since the beginning of the field, and are excited to make big leaps to understand human intelligence and to engineer significantly more capable intelligent machines. Understanding intelligence will give us the knowledge to understand ourselves and to create machines that will support us with cognitive and physical work.”

David Siegel, who earned a PhD in computer science at MIT in 1991 pursuing research at MIT’s Artificial Intelligence Laboratory, and who is a member of the MIT Corporation and an advisor to the MIT Center for Brains, Minds, and Machines, has been integral to the vision and formation of MIT IQ and will continue to help shape the effort. “Understanding human intelligence is one of the greatest scientific challenges,” he says, “one that helps us understand who we are while meaningfully advancing the field of artificial intelligence.” Siegel is co-chairman and a founder of Two Sigma Investments, LP.

The fruits of research

MIT IQ will thus provide a platform for long-term research, encouraging the foundational advances of the future. At the same time, MIT professors and researchers may develop technologies with near-term value, leading to new kinds of collaborations with existing companies — and to new companies.

Some such entrepreneurial efforts could be supported by The Engine, an Institute initiative launched in October 2016 to support startup companies pursuing particularly ambitious goals.

Other innovations stemming from MIT IQ could be absorbed into the innovation ecosystem surrounding the Institute — in Kendall Square, Cambridge, and the Boston metropolitan area. MIT is located in close proximity to a world-leading nexus of biotechnology and medical-device research and development, as well as a cluster of leading-edge technology firms that study and deploy machine intelligence.

MIT also has roots in centers of innovation elsewhere in the United States and around the world, through faculty research projects, institutional and industry collaborations, and the activities and leadership of its alumni. MIT IQ will seek to connect to innovative companies and individuals who share MIT’s passion for work in intelligence.

Eric Schmidt, former executive chairman of Alphabet, has helped MIT form the vision for MIT IQ. “Imagine the good that can be done by putting novel machine-learning tools in the hands of those who can make great use of them,” he says. “MIT IQ can become a fount of exciting new capabilities.”

“I am thrilled by today’s news,” says President Reif. “Drawing on MIT’s deep strengths and signature values, culture, and history, MIT IQ promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.”

“MIT is placing a bet,” he says, “on the central importance of intelligence research to meeting the needs of humanity.”

MIT: Optimizing carbon nanotube electrodes for energy storage and water desalination applications


Evelyn Wang (left) and Heena Mutha have developed a nondestructive method of quantifying the detailed characteristics of carbon nanotube (CNT) samples — a valuable tool for optimizing these materials for use as electrodes in a variety of practical devices. Photo: Stuart Darsch

New model measures characteristics of carbon nanotube structures for energy storage and water desalination applications.

Using electrodes made of carbon nanotubes (CNTs) can significantly improve the performance of devices ranging from capacitors and batteries to water desalination systems. But figuring out the physical characteristics of vertically aligned CNT arrays that yield the most benefit has been difficult.

Now an MIT team has developed a method that can help. By combining simple benchtop experiments with a model describing porous materials, the researchers have found they can quantify the morphology of a CNT sample, without destroying it in the process.

In a series of tests, the researchers confirmed that their adapted model can reproduce key measurements taken on CNT samples under varying conditions. They’re now using their approach to determine detailed parameters of their samples — including the spacing between the nanotubes — and to optimize the design of CNT electrodes for a device that rapidly desalinates brackish water.

A common challenge in developing energy storage devices and desalination systems is finding a way to transfer electrically charged particles onto a surface and store them there temporarily. In a capacitor, for example, ions in an electrolyte must be deposited as the device is being charged and later released when electricity is being delivered. During desalination, dissolved salt must be captured and held until the cleaned water has been withdrawn.

One way to achieve those goals is by immersing electrodes into the electrolyte or the saltwater and then imposing a voltage on the system. The electric field that’s created causes the charged particles to cling to the electrode surfaces. When the voltage is cut, the particles immediately let go.

“Whether salt or other charged particles, it’s all about adsorption and desorption,” says Heena Mutha PhD ’17, a senior member of technical staff at the Charles Stark Draper Laboratory. “So the electrodes in your device should have lots of surface area as well as open pathways that allow the electrolyte or saltwater carrying the particles to travel in and out easily.”

One way to increase the surface area is by using CNTs. In a conventional porous material, such as activated charcoal, interior pores provide extensive surface area, but they’re irregular in size and shape, so accessing them can be difficult. In contrast, a CNT “forest” is made up of aligned pillars that provide the needed surfaces and straight pathways, so the electrolyte or saltwater can easily reach them.

However, optimizing the design of CNT electrodes for use in devices has proven tricky. Experimental evidence suggests that the morphology of the material — in particular, how the CNTs are spaced out — has a direct impact on device performance. Increasing the carbon concentration when fabricating CNT electrodes produces a more tightly packed forest and more abundant surface area. But at a certain density, performance starts to decline, perhaps because the pillars are too close together for the electrolyte or saltwater to pass through easily.

Designing for device performance


“Much work has been devoted to determining how CNT morphology affects electrode performance in various applications,” says Evelyn Wang, the Gail E. Kendall Professor of Mechanical Engineering. “But an underlying question is, ‘How can we characterize these promising electrode materials in a quantitative way, so as to investigate the role played by such details as the nanometer-scale interspacing?'”

Inspecting a cut edge of a sample can be done using a scanning electron microscope (SEM). But quantifying features, such as spacing, is difficult, time-consuming, and not very precise. Analyzing data from gas adsorption experiments works well for some porous materials, but not for CNT forests. Moreover, such methods destroy the material being tested, so samples whose morphologies have been characterized can’t be used in tests of overall device performance.

For the past two years, Wang and Mutha have been working on a better option. “We wanted to develop a nondestructive method that combines simple electrochemical experiments with a mathematical model that would let us ‘back calculate’ the interspacing in a CNT forest,” Mutha says. “Then we could estimate the porosity of the CNT forest — without destroying it.”

Adapting the conventional model

One widely used method for studying porous electrodes is electrochemical impedance spectroscopy (EIS). It involves pulsing voltage across electrodes in an electrochemical cell at a set time interval (frequency) while monitoring “impedance,” a measure that depends on the available storage space and resistance to flow. The set of impedance measurements taken across a range of frequencies is called the “frequency response.”

The classic model describing porous media uses that frequency response to calculate how much open space there is in a porous material. “So we should be able to use [the model] to calculate the space between the carbon nanotubes in a CNT electrode,” Mutha says.

But there’s a problem: This model assumes that all pores are uniform, cylindrical voids. But that description doesn’t fit electrodes made of CNTs. Mutha modified the model to more accurately define the pores in CNT materials as the void spaces surrounding solid pillars. While others have similarly altered the classic model, Mutha took her alterations a step further. The nanotubes in a CNT material are unlikely to be packed uniformly, so she added to her equations the ability to account for variations in the spacing between the nanotubes. With this modified model, Mutha could analyze EIS data from real samples to calculate CNT spacings.
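The behavior the classic model predicts can be sketched with the de Levie transmission-line expression for a single uniform cylindrical pore, the unmodified starting point Mutha worked from. This is an illustrative calculation with made-up parameter values, not the team's actual code:

```python
import numpy as np

def pore_impedance(omega, r_ion, c_dl):
    """de Levie impedance of one uniform cylindrical pore.

    omega: angular frequency (rad/s)
    r_ion: total ionic resistance along the pore (ohms)
    c_dl:  total double-layer capacitance of the pore wall (farads)
    """
    s = np.sqrt(1j * omega * r_ion * c_dl)              # dimensionless
    return np.sqrt(r_ion / (1j * omega * c_dl)) / np.tanh(s)

freqs = np.logspace(-2, 4, 200)                          # Hz
z = pore_impedance(2 * np.pi * freqs, r_ion=100.0, c_dl=1e-3)

# High frequency: phase is about -45 degrees (the signal penetrates only
# partway into the pore, giving the 45-degree line on a Nyquist plot).
# Low frequency: Z -> r_ion/3 + 1/(j*omega*c_dl), a vertical capacitive line.
```

Replacing the single uniform pore with a distribution of void sizes around solid pillars, as Mutha did, smears the sharp transition between those two regimes.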

Using the model

To demonstrate her approach, Mutha first fabricated a series of laboratory samples and then measured their frequency response. In collaboration with Yuan “Jenny” Lu ’15, a materials science and engineering graduate, she deposited thin layers of aligned CNTs onto silicon wafers inside a furnace and then used water vapor to separate the CNTs from the silicon, producing free-standing forests of nanotubes. To vary the CNT spacing, she used a technique developed by MIT collaborators in the Department of Aeronautics and Astronautics, Professor Brian Wardle and postdoctoral associate Itai Stein PhD ’16. Using a custom plastic device, she mechanically squeezed her samples from four sides, thereby packing the nanotubes together more tightly and increasing the volume fraction — that is, the fraction of the total volume occupied by the solid CNTs.

To test the frequency response of the samples, she used a glass beaker containing three electrodes immersed in an electrolyte. One electrode is the CNT-coated sample, while the other two are used to monitor the voltage and to absorb and measure the current. Using that setup, she first measured the capacitance of each sample, meaning how much charge it could store in each square centimeter of surface area at a given constant voltage. She then ran EIS tests on the samples and analyzed results using her modified porous media model.

Results for the three volume fractions tested show the same trends. As the voltage pulses become less frequent, the curves initially rise at about a 45-degree slope. But at some point, each one shifts toward vertical, with resistance becoming constant and impedance continuing to rise.

As Mutha explains, those trends are typical of EIS analyses. “At high frequencies, the voltage changes so quickly that — because of resistance in the CNT forest — it doesn’t penetrate the depth of the entire electrode material, so the response comes only from the surface or partway in,” she says. “But eventually the frequency is low enough that there’s time between pulses for the voltage to penetrate and for the whole sample to respond.”

Resistance is no longer a noticeable factor, so the line becomes vertical, with the capacitance component causing impedance to rise as more charged particles attach to the CNTs. That switch to vertical occurs earlier with the lower-volume-fraction samples. In sparser forests, the spaces are larger, so the resistance is lower.

The most striking feature of Mutha’s results is the gradual transition from the high-frequency to the low-frequency regime. Calculations from a model based on uniform spacing — the usual assumption — show a sharp transition from partial to complete electrode response. Because Mutha’s model incorporates subtle variations in spacing, the transition is gradual rather than abrupt. Her experimental measurements and model results both exhibit that behavior, suggesting that the modified model is more accurate.

By combining their impedance spectroscopy results with their model, the MIT researchers inferred the CNT interspacing in their samples. Since the forest packing geometry is unknown, they performed the analyses based on three- and six-pillar configurations to establish upper and lower bounds. Their calculations showed that spacing can range from 100 nanometers in sparse forests to below 10 nanometers in densely packed forests.
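As a rough cross-check on the scale of those bounds, for perfectly straight cylinders on an ideal hexagonal lattice the edge-to-edge gap follows directly from the volume fraction. The sketch below uses a hypothetical 8-nanometer tube diameter for illustration, not a figure from the paper:

```python
import math

def hex_gap(d_nm, vf):
    """Edge-to-edge gap between parallel cylinders of diameter d_nm in an
    ideal hexagonal array at volume fraction vf.

    For this packing, vf = (pi / (2*sqrt(3))) * (d/s)**2, where s is the
    center-to-center spacing; solve for s and subtract the diameter."""
    s = d_nm * math.sqrt(math.pi / (2.0 * math.sqrt(3.0) * vf))
    return s - d_nm

g_sparse = hex_gap(8.0, 0.01)   # ~68 nm gap at 1 percent volume fraction
g_dense = hex_gap(8.0, 0.26)    # ~7 nm gap at 26 percent
```

Even this idealized geometry lands in the same range the EIS analysis produced: tens of nanometers in sparse forests, single-digit nanometers in dense ones.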

Comparing approaches

Work in collaboration with Wardle and Stein has validated the two groups’ differing approaches to determining CNT morphology. In their studies, Wardle and Stein use an approach similar to Monte Carlo modeling, which is a statistical technique that involves simulating the behavior of an uncertain system thousands of times under varying assumptions to produce a range of plausible outcomes, some more likely than others. For this application, they assumed a random distribution of “seeds” for carbon nanotubes, simulated their growth, and then calculated characteristics, such as inter-CNT spacing with an associated variability. Along with other factors, they assigned some degree of waviness to the individual CNTs to test the impact on the calculated spacing.
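A minimal sketch of the seed-based idea, assuming straight tubes and none of the growth physics or waviness in Wardle and Stein's actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_spacing(n_tubes, box_nm):
    """Drop CNT 'seeds' uniformly at random in a square patch and return
    the mean center-to-center nearest-neighbor distance (straight,
    vertical tubes, so the 2-D seed positions fix the spacing)."""
    pts = rng.uniform(0.0, box_nm, size=(n_tubes, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # ignore self-distances
    return d.min(axis=1).mean()

sparse = mean_nn_spacing(100, box_nm=1000.0)     # low volume fraction
dense = mean_nn_spacing(1000, box_nm=1000.0)     # high volume fraction
# Denser seeding gives smaller mean spacing, mirroring the trend both
# teams observed as volume fraction increases.
```

Repeating such draws many times yields a distribution of spacings, which is what lets the simulation report variability alongside the average.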

To compare their approaches, the two MIT teams performed parallel analyses that determined average spacing at increasing volume fractions. The trends they exhibited matched well, with spacing decreasing as volume fraction increases. However, at a volume fraction of about 26 percent, the EIS spacing estimates suddenly go up — an outcome that Mutha believes may reflect packing irregularities caused by buckling of the CNTs as she was densifying them.

To investigate the role played by waviness, Mutha compared the variabilities in her results with those in Stein’s results from simulations assuming different degrees of waviness. At high volume fractions, the EIS variabilities were closest to those from the simulations assuming little or no waviness. But at low volume fractions, the closest match came from simulations assuming high waviness.

Based on those findings, Mutha concludes that waviness should be considered when performing EIS analyses — at least in some cases. “To accurately predict the performance of devices with sparse CNT electrodes, we may need to model the electrode as having a broad distribution of interspacings due to the waviness of the CNTs,” she says. “At higher volume fractions, waviness effects may be negligible, and the system can be modeled as simple pillars.”

The researchers’ nondestructive yet quantitative technique provides device designers with a valuable new tool for optimizing the morphology of porous electrodes for a wide range of applications. Already, Mutha and Wang have been using it to predict the performance of supercapacitors and desalination systems. Recent work has focused on designing a high-performance, portable device for the rapid desalination of brackish water. Results to date show that using their approach to optimize the design of CNT electrodes and the overall device simultaneously can as much as double the salt adsorption capacity of the system, while speeding up the rate at which clean water is produced.

This research was supported in part by the MIT Energy Initiative Seed Fund Program and by the King Fahd University of Petroleum and Minerals (KFUPM) in Dhahran, Saudi Arabia, through the Center for Clean Water and Clean Energy at MIT and KFUPM. Mutha’s work was supported by a National Science Foundation Graduate Research Fellowship and Stein’s work by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship Program.

MIT: A new approach to rechargeable batteries – metal-mesh membrane could solve longstanding problems and lead to inexpensive power storage


A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. Illustration modified from an original image by Felice Frankel

New metal-mesh membrane could solve longstanding problems and lead to inexpensive power storage.

A type of battery first invented nearly five decades ago could catapult to the forefront of energy storage technologies, thanks to a new finding by researchers at MIT. The battery, based on electrodes made of sodium and nickel chloride and using a new type of metal mesh membrane, could be used for grid-scale installations to make intermittent power sources such as wind and solar capable of delivering reliable baseload electricity.

The findings are being reported today in the journal Nature Energy, by a team led by MIT professor Donald Sadoway, postdocs Huayi Yin and Brice Chung, and four others.

Although the basic battery chemistry the team used, based on a liquid sodium electrode material, was first described in 1968, the concept never caught on as a practical approach because of one significant drawback: It required the use of a thin membrane to separate its molten components, and the only known material with the needed properties for that membrane was a brittle and fragile ceramic. These paper-thin membranes made the batteries too easily damaged in real-world operating conditions, so apart from a few specialized industrial applications, the system has never been widely implemented.

But Sadoway and his team took a different approach, realizing that the functions of that membrane could instead be performed by a specially coated metal mesh, a much stronger and more flexible material that could stand up to the rigors of use in industrial-scale storage systems.

“I consider this a breakthrough,” Sadoway says, because for the first time in five decades, this type of battery — whose advantages include cheap, abundant raw materials, very safe operational characteristics, and an ability to go through many charge-discharge cycles without degradation — could finally become practical.

While some companies have continued to make liquid-sodium batteries for specialized uses, “the cost was kept high because of the fragility of the ceramic membranes,” says Sadoway, the John F. Elliott Professor of Materials Chemistry. “Nobody’s really been able to make that process work,” including GE, which spent nearly 10 years working on the technology before abandoning the project.

As Sadoway and his team explored various options for the different components in a molten-metal-based battery, they were surprised by the results of one of their tests using lead compounds. “We opened the cell and found droplets” inside the test chamber, which “would have to have been droplets of molten lead,” he says. But instead of acting as a membrane, as expected, the compound material “was acting as an electrode,” actively taking part in the battery’s electrochemical reaction.

“That really opened our eyes to a completely different technology,” he says. The membrane had performed its role — selectively allowing certain molecules to pass through while blocking others — in an entirely different way, using its electrical properties rather than the typical mechanical sorting based on the sizes of pores in the material.

In the end, after experimenting with various compounds, the team found that an ordinary steel mesh coated with a solution of titanium nitride could perform all the functions of the previously used ceramic membranes, but without the brittleness and fragility. The results could make possible a whole family of inexpensive and durable materials practical for large-scale rechargeable batteries.

The use of the new type of membrane can be applied to a wide variety of molten-electrode battery chemistries, he says, and opens up new avenues for battery design. “The fact that you can build a sodium-sulfur type of battery, or a sodium/nickel-chloride type of battery, without resorting to the use of fragile, brittle ceramic — that changes everything,” he says.

The work could lead to inexpensive batteries large enough to make intermittent, renewable power sources practical for grid-scale storage, and the same underlying technology could have other applications as well, such as for some kinds of metal production, Sadoway says.

Sadoway cautions that such batteries would not be suitable for some major uses, such as cars or phones. Their strong point is in large, fixed installations where cost is paramount, but size and weight are not, such as utility-scale load leveling. In those applications, inexpensive battery technology could potentially enable a much greater percentage of intermittent renewable energy sources to take the place of baseload, always-available power sources, which are now dominated by fossil fuels.

The research team included Fei Chen, a visiting scientist from Wuhan University of Technology; Nobuyuki Tanaka, a visiting scientist from the Japan Atomic Energy Agency; MIT research scientist Takanari Ouchi; and postdocs Huayi Yin, Brice Chung, and Ji Zhao. The work was supported by the French oil company Total S.A. through the MIT Energy Initiative.

MIT: Novel methods of synthesizing quantum dot materials – promising materials for high performance in electronic and optical devices


These images show scanning electron micrographs of the researchers’ sample quantum dot films. The dark spots are the individual quantum dots, each about 5 nanometers in diameter. Images a and b show the consistent size and alignment of the quantum dots.

For quantum dot (QD) materials to perform well in devices such as solar cells, the nanoscale crystals in them need to pack together tightly so that electrons can hop easily from one dot to the next and flow out as current. MIT researchers have now made QD films in which the dots vary by just one atom in diameter and are organized into solid lattices with unprecedented order. Subsequent processing pulls the QDs in the film closer together, further easing the electrons’ pathway. Tests using an ultrafast laser confirm that the energy levels of vacancies in adjacent QDs are so similar that hopping electrons don’t get stuck in low-energy dots along the way.

Taken together, the results suggest a new direction for ongoing efforts to develop these promising materials for high performance in electronic and optical devices.

In recent decades, much research attention has focused on electronic materials made of quantum dots (QDs), which are tiny crystals of semiconducting materials a few nanometers in diameter. After three decades of research, QDs are now being used in TV displays, where they emit bright light in vivid colors that can be fine-tuned by changing the sizes of the nanoparticles. But many opportunities remain for taking advantage of these remarkable materials.

“QDs are a really promising underlying materials technology for [many] applications,” says William Tisdale, the ARCO Career Development Professor in Energy Studies and an associate professor of chemical engineering.

QD materials pique his interest for several reasons. QDs are easily synthesized in a solvent at low temperatures using standard procedures. The QD-bearing solvent can then be deposited on a surface—small or large, rigid or flexible—and as it dries, the QDs are left behind as a solid. Best of all, the electronic and optical properties of that solid can be controlled by tuning the QDs.

“With QDs, you have all these degrees of freedom,” says Tisdale. “You can change their composition, size, shape, and surface chemistry to fabricate a material that’s tailored for your application.”

The ability to adjust electron behavior to suit specific devices is of particular interest. For example, in solar photovoltaics (PVs), electrons should pick up energy from sunlight and then move rapidly through the material and out as current before they lose their excess energy. In light-emitting diodes (LEDs), high-energy “excited” electrons should relax on cue, emitting their extra energy as light.

With thermoelectric (TE) devices, QD materials could be a game-changer. When TE materials are hotter on one side than the other, they generate electricity. So TE devices could turn waste heat in car engines, industrial equipment, and other sources into power—without combustion or moving parts. The TE effect has been known for a century, but devices using TE materials have remained inefficient. The problem: While those materials conduct electricity well, they also conduct heat well, so the temperatures of the two ends of a device quickly equalize. In most materials, measures to decrease heat flow also decrease electron flow.

“With QDs, we can control those two properties separately,” says Tisdale. “So we can simultaneously engineer our material so it’s good at transferring electrical charge but bad at transporting heat.”
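The decoupling Tisdale describes is usually quantified by the dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa, where S is the Seebeck coefficient, sigma the electrical conductivity, kappa the thermal conductivity, and T the absolute temperature. The numbers below are generic illustrative values, not measurements from this work:

```python
def figure_of_merit(seebeck, sigma, kappa, temp):
    """Thermoelectric figure of merit zT = S^2 * sigma * T / kappa.

    seebeck: Seebeck coefficient S (V/K)
    sigma:   electrical conductivity (S/m)  -- want this high
    kappa:   thermal conductivity (W/(m*K)) -- want this low
    temp:    absolute temperature (K)
    """
    return seebeck**2 * sigma * temp / kappa

# Generic good-material numbers: S = 200 uV/K, sigma = 1e5 S/m,
# kappa = 1.5 W/(m*K), T = 600 K
zt = figure_of_merit(200e-6, 1e5, 1.5, 600.0)
```

Cutting kappa in half while holding sigma fixed doubles zT, which is exactly the lever QD materials are meant to provide.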

Making good arrays

One challenge in working with QDs has been to make particles that are all the same size and shape. During QD synthesis, quadrillions of nanocrystals are deposited onto a surface, where they self-assemble in an orderly fashion as they dry. If the individual QDs aren’t all exactly the same, they can’t pack together tightly, and electrons won’t move easily from one nanocrystal to the next.

Three years ago, a team in Tisdale’s lab led by Mark Weidman Ph.D. ’16 demonstrated a way to reduce that structural disorder. In a series of experiments with lead-sulfide QDs, team members found that carefully selecting the ratio between the lead and sulfur in the starting materials would produce QDs of uniform size.

“As those nanocrystals dry, they self-assemble into a beautifully ordered arrangement we call a superlattice,” Tisdale says.

As shown in these schematics, at the center of a quantum dot is a core of a semiconducting material. Radiating outward from that core are arms, or ligands, of an organic material. The ligands keep the quantum dots in solution from sticking together.

Scanning electron microscope images of those superlattices taken from several angles show lined-up, 5-nanometer-diameter nanocrystals throughout the samples and confirm the long-range ordering of the QDs.

For a closer examination of their materials, Weidman performed a series of X-ray scattering experiments at the National Synchrotron Light Source at Brookhaven National Laboratory. Data from those experiments showed both how the QDs are positioned relative to one another and how they’re oriented, that is, whether they’re all facing the same way. The results confirmed that QDs in the superlattices are well ordered and essentially all the same.

“On average, the difference in diameter between one nanocrystal and another was less than the size of one more atom added to the surface,” says Tisdale. “So these QDs have unprecedented monodispersity, and they exhibit structural behavior that we hadn’t seen previously because no one could make QDs this monodisperse.”
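Monodispersity like this is commonly expressed as the coefficient of variation of the particle diameters, the standard deviation divided by the mean. A short sketch (the diameter lists below are hypothetical illustrations, not the team's data):

```python
import statistics

def percent_dispersity(diameters_nm):
    """Coefficient of variation (std/mean) of particle diameters, in percent."""
    return 100 * statistics.pstdev(diameters_nm) / statistics.fmean(diameters_nm)

# Hypothetical 5 nm QD batches. Adding one atomic layer to a PbS
# nanocrystal changes its diameter by only a few tenths of a nanometer,
# so a sub-atom spread corresponds to a very small coefficient of variation.
narrow = [4.95, 5.00, 5.05, 5.00, 4.98, 5.02]
broad = [4.3, 5.6, 4.8, 5.4, 4.6, 5.3]

print(percent_dispersity(narrow))  # under 1 percent: monodisperse
print(percent_dispersity(broad))   # around 9 percent: a disordered batch
```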

Controlling electron hopping

The researchers next focused on how to tailor their monodisperse QD materials for efficient transfer of electrical current. “In a PV or TE device made of QDs, the electrons need to be able to hop effortlessly from one dot to the next and then do that many thousands of times as they make their way to the metal electrode,” Tisdale explains.

One way to influence hopping is by controlling the spacing from one QD to the next. A single QD consists of a core of semiconducting material—in this work, lead sulfide—with chemically bound arms, or ligands, made of organic (carbon-containing) molecules radiating outward. The ligands play a critical role—without them, as the QDs form in solution, they’d stick together and drop out as a solid clump. Once the QD layer is dry, the ligands end up as solid spacers that determine how far apart the nanocrystals are.

A standard ligand material used in QD synthesis is oleic acid. Given the length of an oleic acid ligand, the QDs in the dry superlattice end up about 2.6 nanometers apart—and that’s a problem.

“That may sound like a small distance, but it’s not,” says Tisdale. “It’s way too big for a hopping electron to get across.”

Using shorter ligands in the starting solution would reduce that distance, but they wouldn’t keep the QDs from sticking together when they’re in solution. “So we needed to swap out the long oleic acid ligands in our solid materials for something shorter” after the film formed, Tisdale says.

To achieve that replacement, the researchers use a process called ligand exchange. First, they prepare a mixture of a shorter ligand and an organic solvent that will dissolve oleic acid but not the lead sulfide QDs. They then submerge the QD film in that mixture for 24 hours. During that time, the oleic acid ligands dissolve, and the new, shorter ligands take their place, pulling the QDs closer together. The solvent and oleic acid are then rinsed off.

Tests with various ligands confirmed their impact on interparticle spacing. Depending on the length of the selected ligand, the researchers could reduce that spacing from the original 2.6 nanometers with oleic acid all the way down to 0.4 nanometers. However, while the resulting films have beautifully ordered regions—perfect for fundamental studies—inserting the shorter ligands tends to generate cracks as the overall volume of the QD sample shrinks.
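The cracking makes sense from simple geometry: shrinking the edge-to-edge gap from 2.6 to 0.4 nanometers around a 5-nanometer core removes a large fraction of the film's volume. A back-of-the-envelope sketch, assuming the center-to-center pitch shrinks uniformly in all three dimensions (a simplification, not a measurement from the study):

```python
CORE_NM = 5.0  # QD core diameter reported in the article

def shrinkage(old_gap_nm, new_gap_nm, core_nm=CORE_NM):
    """Fractional volume lost when the edge-to-edge gap between QDs in a
    superlattice shrinks, assuming the center-to-center pitch contracts
    uniformly in all three dimensions."""
    old_pitch = core_nm + old_gap_nm
    new_pitch = core_nm + new_gap_nm
    return 1 - (new_pitch / old_pitch) ** 3

# Exchange from oleic acid (2.6 nm gap) to the shortest ligand (0.4 nm):
print(f"{shrinkage(2.6, 0.4):.0%} of the film volume is lost")  # about 64%
```

Losing well over half the film's volume during ligand exchange is more than enough strain to open cracks.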

Energetic alignment of nanocrystals

One result of that work came as a surprise: Ligands known to yield high performance in lead-sulfide-based solar cells didn’t produce the shortest interparticle spacing in their tests.

Novel methods of synthesizing quantum dot materials
These graphs show electron energy measurements in a standard quantum dot film (top) and in a film made from monodisperse quantum dots (bottom). In each graph, the data points show energy measurements at initial excitation.

“Reducing that spacing to get good conductivity is necessary,” says Tisdale. “But there may be other aspects of our QD material that we need to optimize to facilitate electron transfer.”

One possibility is a mismatch between the energy levels of the electrons in adjacent QDs. In any material, electrons exist at only two energy levels—a low ground state and a high excited state. If an electron in a QD film receives extra energy—say, from incoming sunlight—it can jump up to its excited state and move through the material until it finds a low-energy opening left behind by another traveling electron. It then drops down to its ground state, releasing its excess energy as heat or light.

In solid crystals, those two energy levels are a fixed characteristic of the material itself. But in QDs, they vary with particle size. Make a QD smaller and the energy level of its excited electrons increases. Again, variability in QD size can create problems. Once excited, a high-energy electron in a small QD will hop from dot to dot—until it comes to a large, low-energy QD.

“Excited electrons like going downhill more than they like going uphill, so they tend to hang out on the low-energy dots,” says Tisdale. “If there’s then a high-energy dot in the way, it takes them a long time to get past that bottleneck.”
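One standard way to model this bottleneck—a common textbook picture, not taken from the paper—is a Miller-Abrahams-style hopping rate, in which downhill hops proceed at full speed while uphill hops are exponentially suppressed by a Boltzmann factor:

```python
import math

KT_MEV = 25.7  # thermal energy at room temperature, in milli-electronvolts

def hop_rate(delta_e_mev, base_rate=1.0):
    """Miller-Abrahams-style hop rate between two QDs: downhill (or level)
    hops occur at the base rate; uphill hops are suppressed by exp(-dE/kT).
    Energies in meV; base_rate is an arbitrary reference."""
    if delta_e_mev <= 0:  # downhill or level: no energy barrier
        return base_rate
    return base_rate * math.exp(-delta_e_mev / KT_MEV)

print(hop_rate(-50))   # downhill: full rate
print(hop_rate(+50))   # 50 meV uphill: roughly 7x slower
print(hop_rate(+100))  # 100 meV uphill: roughly 50x slower
```

Even a modest energy mismatch between neighboring dots slows a hop many times over, which is why energetic disorder throttles mobility.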

So the greater the mismatch between energy levels—called energetic disorder—the worse the electron mobility. To measure the impact of energetic disorder on electron flow in their samples, Rachel Gilmore Ph.D. ’17 and her collaborators used a technique called pump-probe spectroscopy—as far as they know, the first time this method has been used to study electron hopping in QDs.

QDs in an excited state absorb light differently than do those in the ground state, so shining light through a material and taking an absorption spectrum provides a measure of the electronic states in it. But in QD materials, electron hopping events can occur within picoseconds (10⁻¹² seconds)—faster than any electrical detector can measure.

The researchers therefore set up a special experiment using an ultrafast laser that delivers 100,000 quick pulses per second. Their setup splits each pulse into a pump pulse that excites a sample and—after a delay measured in femtoseconds (10⁻¹⁵ seconds)—a corresponding probe pulse that measures the sample’s energy state after the delay. By gradually increasing the delay between the pump and probe pulses, they gather absorption spectra that show how much electron transfer has occurred and how quickly the excited electrons drop back to their ground state.
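Femtosecond delays are possible because they come from path-length differences rather than electronics: light covers about 0.3 micrometers per femtosecond, so micrometer-scale movements of a mechanical delay stage set the pump-probe separation. A sketch assuming a standard double-pass retroreflector stage (a geometry the article doesn't specify):

```python
C_UM_PER_FS = 0.2998  # speed of light: about 0.3 micrometers per femtosecond

def delay_fs(stage_move_um):
    """Pump-probe delay, in femtoseconds, from moving a retroreflector
    delay stage by stage_move_um micrometers. The beam travels the extra
    distance out and back, so the path change is twice the stage move."""
    return 2 * stage_move_um / C_UM_PER_FS

print(delay_fs(15))   # about 100 fs from a 15-micrometer stage move
print(delay_fs(150))  # about 1 ps from a 150-micrometer stage move
```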

Using this technique, they measured electron energy in a QD sample with standard dot-to-dot variability and in one of the monodisperse samples. In the sample with standard variability, the excited electrons lose much of their excess energy within 3 nanoseconds. In the monodisperse sample, little energy is lost in the same time period—an indication that the energy levels of the QDs are all about the same.

By combining their spectroscopy results with computer simulations of the electron transport process, the researchers extracted electron hopping times ranging from 80 picoseconds for their smallest quantum dots to over 1 nanosecond for the largest ones. And they concluded that their QD materials are at the theoretical limit of how little energetic disorder is possible. Indeed, any difference in energy between neighboring QDs isn’t a problem. At room temperature, energy levels are always vibrating a bit, and those fluctuations are larger than the small differences from one QD to the next.

“So at some instant, random kicks in energy from the environment will cause the energy levels of the QDs to line up, and the electron will do a quick hop,” says Tisdale.

The way forward

With energetic disorder no longer a concern, Tisdale concludes that further progress in making commercially viable QD devices will require better ways of dealing with structural disorder. He and his team tested several methods of performing ligand exchange in solid samples, and none produced films with consistent QD size and spacing over large areas without cracks. As a result, he now believes that efforts to optimize that process “may not take us where we need to go.”

What’s needed instead is a way to put short ligands on the QDs when they’re in solution and then let them self-assemble into the desired structure.

“There are some emerging strategies for solution-phase ligand exchange,” he says. “If they’re successfully developed and combined with monodisperse QDs, we should be able to produce beautifully ordered, large-area structures well suited for devices such as solar cells, LEDs, and thermoelectric systems.”


More information: Rachel H. Gilmore et al. Charge Carrier Hopping Dynamics in Homogeneously Broadened PbS Quantum Dot Solids, Nano Letters (2017). DOI: 10.1021/acs.nanolett.6b04201

Mark C. Weidman et al. Monodisperse, Air-Stable PbS Nanocrystals via Precursor Stoichiometry Control, ACS Nano (2014). DOI: 10.1021/nn5018654

Mark C. Weidman et al. Interparticle Spacing and Structural Ordering in Superlattice PbS Nanocrystal Solids Undergoing Ligand Exchange, Chemistry of Materials (2014). DOI: 10.1021/cm503626s

 

MIT Engineers create Plants that “Glow” – Embedded Nanoparticles could Illuminate Workspace


Illumination of a book (“Paradise Lost,” by John Milton) by nanobionic light-emitting plants (two 3.5-week-old watercress plants). The book and the light-emitting watercress plants were placed in front of reflective paper.

Imagine that instead of switching on a lamp when it gets dark, you could read by the light of a glowing plant on your desk.

MIT engineers have taken a critical first step toward making that vision a reality. By embedding specialized nanoparticles into the leaves of a watercress plant, they induced the plants to give off dim light for nearly four hours. They believe that, with further optimization, such plants will one day be bright enough to illuminate a workspace.

“The vision is to make a plant that will function as a desk lamp—a lamp that you don’t have to plug in. The light is ultimately powered by the energy metabolism of the plant itself,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study.

This technology could also be used to provide low-intensity indoor lighting, or to transform trees into self-powered streetlights, the researchers say.

MIT postdoc Seon-Yeong Kwak is the lead author of the study, which appears in the journal Nano Letters.

Nanobionic plants

Plant nanobionics, a new research area pioneered by Strano’s lab, aims to give plants novel features by embedding them with different types of nanoparticles. The group’s goal is to engineer plants to take over many of the functions now performed by electrical devices. The researchers have previously designed plants that can detect explosives and communicate that information to a smartphone, as well as plants that can monitor drought conditions.

Engineers create plants that glow
Glowing MIT logo printed on the leaf of an arugula plant. The mixture of nanoparticles was infused into the leaf using lab-designed syringe termination adaptors. The image merges a bright-field image with the light emission captured in the dark. Credit: Kwak Seonyeong

Lighting, which accounts for about 20 percent of worldwide energy consumption, seemed like a logical next target. “Plants can self-repair, they have their own energy, and they are already adapted to the outdoor environment,” Strano says. “We think this is an idea whose time has come. It’s a perfect problem for plant nanobionics.”

To create their glowing plants, the MIT team turned to luciferase, the enzyme that gives fireflies their glow. Luciferase acts on a molecule called luciferin, causing it to emit light. Another molecule called co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity.

The MIT team packaged each of these three components into a different type of nanoparticle carrier. The nanoparticles, which are all made of materials that the U.S. Food and Drug Administration classifies as “generally regarded as safe,” help each component get to the right part of the plant. They also prevent the components from reaching concentrations that could be toxic to the plants.

The researchers used silica nanoparticles about 10 nanometers in diameter to carry luciferase, and they used slightly larger particles of the polymers PLGA and chitosan to carry luciferin and coenzyme A, respectively. To get the particles into plant leaves, the researchers first suspended the particles in a solution. Plants were immersed in the solution and then exposed to high pressure, allowing the particles to enter the leaves through tiny pores called stomata.

Particles releasing luciferin and coenzyme A were designed to accumulate in the extracellular space of the mesophyll, an inner layer of the leaf, while the smaller particles carrying luciferase enter the cells that make up the mesophyll. The PLGA particles gradually release luciferin, which then enters the plant cells, where luciferase performs the chemical reaction that makes luciferin glow.
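The scheme can be caricatured as two first-order steps: the PLGA particles release luciferin into the cells, and luciferase consumes it there, emitting light. A toy kinetic sketch (the rate constants are invented for illustration, not measured values from the study) reproduces the qualitative behavior of a glow that rises, peaks, and fades as the luciferin reservoir empties:

```python
def simulate(hours=6.0, dt=0.001, k_rel=1.0, k_cat=2.0):
    """Euler integration of a two-step first-order model: a PLGA reservoir
    releases luciferin at rate k_rel (per hour); luciferase consumes free
    luciferin at rate k_cat, and brightness tracks the consumption rate.
    All rate constants are illustrative, not measured."""
    reservoir, free = 1.0, 0.0  # luciferin still in PLGA / free in the cell
    brightness = []
    for _ in range(int(hours / dt)):
        released = k_rel * reservoir * dt
        consumed = k_cat * free * dt
        reservoir -= released
        free += released - consumed
        brightness.append(k_cat * free)  # emission tracks consumption rate
    return brightness

b = simulate()
peak = max(b)
print(f"glow peaks near hour {b.index(peak) * 0.001:.1f}")
print(f"fades to {b[-1] / peak:.0%} of peak brightness by hour 6")
```

Tuning the release rate of the real PLGA particles is, in this cartoon, the knob that trades peak brightness against glow duration—the same trade-off the researchers describe optimizing.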

The researchers’ early efforts yielded plants that could glow for about 45 minutes, which they have since extended to 3.5 hours. The light generated by one 10-centimeter watercress seedling is currently about one-thousandth of the amount needed to read by, but the researchers believe they can boost both the brightness and the duration of the light by further optimizing the concentration and release rates of the components.

Credit: Melanie Gonick/MIT

Plant transformation

Previous efforts to create light-emitting plants have relied on genetically engineering plants to express the gene for luciferase, but this is a laborious process that yields extremely dim light. Those studies were performed on tobacco plants and Arabidopsis thaliana, which are commonly used for plant genetic studies. However, the method developed by Strano’s lab could be used on any type of plant. So far, they have demonstrated it with arugula, kale, and spinach, in addition to watercress.

For future versions of this technology, the researchers hope to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources.

“Our target is to perform one treatment when the plant is a seedling or a mature plant, and have it last for the lifetime of the plant,” Strano says. “Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes.”

The researchers have also demonstrated that they can turn the light off by adding nanoparticles carrying a luciferase inhibitor. This could enable them to eventually create plants that shut off their light emission in response to environmental conditions such as sunlight, the researchers say.


More information: Seon-Yeong Kwak et al. A Nanobionic Light-Emitting Plant, Nano Letters (2017). DOI: 10.1021/acs.nanolett.7b04369