Form Energy – A Formidable Startup Tackling the Toughest Problems in Energy Storage


Industry veterans from Tesla, Aquion and A123 are trying to create cost-effective energy storage to last for weeks and months.

A crew of battle-tested cleantech veterans raised serious cash to solve the thorniest problem in clean energy.

As wind and solar power supply more and more of the grid’s electricity, seasonal swings in production become a bigger obstacle. A low- or no-carbon electricity system needs a way to dispatch clean energy on demand, even when wind and solar aren’t producing at their peaks.

Four-hour lithium-ion batteries can help on a given day, but energy storage for weeks or months has yet to arrive at scale.

Into the arena steps Form Energy, a new startup whose founders hope for commercialization not in a couple of years, but in the next decade.

More surprising, they’ve secured $9 million in Series A funding from investors who are happy to wait that long. The funders include both a major oil company and an international consortium dedicated to stopping climate change.

“Renewables have already gotten cheap,” said co-founder Ted Wiley, who worked at saltwater battery company Aquion prior to its bankruptcy. “They are cheaper than thermal generation. In order to foster a change, they need to be just as dependable and just as reliable as the alternative. Only long-duration storage can make that happen.”

It’s hard to overstate just how difficult it will be to deliver.

The members of Form will have to make up the playbook as they go along. The founders, though, have a clear-eyed view of the immense risks. They’ve systematically identified materials that they think can work, and they have a strategy for proving them out.

Wiley and Mateo Jaramillo, who built the energy storage business at Tesla, detailed their plans in an exclusive interview with Greentech Media, describing the pathway to weeks- and months-long energy storage and how it would reorient the entirety of the grid.

The team

Form Energy tackles its improbable mission with a team of founders who have already made their mark on the storage industry, and learned from its most notable failures.

There’s Jaramillo, the former theology student who built the world’s most recognizable stationary storage brand at Tesla before stepping away in late 2016. Soon after, he started work on the unsolved long-duration storage problem with a venture he called Verse Energy.

Separately, MIT professor Yet-Ming Chiang set his sights on the same problem with a new venture, Baseload Renewables. His battery patents made their mark on the industry and launched A123 and 24M. More recently, he’d been working with the Department of Energy’s Joint Center for Energy Storage Research on an aqueous sulfur formula for cost-effective long-duration flow batteries.

He brought on Wiley, who had helped found Aquion and served as vice president of product and corporate strategy before he stepped away in 2015. Measured in real deployments, Aquion led the pack of long-duration storage companies until it suddenly went bankrupt in March 2017.

Chiang and Wiley focused on storing electricity for days to weeks; Jaramillo was looking at weeks to months. MIT’s “tough tech” incubator The Engine put in $2 million in seed funding, while Jaramillo had secured a term sheet of his own. In an unusual move, they elected to join forces rather than compete.

Rounding out the team are Marco Ferrara, the lead storage modeler at IHI who holds two Ph.D.s; and Billy Woodford, an MIT-trained battery scientist and former student of Chiang’s.

The product

Form doesn’t think of itself as a battery company.

It wants to build what Jaramillo calls a “bidirectional power plant,” one which produces renewable energy and delivers it precisely when it is needed. This would create a new class of energy resource: “deterministic renewables.”

By making renewable energy dispatchable throughout the year, this resource could replace the mid-range and baseload power plants that currently burn fossil fuels to supply the grid.

Without such a tool, transitioning to high levels of renewables creates problems.

Countries could overbuild their renewable generation to ensure that the lowest-production days still meet demand, but that imposes huge costs and redundancies. One famous 100 percent renewables scenario relied on a 15x increase in U.S. hydropower capacity to balance the grid in the winter.

The founders are remaining coy about the details of the technology itself.

Jaramillo and Wiley confirmed that both products in development use electrochemical energy storage. The one Chiang started developing uses aqueous sulfur, chosen for its abundance and cheap price relative to its storage ability. Jaramillo has not specified what he chose for seasonal storage.

What I did confirm is that they have been studying all the known materials that can store electricity, and crossing off the ones that definitely won’t work for long duration based on factors like abundance and fundamental cost per embodied energy.

“Because we’ve done the work looking at all the options in the electrochemical set, you can positively prove that almost all of them will not work,” Jaramillo said. “We haven’t been able to prove that these won’t work.”

The company has small-scale prototypes in the lab, but needs to prove that they can scale up to a power plant that’s not wildly expensive. It’s one thing to store energy for months; it’s another to do so at a cost radically lower than that of currently available products.

“We can’t sit here and tell you exactly what the business model is, but we know that we’re engaged with the right folks to figure out what it is, assuming the technical work is successful,” Jaramillo said.

Given the diversity of power markets around the world, there likely won’t be one single business model.

The bidirectional power plant may bid in just like gas plants do today, but the dynamics of charging up on renewable energy could alter the way it engages with traditional power markets. Then again, power markets themselves could look very different by that time.

If the team can characterize a business case for the technology, the next step will be developing a full-scale pilot. If that works, full deployment comes next.

But don’t bank on that happening in a jiffy.

“It’s a decade-long project,” Jaramillo said. “The first half of that is spent on developing things and the second half is hopefully spent deploying things.”

The backers

The Form founders had to find financial backers who were comfortable chasing a market that doesn’t exist with a product that won’t arrive for up to a decade.

That would have made for a dubious proposition for cleantech VCs a couple of years ago, but the funding landscape has shifted.

The Engine, an offshoot of MIT, started in 2016 to commercialize “tough tech” with long-term capital.

“We’re here for the long shots, the unimaginable, and the unbelievable,” its website proclaims. That group funded Baseload Renewables with $2 million before it merged into Form.

Breakthrough Energy Ventures, the entity Bill Gates launched to provide “patient, risk-tolerant capital” for clean energy game-changers, joined for the Series A.

San Francisco venture capital firm Prelude Ventures joined as well. It previously bet on next-gen battery companies like the secretive QuantumScape and Natron Energy.

The round also included infrastructure firm Macquarie Capital, which has shown an interest in owning clean energy assets for the long haul.

Saudi Aramco, one of the largest oil and gas supermajors in the world, is another backer.

Saudi Arabia happens to produce more sulfur than most other countries, as a byproduct of its petrochemical industry.

While the kingdom relies on oil revenues currently, the leadership has committed to investing billions of dollars in clean energy as a way to scope out a more sustainable energy economy.

“It’s very much consistent with all of the oil supermajors taking a hard look at what the future is,” Jaramillo said. “That entire sector is starting to look beyond petrochemicals.”

Indeed, oil majors have emerged as a leading source of cleantech investment in recent months.

BP re-entered the solar industry with a $200 million investment in developer Lightsource. Total made the largest battery acquisition in history when it bought Saft in 2016; it also has a controlling stake in SunPower. Shell has ramped up investments in distributed energy, including the underappreciated thermal energy storage subsegment.

The $9 million won’t put much steel in the ground, but it’s enough to fund the preliminary work refining the technology.

“We would like to come out of this round with a clear understanding of the market need and a clear understanding of exactly how our technology meets the market need,” Wiley said.

The many paths to failure

Throughout the conversation, Jaramillo and Wiley avoided the splashy rhetoric one often hears from new startups intent on saving the world.

Instead, they acknowledge that the project could fail for a multitude of reasons. Here are just a few possibilities:

• The technologies don’t achieve radically lower cost.

• They can’t last for the 20- to 25-year lifetime expected of infrastructural assets.

• Power markets don’t allow this type of asset to be compensated.

• Financiers don’t consider the product bankable.

• Societies build a lot more transmission lines.

• Carbon capture technology removes the greenhouse gases from conventional generation.

• Small modular nuclear plants get permitting, providing zero-carbon energy on demand.

• The elusive hydrogen economy materializes.

Those last few scenarios face problems of their own. Transmission lines cost billions of dollars and provoke fierce local opposition.

Carbon capture technology hasn’t worked economically yet, although many are trying.

Small modular reactors face years of scrutiny before they can even get permission to operate in the U.S.

The costliness of hydrogen has thwarted wide-scale adoption.

One thing the Form Energy founders are not worried about is lithium-ion making an end run around their technology on price. That tripped up the initial wave of flow batteries, Wiley noted.

“By the time they were technically mature enough to be deployed, lithium-ion had declined in price to be at or below the price that they could deploy at,” he said.

Those early flow batteries, though, weren’t delivering much longer duration than commercially available lithium-ion. When the storage has to last for weeks or months, the cost of lithium-ion components alone makes it prohibitive.

“Our view is, just from a chemical standpoint, [lithium-ion] is not capable of declining another order of magnitude, but there does seem to be a need for storage that is an order of magnitude cheaper and an order of magnitude longer in duration than is currently being deployed,” Wiley explained.
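Some rough arithmetic makes the order-of-magnitude point concrete. The cost figure below is an assumed, generic 2018-era number, not Form Energy's or Wiley's:

```python
# Rough, illustrative arithmetic (assumed round numbers, not Form Energy's):
# the energy-component cost alone makes lithium-ion prohibitive at long duration.

LI_ION_COST_PER_KWH = 150.0  # assumed 2018-era cell cost, $/kWh

def energy_cost_per_kw(duration_hours: float) -> float:
    """Capital cost of cells per kW of discharge capacity, at a given duration."""
    return duration_hours * LI_ION_COST_PER_KWH

for hours in (4, 100, 1000):  # daily peaking vs. multi-day vs. seasonal storage
    print(f"{hours:>4} h -> ${energy_cost_per_kw(hours):>9,.0f} per kW")

# 4 h    ->    $600/kW  (roughly competitive with gas peakers)
# 100 h  -> $15,000/kW  (an order of magnitude too expensive)
# 1000 h -> $150,000/kW (hence the search for chemistries an order of
#                        magnitude cheaper per kWh)
```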

They also plan to avoid a scenario that helped bring down many a storage startup, Aquion and A123 included: investing lots of capital in a factory before the market had arrived.

Form Energy isn’t building small commoditized products; it’s constructing a power plant.

“When we say we’re building infrastructure, we mean that this is intended to be infrastructure,” Wiley said.

So far, at least, there isn’t much competition to speak of in the super-long duration battery market.

That could start to change. Now that brand-name investors have gotten involved, others are sure to take notice. The Department of Energy launched its own long-duration storage funding opportunity in May, targeting the 10- to 100-hour range.

It may be years before Form’s investigations produce results, if they ever do.

But the company has already succeeded in expanding the realm of what’s plausible and fundable in the energy storage industry.

Source: Greentech Media, J. Spector


MIT: Novel transmitter protects wireless data from hackers



MIT researchers developed a transmitter that frequency hops data bits ultrafast to prevent signal jamming on wireless devices. The transmitter’s design (pictured) features bulk acoustic wave resonators (side boxes) that rapidly switch between radio frequency channels, sending data bits with each hop. Each microsecond, a channel generator (top box) selects the random channels on which to send bits. Two transmitter paths alternate (center boxes): one receives the next channel selection while the other sends data, ensuring ultrafast speeds. Courtesy of the researchers

Device uses ultrafast “frequency hopping” and data encryption to protect signals from being intercepted and jammed.

Today, more than 8 billion devices are connected around the world, forming an “internet of things” that includes medical devices, wearables, vehicles, and smart household and city technologies. By 2020, experts estimate that number will rise to more than 20 billion devices, all uploading and sharing data online.

But those devices are vulnerable to hacker attacks that locate, intercept, and overwrite the data, jamming signals and generally wreaking havoc. One method to protect the data is called “frequency hopping,” which sends each data packet, containing thousands of individual bits, on a random, unique radio frequency (RF) channel, so hackers can’t pin down any given packet. Hopping large packets, however, is just slow enough that hackers can still pull off an attack.

Now MIT researchers have developed a novel transmitter that frequency hops each individual 1 or 0 bit of a data packet, every microsecond, which is fast enough to thwart even the quickest hackers.

The transmitter leverages frequency-agile devices called bulk acoustic wave (BAW) resonators and rapidly switches between a wide range of RF channels, sending information for a data bit with each hop. In addition, the researchers incorporated a channel generator that, each microsecond, selects the random channel to send each bit. On top of that, the researchers developed a wireless protocol — different from the protocol used today — to support the ultrafast frequency hopping.

“With the current existing [transmitter] architecture, you wouldn’t be able to hop data bits at that speed with low power,” says Rabia Tugce Yazicigil, a postdoc in the Department of Electrical Engineering and Computer Science and first author on a paper describing the transmitter, which is being presented at the IEEE Radio Frequency Integrated Circuits Symposium. “By developing this protocol and radio frequency architecture together, we offer physical-layer security for connectivity of everything.” Initially, this could mean securing smart meters that read home utilities, control heating, or monitor the grid.

“More seriously, perhaps, the transmitter could help secure medical devices, such as insulin pumps and pacemakers, that could be attacked if a hacker wants to harm someone,” Yazicigil says. “When people start corrupting the messages [of these devices] it starts affecting people’s lives.”

Co-authors on the paper are Anantha P. Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science (EECS); former MIT postdoc Phillip Nadeau; former MIT undergraduate student Daniel Richman; EECS graduate student Chiraag Juvekar; and visiting research student Kapil Vaidya.

Ultrafast frequency hopping

One particularly sneaky attack on wireless devices is called selective jamming, where a hacker intercepts and corrupts data packets transmitting from a single device but leaves all other nearby devices unscathed. Such targeted attacks are difficult to identify, as they’re often mistaken for a poor wireless link, and are difficult to combat with current packet-level frequency-hopping transmitters.

With frequency hopping, a transmitter sends data on various channels, based on a predetermined sequence shared with the receiver. Packet-level frequency hopping sends one data packet at a time, on a single 1-megahertz channel, across a range of 80 channels. A packet takes around 612 microseconds for BLE-type transmitters to send on that channel. But attackers can locate the channel during the first 1 microsecond and then jam the packet.

“Because the packet stays in the channel for [a] long time, and the attacker only needs a microsecond to identify the frequency, the attacker has enough time to overwrite the data in the remainder of [the] packet,” Yazicigil says.

To build their ultrafast frequency-hopping method, the researchers first replaced a crystal oscillator — which vibrates to create an electrical signal — with an oscillator based on a BAW resonator. However, the BAW resonators only cover about 4 to 5 megahertz of frequency channels, falling far short of the 80-megahertz range available in the 2.4-gigahertz band designated for wireless communication. Continuing recent work on BAW resonators — in a 2017 paper co-authored by Chandrakasan, Nadeau, and Yazicigil — the researchers incorporated components that divide an input frequency into multiple frequencies. An additional mixer component combines the divided frequencies with the BAW’s radio frequencies to create a host of new radio frequencies that can span about 80 channels.

Randomizing everything

The next step was randomizing how the data is sent. In traditional modulation schemes, when a transmitter sends data on a channel, that channel will display an offset — a slight deviation in frequency. With BLE modulations, that offset is always a fixed 250 kilohertz for a 1 bit and a fixed -250 kilohertz for a 0 bit. A receiver simply notes the channel’s 250-kilohertz or -250-kilohertz offset as each bit is sent and decodes the corresponding bits.

But that means, if hackers can pinpoint the carrier frequency, they too have access to that information. If hackers can see a 250-kilohertz offset on, say, channel 14, they’ll know that’s an incoming 1 and begin messing with the rest of the data packet.

To combat that, the researchers employed a system that each microsecond generates a pair of separate channels across the 80-channel spectrum. Based on a secret key preshared with the transmitter, the receiver performs calculations to designate one channel to carry a 1 bit and the other to carry a 0 bit. The channel carrying the desired bit will always display more energy; the receiver compares the energy in those two channels, notes which one is higher, and decodes the bit sent on that channel.

For example, by using the preshared key, the receiver will calculate that a 1 will be sent on channel 14 and a 0 will be sent on channel 31 for one hop. But the transmitter only wants the receiver to decode a 1. The transmitter will send a 1 on channel 14 and send nothing on channel 31. The receiver sees that channel 14 has the higher energy and, knowing that’s the 1-bit channel, decodes a 1. In the next microsecond, the transmitter selects two more random channels for the next bit and repeats the process.

Because the channel selection is quick and random, and there is no fixed frequency offset, a hacker can never tell which bit is going to which channel. “For an attacker, that means they can’t do any better than random guessing, making selective jamming infeasible,” Yazicigil says.
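For the curious, here is a minimal sketch (in Python) of the channel-pair scheme just described. The key-to-channel derivation is a hypothetical stand-in — the team’s actual protocol has not been published — but the structure follows the article: a preshared key plus a slot counter yields a fresh pair of channels every microsecond, and the receiver decodes by comparing energy.

```python
# Minimal sketch of bit-level frequency hopping with per-slot channel pairs.
# The HMAC-SHA256 derivation is a hypothetical stand-in for MIT's unpublished
# protocol; the overall structure follows the article's description.

import hmac
import hashlib

N_CHANNELS = 80  # ~80 x 1 MHz channels in the 2.4 GHz band

def channel_pair(key: bytes, slot: int) -> tuple:
    """Derive this microsecond's (one-bit channel, zero-bit channel) pair."""
    digest = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    one_ch, zero_ch = digest[0] % N_CHANNELS, digest[1] % N_CHANNELS
    if zero_ch == one_ch:                     # the two channels must differ
        zero_ch = (zero_ch + 1) % N_CHANNELS
    return one_ch, zero_ch

def transmit(key: bytes, bits: str) -> list:
    """Return the channel energized in each 1-microsecond slot."""
    return [channel_pair(key, t)[0 if b == "1" else 1]
            for t, b in enumerate(bits)]

def receive(key: bytes, energized: list) -> str:
    """Decode each bit by which of the two expected channels carried energy."""
    return "".join("1" if ch == channel_pair(key, t)[0] else "0"
                   for t, ch in enumerate(energized))

key = b"preshared-secret"
assert receive(key, transmit(key, "1011001")) == "1011001"
```

Without the key, an eavesdropper sees only energy hopping among 80 channels with no fixed offset to latch onto.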

As a final innovation, the researchers integrated two transmitter paths into a time-interleaved architecture. This allows the inactive transmitter to receive the selected next channel while the active transmitter sends data on the current channel. Then the workload alternates. Doing so ensures a 1-microsecond frequency-hop rate and, in turn, preserves a 1-megabit-per-second data rate similar to BLE-type transmitters.

“Most of the current vulnerability [to signal jamming] stems from the fact that transmitters hop slowly and dwell on a channel for several consecutive bits. Bit-level frequency hopping makes it very hard to detect and selectively jam the wireless link,” says Peter Kinget, a professor of electrical engineering and chair of the department at Columbia University. “This innovation was only possible by working across the various layers in the communication stack requiring new circuits, architectures, and protocols. It has the potential to address key security challenges in IoT devices across industries.”

The work was supported by Hong Kong Innovation and Technology Fund, the National Science Foundation, and Texas Instruments. The chip fabrication was supported by TSMC University Shuttle Program.

MIT engineers configure RFID tags to work as sensors



MIT researchers are developing RFID stickers that sense their environment, enabling low-cost monitoring of chemicals and other signals. Image: Chelsea Turner, MIT

Platform may enable continuous, low-cost, reliable devices that detect chemicals in the environment.


These days, many retailers and manufacturers are tracking their products using RFID, or radio-frequency identification tags. Often, these tags come in the form of paper-based labels outfitted with a simple antenna and memory chip. When slapped on a milk carton or jacket collar, RFID tags act as smart signatures, transmitting information to a radio-frequency reader about the identity, state, or location of a given product.

In addition to keeping tabs on products throughout a supply chain, RFID tags are used to trace everything from casino chips and cattle to amusement park visitors and marathon runners.

The Auto-ID Lab at MIT has long been at the forefront of developing RFID technology. Now engineers in this group are flipping the technology toward a new function: sensing. They have developed a new ultra-high-frequency, or UHF, RFID tag-sensor configuration that senses spikes in glucose and wirelessly transmits this information. In the future, the team plans to tailor the tag to sense chemicals and gases in the environment, such as carbon monoxide.

“People are looking toward more applications like sensing to get more value out of the existing RFID infrastructure,” says Sai Nithin Reddy Kantareddy, a graduate student in MIT’s Department of Mechanical Engineering. “Imagine creating thousands of these inexpensive RFID tag sensors which you can just slap onto the walls of an infrastructure or the surrounding objects to detect common gases like carbon monoxide or ammonia, without needing an additional battery. You could deploy these cheaply, over a huge network.”

Kantareddy developed the sensor with Rahul Bhattacharyya, a research scientist in the group, and Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president of open learning at MIT. The researchers presented their design at the IEEE International Conference on RFID, and their results appear online this week.

“RFID is the cheapest, lowest-power RF communication protocol out there,” Sarma says. “When generic RFID chips can be deployed to sense the real world through tricks in the tag, true pervasive sensing can become reality.”

Confounding waves

Currently, RFID tags are available in a number of configurations, including battery-assisted and “passive” varieties. Both types of tags contain a small antenna which communicates with a remote reader by backscattering the RF signal, sending it a simple code or set of data that is stored in the tag’s small integrated chip. Battery-assisted tags include a small battery that powers this chip. Passive RFID tags are designed to harvest energy from the reader itself, which naturally emits just enough radio waves within FCC limits to power the tag’s memory chip and receive a reflected signal.

Recently, researchers have been experimenting with ways to turn passive RFID tags into sensors that can operate over long stretches of time without the need for batteries or replacements. These efforts have typically focused on manipulating a tag’s antenna, engineering it in such a way that its electrical properties change in response to certain stimuli in the environment. As a result, the antenna reflects radio waves back to a reader at a characteristically different frequency or signal strength, indicating that a certain stimulus has been detected.

For instance, Sarma’s group previously designed an RFID tag-antenna that changes the way it transmits radio waves in response to moisture content in the soil. The team also fabricated an antenna to sense signs of anemia in blood flowing across an RFID tag.

But Kantareddy says there are drawbacks to such antenna-centric designs, the main one being “multipath interference,” a confounding effect in which radio waves, even from a single source such as an RFID reader or antenna, can reflect off multiple surfaces.

“Depending on the environment, radio waves are reflecting off walls and objects before they reflect off the tag, which interferes and creates noise,” Kantareddy says. “With antenna-based sensors, there’s more chance you’ll get false positives or negatives, meaning a sensor will tell you it sensed something even if it didn’t, because it’s affected by the interference of the radio fields. So it makes antenna-based sensing a little less reliable.”

Chipping away

Sarma’s group took a new approach: Instead of manipulating a tag’s antenna, they tried tailoring its memory chip. They purchased off-the-shelf integrated chips that are designed to switch between two different power modes: an RF energy-based mode, similar to fully passive RFIDs; and a local energy-assisted mode, such as from an external battery or capacitor, similar to semipassive RFID tags.

The team worked each chip into an RFID tag with a standard radio-frequency antenna. In a key step, the researchers built a simple circuit around the memory chip, enabling the chip to switch to a local energy-assisted mode only when it senses a certain stimulus. When in this assisted mode (commercially called battery-assisted passive mode, or BAP), the chip emits a new protocol code, distinct from the normal code it transmits when in a passive mode. A reader can then interpret this new code as a signal that a stimulus of interest has been detected.

Kantareddy says this chip-based design can create more reliable RFID sensors than antenna-based designs because it essentially separates a tag’s sensing and communication capabilities. In antenna-based sensors, both the chip that stores data and the antenna that transmits data are dependent on the radio waves reflected in the environment. With this new design, a chip does not have to depend on confounding radio waves in order to sense something.

“We hope reliability in the data will increase,” Kantareddy says. “There’s a new protocol code along with the increased signal strength whenever you’re sensing, and there’s less chance for you to confuse when a tag is sensing versus not sensing.”

“This approach is interesting because it also solves the problem of information overload that can be associated with large numbers of tags in the environment,” Bhattacharyya says. “Instead of constantly having to parse through streams of information from short-range passive tags, an RFID reader can be placed far enough away so that only events of significance are communicated and need to be processed.”

“Plug-and-play” sensors

As a demonstration, the researchers developed an RFID glucose sensor. They set up commercially available glucose-sensing electrodes, filled with the enzyme glucose oxidase. When the enzyme interacts with glucose, the electrode produces an electric charge, acting as a local energy source, or battery.

The researchers attached these electrodes to an RFID tag’s memory chip and circuit. When they added glucose to each electrode, the resulting charge caused the chip to switch from its passive RF power mode, to the local charge-assisted power mode. The more glucose they added, the longer the chip stayed in this secondary power mode.

Kantareddy says that a reader, sensing this new power mode, can interpret this as a signal that glucose is present. The reader can potentially determine the amount of glucose by measuring the time during which the chip stays in the battery-assisted mode: The longer it remains in this mode, the more glucose there must be.
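A minimal reader-side sketch of that logic might look like the following; the protocol codes are invented placeholders, since the article doesn’t publish the codes the chip actually emits:

```python
# Reader-side sketch. The protocol codes below are invented placeholders,
# but the logic mirrors the described approach: time spent in
# battery-assisted passive (BAP) mode is a proxy for how much stimulus
# (here, glucose) the electrode saw.

PASSIVE_CODE = 0x01  # assumed code in normal RF-harvesting mode
BAP_CODE = 0x02      # assumed code in battery-assisted (sensing) mode

def seconds_in_sensing_mode(reads):
    """reads: list of (timestamp_seconds, protocol_code) pairs from the reader."""
    total = 0.0
    for (t0, code0), (t1, _) in zip(reads, reads[1:]):
        if code0 == BAP_CODE:  # tag was sensing during this interval
            total += t1 - t0
    return total

reads = [(0.0, PASSIVE_CODE), (1.0, BAP_CODE), (3.5, BAP_CODE), (6.0, PASSIVE_CODE)]
print(seconds_in_sensing_mode(reads))  # 5.0 s in sensing mode -> more glucose
```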

While the team’s sensor was able to detect glucose, its performance was below that of commercially available glucose sensors. The goal, Kantareddy says, was not necessarily to develop an RFID glucose sensor, but to show that the group’s design could be manipulated to sense something more reliably than antenna-based sensors.

“With our design, the data is more trustable,” Kantareddy says.

The design is also more efficient. A tag can run passively on RF energy reflected from a nearby reader until a stimulus of interest comes around. The stimulus itself produces a charge, which powers the tag’s chip to send an alarm code to the reader. The very act of sensing, therefore, generates the additional energy needed to run the integrated chip.

“Since you’re getting energy from RF and your electrodes, this increases your communication range,” Kantareddy says. “With this design, your reader can be 10 meters away, rather than 1 or 2. This can decrease the number and cost of readers that, say, a facility requires.”

Going forward, he plans to develop an RFID carbon monoxide sensor by combining his design with different types of electrodes engineered to produce a charge in the presence of the gas.

“With antenna-based designs, you have to design specific antennas for specific applications,” Kantareddy says. “With ours, you can just plug and play with these commercially available electrodes, which makes this whole idea scalable. Then you can deploy hundreds or thousands, in your house or in a facility where you could monitor boilers, gas containers, or pipes.”

This research was supported, in part, by the GS1 organization.

MIT Technology Review: Sustainable Energy: The daunting math of climate change means we’ll need carbon capture … eventually




 Net Power’s pilot natural gas plant with carbon capture, near Houston, Texas.

An Interview with Julio Friedmann

At current rates of greenhouse-gas emissions, the world could lock in 1.5 ˚C of warming as soon as 2021, an analysis by the website Carbon Brief has found. We’re on track to blow the carbon budget for 2 ˚C by 2036.

Amid this daunting climate math, many researchers argue that capturing carbon dioxide from power plants, factories, and the air will have to play a big part in any realistic efforts to limit the dangers of global warming.

If it can be done economically, carbon capture and storage (CCS) offers the world additional flexibility and time to make the leap to cleaner systems. It means we can retrofit, rather than replace, vast parts of the global energy infrastructure. And once we reach disastrous levels of warming, so-called direct air capture offers one of the only ways to dig our way out of trouble, since carbon dioxide otherwise stays in the atmosphere for thousands of years.

Julio Friedmann has emerged as one of the most ardent advocates of these technologies. He oversaw research and development efforts on clean coal and carbon capture at the US Department of Energy’s Office of Fossil Energy under the last administration. Among other roles, he’s now working with or advising the Global CCS Institute, the Energy Futures Initiative, and Climeworks, a Switzerland-based company already building pilot plants that pull carbon dioxide from the air.

In an interview with MIT Technology Review, Friedmann argues that the technology is approaching a tipping point: a growing number of projects demonstrate that it works in the real world, and that it is becoming more reliable and affordable. He adds that the boosted US tax credit for capturing and storing carbon, passed in the form of the Future Act as part of the federal budget earlier this year, will push forward many more projects and help create new markets for products derived from carbon dioxide (see “The carbon-capture era may finally be starting”).

But serious challenges remain. Even with the tax credit, companies will incur steep costs by adding carbon capture systems to existing power plants. And a widely cited 2011 study, coauthored by MIT researcher Howard Herzog, found that direct air capture will require vast amounts of energy and cost 10 times as much as scrubbing carbon from power plants.

(This interview has been edited for length and clarity.)

In late February, you wrote a Medium post saying that with the passage of the increased tax credit for carbon capture and storage, we’ve “launched the climate counter-strike.” Why is that a big deal?

It actually sets a price on carbon formally. It says you should get paid to not emit carbon dioxide, and you should get paid somewhere between $35 a ton and $50 a ton. So that is already a massive change. In addition to that, it says you can do one of three things: you can store CO2, you can use it for enhanced oil recovery, or you can turn it into stuff. Fundamentally, it says not emitting has value.

As I’ve said many times before, the lack of progress in deploying CCS up until this point is not a question of cost. It’s really been a question of finance.

The Future Act creates that financing.

I identified an additional provision which said not only can you consider a power plant a source or an industrial site a source, you can consider the air a source.

Even if we zeroed out all our emissions today, we still have a legacy of harm of two trillion tons of CO2 in the air, and we need to do something about that.

And this law says, yeah, we should. It says we can take carbon dioxide out of the air and turn it into stuff.

At the Petra Nova plant in Texas, my understanding is the carbon capture costs are something like $60 to $70 a ton, which is still going to outstrip the tax credit today. How are we going to close that gap?

There are many different ways to go about it. For example, the state of New Jersey today passed a 90 percent clean energy portfolio standard. Changing the policy from a renewable portfolio standard [which would exclude CCS technologies] to a clean energy standard [which would allow them] allowed higher ambition.

In that context, somebody who would build a CCS project and would get a contract to deliver that power, or deliver that emissions abatement, can actually again get staked, get financed, and get built. That can happen without any technology advancement.

The technology today is already cost competitive. CCS today, as a retrofit, is cheaper than a whole bunch of stuff. It’s cheaper than new-build nuclear, it’s cheaper than offshore wind. It’s cheaper than a whole bunch of things we like, and it’s cheaper than rooftop solar, almost everywhere. It’s cheaper than utility-scale concentrating solar pretty much everywhere, and it is cheaper than what solar and wind were 10 years ago.

What do you make of the critique that this is all just going to perpetuate the fossil-fuel industry?

The enemy is not fossil fuels; the enemy is emissions.

In a place like California that has terrific renewable resources and a good infrastructure for renewable energy, maybe you can get to zero [fossil fuels] someday.

If you’re in Saskatchewan, you really can’t do that. It is too cold for too much of the year, and they don’t have solar resources, and their wind resources are problematic because they’re so strong they tear up the turbines. Which is why they did the CCS project in Saskatchewan. For them it was the right solution.

Shifting gears to direct air capture, the basic math says that you’re moving 2,500 molecules to capture one of CO2. How good are we getting at this, and how cheaply can we do this at this point?

If you want to optimize the way that you would reduce carbon dioxide economy-wide, direct air capture is the last thing you would tackle. Turns out, though, that we don’t live in that society. We are not optimizing anything in any way.

So instead we realize we have this legacy of emissions in the atmosphere and we need tools to manage that. So there are companies like Climeworks, Carbon Engineering, and Global Thermostat. Those guys said: We know we’re going to need this technology, so we’re going to work on it now. They’ve got decent financing, and the costs are coming down (see “Can sucking CO2 out of the atmosphere really work?”).

The cost for all of these things now today, all-in costs, is somewhere between $300 and $600 a ton. I’ve looked inside all those companies and I believe all of them are on a glide path to get to below $200 a ton by somewhere between 2022 and 2025. And I believe that they’re going to get down to $100 a ton by 2030. At that point, these are real options.

At $200 a ton, we know today unambiguously that pulling CO2 out of the air is cheaper than trying to make a zero-carbon airplane, by a lot. So it becomes an option that you use to go after carbon in the hard-to-scrub parts of the economy.

Is it ever going to work as a business, or is it always going to be kind of a public-supported enterprise to buy ourselves out of climate catastrophes?

Direct air capture is not competitive today broadly, but there are places where the value proposition is real. So let me give you a couple of examples.

In many parts of the world there are no sources of CO2. If you’re running a Pepsi or a Coca-Cola plant in Sri Lanka, you literally burn diesel fuel and capture the CO2 from it to put into your cola, at a bonkers price. It can cost $300 to $800 a ton to get that CO2. So there are already going to be places in some people’s supply chain where direct air capture could be cheaper.

We talk to companies like Goodyear, Firestone, or Michelin. They make tires, and right now the way that they get their carbon black [a material used in tire production that’s derived from fossil fuel] is basically you pyrolyze bunker fuel on the Gulf Coast, which is a horrible, environmentally destructive process. And then you ship it by rail cars to wherever they’re making the tires.

If they can decouple from that market by gathering CO2 wherever they are and turn that into carbon black, they can actually avoid market shocks. So even if it costs a little more, the value to that company might be high enough to bring it into the market. That’s where I see direct air actually gaining real traction in the next few years.

It’s not going to be enough for climate. We know that we will have to do carbon storage, for sure, if we want to really manage the atmospheric emissions. But there’s a lot of ground to chase this, and we never know quite where technology goes.

In one of your earlier Medium posts you said that we’re ultimately going to have to pull 10 billion tons of CO2 out of the atmosphere every year. Climeworks is doing about 50 [at their pilot plant in Iceland]. So what does that scale-up look like?

You don’t have to get all 10 billion tons with direct air capture. So let’s say you just want one billion.

Right now, Royal Dutch Shell as a company moves 300 million tons of refined product every year. This means that you need three to four companies the size of Royal Dutch Shell to pull CO2 out of the atmosphere.

The good news is we don’t need that billion tons today. We have 10 or 20 or 30 years to get to a billion tons of direct air capture. But in fact we’ve seen that kind of scaling in other kinds of clean-tech markets. There’s nothing in the laws of physics or chemistry that stops that.
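Both headline numbers in this interview survive a quick back-of-envelope check (approximate, rounded figures):

```python
# Quick sanity checks on the numbers in this exchange (approximate figures).

CO2_PPM = 410.0                  # atmospheric CO2 concentration, ~2018
print(1e6 / CO2_PPM)             # ~2,440 molecules of air handled per CO2
                                 # molecule captured -- the "2,500" above

TARGET_TONS_PER_YEAR = 1e9       # 1 billion tons/year of direct air capture
SHELL_TONS_PER_YEAR = 300e6      # Shell's ~300 Mt/year of refined product
print(TARGET_TONS_PER_YEAR / SHELL_TONS_PER_YEAR)  # ~3.3 Shell-sized companies
```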

MIT Technology Review: This battery advance could make electric vehicles far cheaper


Sila Nanotechnologies has pulled off double-digit performance gains for lithium-ion batteries, promising to lower costs or add capabilities for cars and phones.

For the last seven years, a startup based in Alameda, California, has quietly worked on a novel anode material that promises to significantly boost the performance of lithium-ion batteries.

Sila Nanotechnologies emerged from stealth mode last month, partnering with BMW to put the company’s silicon-based anode materials in at least some of the German automaker’s electric vehicles by 2023.

A BMW spokesman told the Wall Street Journal the company expects that the deal will lead to a 10 to 15 percent increase in the amount of energy you can pack into a battery cell of a given volume. Sila’s CEO Gene Berdichevsky says the materials could eventually produce as much as a 40 percent improvement (see “35 Innovators Under 35: Gene Berdichevsky”).

For EVs, an increase in so-called energy density either significantly extends the mileage range possible on a single charge or decreases the cost of the batteries needed to reach standard ranges. For consumer gadgets, it could alleviate the frustration of cell phones that can’t make it through the day, or it might enable power-hungry next-generation features like bigger cameras or ultrafast 5G networks.

Researchers have spent decades working to advance the capabilities of lithium-ion batteries, but those gains usually only come a few percentage points at a time. So how did Sila Nanotechnologies make such a big leap?

Berdichevsky, who was employee number seven at Tesla, and CTO Gleb Yushin, a professor of materials science at the Georgia Institute of Technology, recently provided a deeper explanation of the battery technology in an interview with MIT Technology Review.

Sila co-founders (from left to right), Gleb Yushin, Gene Berdichevsky and Alex Jacobs.

An anode is the battery’s negative electrode, which in this case stores lithium ions when a battery is charged. Engineers have long believed that silicon holds great potential as an anode material for a simple reason: it can bond with 25 times more lithium ions than graphite, the main material used in lithium-ion batteries today.

But this comes with a big catch. When silicon accommodates that many lithium ions, its volume expands, stressing the material in a way that tends to make it crumble during charging. That swelling also triggers electrochemical side reactions that reduce battery performance.

In 2010, Yushin coauthored a scientific paper that identified a method for producing rigid silicon-based nanoparticles that are internally porous enough to accommodate significant volume changes. He teamed up with Berdichevsky and another former Tesla battery engineer, Alex Jacobs, to form Sila the following year.

The company has been working to commercialize that basic concept ever since, developing, producing, and testing tens of thousands of different varieties of increasingly sophisticated anode nanoparticles. It figured out ways to alter the internal structure to prevent the battery electrolyte from seeping into the particles, and it achieved dozens of incremental gains in energy density that ultimately added up to an improvement of about 20 percent over the best existing technology.

Ultimately, Sila created a robust, micrometer-size spherical particle with a porous core, which directs much of the swelling within the internal structure. The outside of the particle doesn’t change shape or size during charging, ensuring otherwise normal performance and cycle life.

The resulting composite anode powders work as a drop-in material for existing manufacturers of lithium-ion cells.
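A toy mass budget shows why a 25x anode-capacity gain yields double-digit rather than 25x cell-level improvements: the anode is only one slice of total cell mass. The numbers below are generic assumptions, not Sila’s or BMW’s:

```python
# Toy cell-level model (assumed generic numbers, not Sila's or BMW's):
# why a 25x anode-capacity gain yields "only" tens of percent per cell.

V_CELL = 3.6              # average cell voltage, V
CATHODE_MAH_G = 200.0     # assumed cathode specific capacity, mAh/g
OVERHEAD_G_PER_AH = 4.0   # assumed foils/separator/electrolyte/can mass, g/Ah

def wh_per_kg(anode_mah_g: float) -> float:
    """Specific energy from the component masses needed per amp-hour."""
    g_per_ah = 1000 / CATHODE_MAH_G + 1000 / anode_mah_g + OVERHEAD_G_PER_AH
    return V_CELL * 1000 / g_per_ah

graphite = wh_per_kg(372.0)   # conventional graphite anode
silicon = wh_per_kg(1500.0)   # assumed silicon-composite anode
print(f"{graphite:.0f} -> {silicon:.0f} Wh/kg "
      f"(+{100 * (silicon / graphite - 1):.0f}%)")
# ~308 -> ~372 Wh/kg: about +21%, in line with the 10-40% range quoted above.
```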

With any new battery technology, it takes at least five years to work through the automotive industry’s quality and safety assurance processes—hence the 2023 timeline with BMW. But Sila is on a faster track with consumer electronics, where it expects to see products carrying its battery materials on shelves early next year.

Venkat Viswanathan, a mechanical engineer at Carnegie Mellon, says Sila is “making great progress.” But he cautions that gains in one battery metric often come at the expense of others—like safety, charging time, or cycle life—and that what works in the lab doesn’t always translate perfectly into end products.

Companies including Enovix and Enevate are also developing silicon-dominant anode materials. Meanwhile, other businesses are pursuing entirely different routes to higher-capacity storage, notably including solid-state batteries. These use materials such as glass, ceramics, or polymers to replace liquid electrolytes, which help carry lithium ions between the cathode and anode.

BMW has also partnered with Solid Power, a spinout from the University of Colorado Boulder, which claims that its solid-state technology relying on lithium-metal anodes can store two to three times more energy than traditional lithium-ion batteries. Meanwhile, Ionic Materials, which recently raised $65 million from Dyson and others, has developed a solid polymer electrolyte that it claims will enable safer, cheaper batteries that can operate at room temperature and will also work with lithium metal.

Some battery experts believe that solid-state technology ultimately promises bigger gains in energy density, if researchers can surmount some large remaining technical obstacles.

But Berdichevsky stresses that Sila’s materials are ready for products now and, unlike solid-state lithium-metal batteries, don’t require any expensive equipment upgrades on the part of battery manufacturers.

As the company develops additional ways to limit volume change in the silicon-based particles, Berdichevsky and Yushin believe they’ll be able to extend energy density further, while also improving charging times and total cycle life.

This story was updated to clarify that Samsung didn’t invest in Ionic Materials’ most recent funding round.

Read and Watch More:

Tenka Energy, Inc. Building Ultra-Thin Energy Dense SuperCaps and NexGen Nano-Enabled Pouch & Cylindrical Batteries – Energy Storage Made Small and POWERFUL! YouTube Video:

3 Questions for Innovating the Clean Energy Economy (MIT Energy Initiative)


Daniel Kammen, professor of energy at the University of California at Berkeley, spoke on clean energy innovation and implementation in a talk at MIT. Photo: Francesca McCaffrey/MIT Energy Initiative

Daniel Kammen of the University of California at Berkeley discusses current efforts in clean energy innovation and implementation, and what’s coming next.

Daniel Kammen is a professor of energy at the University of California at Berkeley, with parallel appointments in the Energy and Resources Group (which he chairs), the Goldman School of Public Policy, and the Department of Nuclear Engineering.

Recently, he gave a talk at MIT examining the current state of clean energy innovation and implementation, both in the U.S. and internationally. Using a combination of analytical and empirical approaches, he discussed the strengths and weaknesses of clean energy efforts on the household, city, and regional levels. The MIT Energy Initiative (MITEI) followed up with him on these topics.

Q: Your team has built energy transition models for several countries, including Chile, Nicaragua, China, and India. Can you describe how these models work and how they can inform global climate negotiations like the Paris Accords?


A: My laboratory has worked with three governments to build open-source models of the current state of their energy systems and possible opportunities for improvement. This model, SWITCH, is an exceptionally high-resolution platform for examining the costs, reliability, and carbon emissions of energy systems as small as Nicaragua’s and as large as China’s. The exciting recent developments in the cost and performance improvements of solar, wind, energy storage, and electric vehicles permit the planning of dramatically decarbonized systems that have a wide range of ancillary benefits: increased reliability, improved air quality, and monetizing energy efficiency, to name just a few. With the Paris Climate Accords placing 80 percent or greater decarbonization targets on all nations’ agendas (sadly, except for the U.S. federal government), the need for an “honest broker” for the costs and operational issues around power systems is key.
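For readers unfamiliar with capacity-expansion models, the toy problem below illustrates the kind of least-cost optimization a platform like SWITCH performs at vastly higher resolution. All costs, capacity factors, demands, and the carbon cap are invented for illustration:

```python
# A toy least-cost capacity-planning problem in the spirit of SWITCH, which
# does this at far higher resolution. All numbers below are invented.

from scipy.optimize import linprog

# Decision variables: built capacity [solar_MW, wind_MW, gas_MW]
capex = [60.0, 70.0, 40.0]        # assumed annualized cost, k$/MW-yr

# Two illustrative time slices with per-technology capacity factors:
cf = [[0.9, 0.3, 1.0],            # sunny noon
      [0.0, 0.4, 1.0]]            # calm night (no solar)
demand = [100.0, 80.0]            # MW to be met in each slice

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the
# "generation >= demand" constraints are written with flipped signs.
A_ub = [[-c for c in row] for row in cf]
b_ub = [-d for d in demand]
bounds = [(0, None), (0, None), (0, 50.0)]  # cap gas at 50 MW: a crude carbon limit

res = linprog(c=capex, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(dict(zip(["solar", "wind", "gas"], res.x.round(1))))
# -> roughly {'solar': 30.6, 'wind': 75.0, 'gas': 50.0}: the cheapest mix that
#    meets both slices once the fossil contribution is constrained.
```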

Q: At the end of your talk, you mentioned a carbon footprint calculator that you helped create. How much do individual behaviors matter in addressing climate change?

A: The carbon footprint calculator, or CoolClimate project, is a visualization and behavioral economics tool that can be used to highlight the impacts of individual decisions at the household, school, and city level. We have used it to support city-versus-city competitions for “California’s coolest city,” to explore the relative impacts of lifestyle choices (buying an electric vehicle versus, or along with, changing one’s diet), and more.
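The flavor of such a calculator fits in a few lines; the emission factors below are rounded, assumed values, not CoolClimate’s actual data:

```python
# A few-line sketch in the spirit of the CoolClimate calculator. The emission
# factors are rounded, assumed values (tCO2e/year), not the real tool's data.

FACTORS = {
    "gasoline_car_15k_mi": 4.6,   # typical U.S. passenger vehicle
    "electric_car_15k_mi": 1.5,   # charged on an average U.S. grid mix
    "meat_heavy_diet": 3.3,
    "plant_based_diet": 1.5,
}

baseline = FACTORS["gasoline_car_15k_mi"] + FACTORS["meat_heavy_diet"]
ev_only = FACTORS["electric_car_15k_mi"] + FACTORS["meat_heavy_diet"]
ev_and_diet = FACTORS["electric_car_15k_mi"] + FACTORS["plant_based_diet"]
print(baseline, ev_only, ev_and_diet)  # 7.9 -> 4.8 -> 3.0 tCO2e/yr
```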

Q: You touched on the topic of the “high ambition coalition,” a 2015 United Nations Climate Change Conference goal of keeping warming under 1.5 degrees Celsius. Can you expand on this movement and the carbon negative strategies it would require?

A: As we look at paths to a sustainable global energy system, efforts to limit warming to 1.5 degrees Celsius will require not only zeroing out industrial and agricultural emissions, but also removing carbon from the atmosphere. This demands increasing natural carbon sinks by preserving or expanding forests, sustaining ocean systems, and making agriculture climate- and water-smart. One pathway, biomass energy with carbon capture and sequestration, has both supporters and detractors. It involves growing biomass, using it for energy, and then sequestering the emissions.

This talk was one in a series of MITEI seminars supported by IHS Markit.

MIT: Finding a New Way to Design and Analyze Better Battery Materials: Discoveries could accelerate the development of high-energy lithium batteries


Diagram illustrates the crystal lattice of a proposed battery electrolyte material called Li3PO4. The researchers found that measuring how vibrations of sound move through the lattice could reveal how well ions – electrically charged atoms or molecules – could travel through the solid material, and therefore how they would work in a real battery. In this diagram, the oxygen atoms are shown in red, the purple pyramid-like shapes are phosphate (PO4) molecules. The orange and green spheres are ions of lithium.
Image: Sokseiha Muy

Design principles could point to better electrolytes for next-generation lithium batteries.

A new approach to analyzing and designing new ion conductors — a key component of rechargeable batteries — could accelerate the development of high-energy lithium batteries and possibly other energy storage and delivery devices such as fuel cells, researchers say.

The new approach relies on understanding the way vibrations move through the crystal lattice of lithium ion conductors and correlating that with the way they inhibit ion migration. This provides a way to discover new materials with enhanced ion mobility, allowing rapid charging and discharging.

At the same time, the method can be used to reduce the material’s reactivity with the battery’s electrodes, which can shorten its useful life. These two characteristics — better ion mobility and low reactivity — have tended to be mutually exclusive.

The new concept was developed by a team led by W.M. Keck Professor of Energy Yang Shao-Horn, graduate student Sokseiha Muy, recent graduate John Bachman PhD ’17, and Research Scientist Livia Giordano, along with nine others at MIT, Oak Ridge National Laboratory, and institutions in Tokyo and Munich. Their findings were reported in the journal Energy and Environmental Science.

The new design principle has been about five years in the making, Shao-Horn says. The initial thinking started with the approach she and her group have used to understand and control catalysts for water splitting, and applying it to ion conduction — the process that lies at the heart of not only rechargeable batteries, but also other key technologies such as fuel cells and desalination systems.

While electrons, with their negative charge, flow from one pole of the battery to the other (thus providing power for devices), positive ions flow the other way, through an electrolyte, or ion conductor, sandwiched between those poles, to complete the flow.

Typically, that electrolyte is a liquid. A lithium salt dissolved in an organic liquid is a common electrolyte in today’s lithium-ion batteries. But that substance is flammable and has sometimes caused these batteries to catch fire. The search has been on for a solid material to replace it, which would eliminate that issue.

A variety of promising solid ion conductors exist, but none is stable when in contact with both the positive and negative electrodes in lithium-ion batteries, Shao-Horn says.

Therefore, seeking new solid ion conductors that have both high ion conductivity and stability is critical. But sorting through the many different structural families and compositions to find the most promising ones is a classic needle-in-a-haystack problem. That’s where the new design principle comes in.

The idea is to find materials that have ion conductivity comparable to that of liquids, but with the long-term stability of solids. The team asked, “What is the fundamental principle? What are the design principles on a general structural level that govern the desired properties?” Shao-Horn says. A combination of theoretical analysis and experimental measurements has now yielded some answers, the researchers say.

“We realized that there are a lot of materials that could be discovered, but no understanding or common principle that allows us to rationalize the discovery process,” says Muy, the paper’s lead author. “We came up with an idea that could encapsulate our understanding and predict which materials would be among the best.”

The key was to look at the lattice properties of these solid materials’ crystalline structures. This governs how vibrations such as waves of heat and sound, known as phonons, pass through materials. This new way of looking at the structures turned out to allow accurate predictions of the materials’ actual properties. “Once you know [the vibrational frequency of a given material], you can use it to predict new chemistry or to explain experimental results,” Shao-Horn says.

The researchers observed a good correlation between the lattice properties determined using the model and the lithium ion conductor material’s conductivity. “We did some experiments to support this idea experimentally” and found the results matched well, she says.

They found, in particular, that the vibrational frequency of lithium itself can be fine-tuned by tweaking its lattice structure, using chemical substitution or dopants to subtly change the structural arrangement of atoms.
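In spirit, the screening descriptor works like the sketch below: compute a density-of-states-weighted average vibrational energy (a “band center”) for each candidate material and compare it with measured conductivity. All data here are invented for illustration:

```python
# Sketch of the descriptor idea: a density-of-states-weighted average phonon
# energy ("band center") per material, compared with measured conductivity.
# The DOS shapes and conductivities below are invented for illustration.

import numpy as np

def band_center(freqs_mev: np.ndarray, dos: np.ndarray) -> float:
    """DOS-weighted mean vibrational energy, in meV."""
    return float(np.trapz(freqs_mev * dos, freqs_mev) / np.trapz(dos, freqs_mev))

freqs = np.linspace(0.0, 80.0, 400)  # phonon energies, meV
candidates = {                       # name: (DOS peak in meV, conductivity S/cm)
    "conductor_A": (25.0, 1e-3),
    "conductor_B": (35.0, 1e-5),
    "conductor_C": (45.0, 1e-7),
}
for name, (peak, sigma) in candidates.items():
    dos = np.exp(-((freqs - peak) ** 2) / (2 * 8.0**2))  # toy Gaussian DOS
    print(f"{name}: band center {band_center(freqs, dos):5.1f} meV, "
          f"conductivity {sigma:.0e} S/cm")
# The reported trend -- softer lattices (lower band centers) conduct better --
# is what makes the band center useful as a screening descriptor.
```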

The new concept can now provide a powerful tool for developing new, better-performing materials that could lead to dramatic improvements in the amount of power that could be stored in a battery of a given size or weight, as well as improved safety, the researchers say.

Already, they have used the method to find some promising candidates. And the techniques could also be adapted to analyze materials for other electrochemical processes such as solid-oxide fuel cells, membrane-based desalination systems, or oxygen-generating reactions.

The team included Hao-Hsun Chang at MIT; Douglas Abernathy, Dipanshu Bansal, and Olivier Delaire at Oak Ridge; Satoshi Hori and Ryoji Kanno at Tokyo Institute of Technology; and Filippo Maglia, Saskia Lupart, and Peter Lamp at Research Battery Technology at BMW Group in Munich.

The work was supported by BMW, the National Science Foundation, and the U.S. Department of Energy.

Watch a YouTube Video on New Nano-Enabled Super Capacitors and Batteries

MIT and Harvard Update: Physics Creates New Form of Light That Could Drive the Quantum Computing Revolution


The discovery that photons can interact could be harnessed for quantum computing. PHOTO: CHRISTINE DANILOFF/MIT

For the first time, scientists have watched groups of three photons interacting and effectively producing a new form of light.

In results published in Science, researchers suggest that this new light could be used to perform highly complex, incredibly fast quantum computations.

Photons are tiny particles that normally travel solo through beams of light, never interacting with each other. But in 2013 scientists made them clump together in pairs, creating a new state of matter. This discovery shows that interactions are possible on a greater scale.

“It was an open question,” Vladan Vuletic from the Massachusetts Institute of Technology (MIT), who led the team with Mikhail Lukin from Harvard University, said in a statement. “Can you add more photons to a molecule to make bigger and bigger things?”

To answer that question, the scientists cooled a cloud of rubidium atoms to an ultralow temperature, slowing the atoms until they were almost still. They then sent a very faint laser beam through the frigid cloud, so that only a few photons passed through it at a time.

The photons came out the other side as pairs and triplets, rather than just as individuals.

Photons flit between atoms like bees among flowers.

The researchers think the particles might flit from one nearby atom to another as they pass through the rubidium cloud—like bees in a field of flowers.

These passing photons could form “polaritons”—part photon, part atom hybrids. If more than one photon passes by the same atom at the same time, they might form polaritons that are linked.

As they leave the atom, they could stay together as a pair, or even a triplet.

“What’s neat about this is, when photons go through the medium, anything that happens in the medium, they ‘remember’ when they get out,” said co-author Sergio Cantu from MIT.

This whole process takes about a millionth of a second.

Read About: MIT Researchers Link Photons

The future of computing

This research is the latest step toward a long-fabled quantum computer, an ultra-powerful machine that could solve problems beyond the realm of traditional computers. Your desktop PC would, for example, struggle to solve the question: “If a salesman has lots of places to visit, what is the quickest route?”

“[A traditional computer] could solve this for a certain number of cities, but if I wanted to add more cities, it would get much harder, very quickly,” Vuletic previously stated in a press release.
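The scaling he describes is easy to demonstrate. The short Python sketch below solves the salesman's problem by brute force for a handful of made-up cities; the number of candidate routes grows factorially with the number of cities, which is why a classical machine bogs down so quickly.

```python
# A minimal brute-force traveling-salesman sketch with hypothetical
# cities and distances, illustrating factorial growth in the search.
import itertools
import math

def route_length(route, dist):
    """Total length of a closed tour visiting every city once."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def brute_force_tsp(dist):
    """Try every permutation of cities; feasible only for tiny inputs."""
    cities = range(len(dist))
    return min(itertools.permutations(cities),
               key=lambda r: route_length(r, dist))

# Four hypothetical cities with symmetric distances.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print("best route:", brute_force_tsp(dist))
print("routes checked for n=4:", math.factorial(4))
print("routes to check for n=20:", math.factorial(20))  # ~2.4e18
```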

Read more: What did the Big Bang look like? The physics of light during the formation of the universe

Light, he said, is already used to transmit data very quickly over long distances via fiber optic cables. Being able to manipulate these photons could enable the distribution of data in much more powerful ways.

The team is now aiming to coerce photons in ways beyond attraction. The next stop is repulsion, where photons slam into each other and scatter.

“It’s completely novel in the sense that we don’t even know sometimes qualitatively what to expect,” Vuletic says. “With repulsion of photons, can they be such that they form a regular pattern, like a crystal of light? Or will something else happen? It’s very uncharted territory.”

MIT launches the “MIT Intelligence Quest … MIT IQ” (Video)



At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. Courtesy of MIT IQ

New Institute-wide initiative will advance human and machine intelligence research

MIT today announced the launch of the MIT Intelligence Quest, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.

The announcement was first made in a letter MIT President L. Rafael Reif sent to the Institute community.

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. (continued below)

Watch and Read About: Scott Zoldi, Director of Analytics at FICO, has published a report arguing that "we are just at the beginning of the golden age of analytics, in which the value and contributions of artificial intelligence (AI), machine learning (ML) and deep learning can only continue to expand as we accept and incorporate those tools into our businesses." According to his predictions, the development and use of these technologies will continue to expand and strengthen in 2018 and beyond.

 

(Continued)

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

"Today we set out to answer two big questions," says President Reif. "How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?"

MIT IQ: The Core and The Bridge

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, "The Core," will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, "The Bridge," will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

Along with developing and advancing the technologies of intelligence, MIT IQ researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT IQ is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT IQ will connect and amplify existing excellence across labs and centers already engaged in intelligence research. It will also establish shared, central spaces conducive to group work, and its resources will directly support research.

“Our quest is meant to power world-changing possibilities,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. Chandrakasan, in collaboration with Provost Martin Schmidt and all four of MIT’s other school deans, has led the development and establishment of MIT IQ.

“We imagine preventing deaths from cancer by using deep learning for early detection and personalized treatment,” Chandrakasan continues. “We imagine artificial intelligence in sync with, complementing, and assisting our own intelligence. And we imagine every scientist and engineer having access to human-intelligence-inspired algorithms that open new avenues of discovery in their fields. Researchers across our campus want to push the boundaries of what’s possible.”

Engaging energetically with partners

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab, which was announced in September 2017. MIT researchers will collaborate with each other and with industry on challenges that range in scale from the very broad to the very specific.

“In the short time since we began our collaboration with IBM, the lab has garnered tremendous interest inside and outside MIT, and it will be a vital part of MIT IQ,” says President Reif.

John E. Kelly III, IBM senior vice president for cognitive solutions and research, says, “To take on the world’s greatest challenges and seize its biggest opportunities, we need to rapidly advance both AI technology and our understanding of human intelligence. Building on decades of collaboration — including our extensive joint MIT–IBM Watson AI Lab — IBM and MIT will together shape a new agenda for intelligence research and its applications. We are proud to be a cornerstone of this expanded initiative.”

MIT will seek to establish additional entities within MIT IQ, in partnership with corporate and philanthropic organizations.

Why MIT

MIT has been on the frontier of intelligence research since the 1950s, when pioneers Marvin Minsky and John McCarthy helped establish the field of artificial intelligence.

MIT now has over 200 principal investigators whose research bears directly on intelligence. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Brain and Cognitive Sciences (BCS) — along with the McGovern Institute for Brain Research and the Picower Institute for Learning and Memory — collaborate on a range of projects. MIT is also home to the National Science Foundation–funded Center for Brains, Minds and Machines (CBMM) — the only national center of its kind.

Four years ago, MIT launched the Institute for Data, Systems, and Society (IDSS) with a mission of promoting data science, particularly in the context of social systems. It is anticipated that faculty and students from IDSS will play a critical role in this initiative.

Faculty from across the Institute will participate in the initiative, including researchers in the Media Lab, the Operations Research Center, the Sloan School of Management, the School of Architecture and Planning, and the School of Humanities, Arts, and Social Sciences.

“Our quest will amount to a journey taken together by all five schools at MIT,” says Provost Schmidt. “Success will rest on a shared sense of purpose and a mix of contributions from a wide variety of disciplines. I’m excited by the new thinking we can help unlock.”

At the heart of MIT IQ will be collaboration among researchers in human and artificial intelligence.

“To revolutionize the field of artificial intelligence, we should continue to look to the roots of intelligence: the brain,” says James DiCarlo, department head and Peter de Florez Professor of Neuroscience in the Department of Brain and Cognitive Sciences. “By working with engineers and artificial intelligence researchers, human intelligence researchers can build models of the brain systems that produce intelligent behavior. The time is now, as model building at the scale of those brain systems is now possible. Discovering how the brain works in the language of engineers will not only lead to transformative AI — it will also illuminate entirely new ways to repair, educate, and augment our own minds.”

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and director of CSAIL, agrees. MIT researchers, she says, “have contributed pioneering and visionary solutions for intelligence since the beginning of the field, and are excited to make big leaps to understand human intelligence and to engineer significantly more capable intelligent machines. Understanding intelligence will give us the knowledge to understand ourselves and to create machines that will support us with cognitive and physical work.”

David Siegel, who earned a PhD in computer science at MIT in 1991 pursuing research at MIT’s Artificial Intelligence Laboratory, and who is a member of the MIT Corporation and an advisor to the MIT Center for Brains, Minds, and Machines, has been integral to the vision and formation of MIT IQ and will continue to help shape the effort. “Understanding human intelligence is one of the greatest scientific challenges,” he says, “one that helps us understand who we are while meaningfully advancing the field of artificial intelligence.” Siegel is co-chairman and a founder of Two Sigma Investments, LP.

The fruits of research

MIT IQ will thus provide a platform for long-term research, encouraging the foundational advances of the future. At the same time, MIT professors and researchers may develop technologies with near-term value, leading to new kinds of collaborations with existing companies — and to new companies.

Some such entrepreneurial efforts could be supported by The Engine, an Institute initiative launched in October 2016 to support startup companies pursuing particularly ambitious goals.

Other innovations stemming from MIT IQ could be absorbed into the innovation ecosystem surrounding the Institute — in Kendall Square, Cambridge, and the Boston metropolitan area. MIT is located in close proximity to a world-leading nexus of biotechnology and medical-device research and development, as well as a cluster of leading-edge technology firms that study and deploy machine intelligence.

MIT also has roots in centers of innovation elsewhere in the United States and around the world, through faculty research projects, institutional and industry collaborations, and the activities and leadership of its alumni. MIT IQ will seek to connect to innovative companies and individuals who share MIT’s passion for work in intelligence.

Eric Schmidt, former executive chairman of Alphabet, has helped MIT form the vision for MIT IQ. “Imagine the good that can be done by putting novel machine-learning tools in the hands of those who can make great use of them,” he says. “MIT IQ can become a fount of exciting new capabilities.”

“I am thrilled by today’s news,” says President Reif. “Drawing on MIT’s deep strengths and signature values, culture, and history, MIT IQ promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.”

“MIT is placing a bet,” he says, “on the central importance of intelligence research to meeting the needs of humanity.”

MIT: Optimizing carbon nanotube electrodes for energy storage and water desalination applications


Evelyn Wang (left) and Heena Mutha have developed a nondestructive method of quantifying the detailed characteristics of carbon nanotube (CNT) samples — a valuable tool for optimizing these materials for use as electrodes in a variety of practical devices. Photo: Stuart Darsch

New model measures characteristics of carbon nanotube structures for energy storage and water desalination applications.

Using electrodes made of carbon nanotubes (CNTs) can significantly improve the performance of devices ranging from capacitors and batteries to water desalination systems. But figuring out the physical characteristics of vertically aligned CNT arrays that yield the most benefit has been difficult.

Now an MIT team has developed a method that can help. By combining simple benchtop experiments with a model describing porous materials, the researchers have found they can quantify the morphology of a CNT sample, without destroying it in the process.

In a series of tests, the researchers confirmed that their adapted model can reproduce key measurements taken on CNT samples under varying conditions. They’re now using their approach to determine detailed parameters of their samples — including the spacing between the nanotubes — and to optimize the design of CNT electrodes for a device that rapidly desalinates brackish water.

A common challenge in developing energy storage devices and desalination systems is finding a way to transfer electrically charged particles onto a surface and store them there temporarily. In a capacitor, for example, ions in an electrolyte must be deposited as the device is being charged and later released when electricity is being delivered. During desalination, dissolved salt must be captured and held until the cleaned water has been withdrawn.

One way to achieve those goals is by immersing electrodes into the electrolyte or the saltwater and then imposing a voltage on the system. The electric field that’s created causes the charged particles to cling to the electrode surfaces. When the voltage is cut, the particles immediately let go.

“Whether salt or other charged particles, it’s all about adsorption and desorption,” says Heena Mutha PhD ’17, a senior member of technical staff at the Charles Stark Draper Laboratory. “So the electrodes in your device should have lots of surface area as well as open pathways that allow the electrolyte or saltwater carrying the particles to travel in and out easily.”

One way to increase the surface area is by using CNTs. In a conventional porous material, such as activated charcoal, interior pores provide extensive surface area, but they’re irregular in size and shape, so accessing them can be difficult. In contrast, a CNT “forest” is made up of aligned pillars that provide the needed surfaces and straight pathways, so the electrolyte or saltwater can easily reach them.

However, optimizing the design of CNT electrodes for use in devices has proven tricky. Experimental evidence suggests that the morphology of the material — in particular, how the CNTs are spaced out — has a direct impact on device performance. Increasing the carbon concentration when fabricating CNT electrodes produces a more tightly packed forest and more abundant surface area. But at a certain density, performance starts to decline, perhaps because the pillars are too close together for the electrolyte or saltwater to pass through easily.

Designing for device performance


“Much work has been devoted to determining how CNT morphology affects electrode performance in various applications,” says Evelyn Wang, the Gail E. Kendall Professor of Mechanical Engineering. “But an underlying question is, ‘How can we characterize these promising electrode materials in a quantitative way, so as to investigate the role played by such details as the nanometer-scale interspacing?'”

Inspecting a cut edge of a sample can be done using a scanning electron microscope (SEM). But quantifying features, such as spacing, is difficult, time-consuming, and not very precise. Analyzing data from gas adsorption experiments works well for some porous materials, but not for CNT forests. Moreover, such methods destroy the material being tested, so samples whose morphologies have been characterized can’t be used in tests of overall device performance.

For the past two years, Wang and Mutha have been working on a better option. “We wanted to develop a nondestructive method that combines simple electrochemical experiments with a mathematical model that would let us ‘back calculate’ the interspacing in a CNT forest,” Mutha says. “Then we could estimate the porosity of the CNT forest — without destroying it.”

Adapting the conventional model

One widely used method for studying porous electrodes is electrochemical impedance spectroscopy (EIS). It involves pulsing the voltage across electrodes in an electrochemical cell at a set frequency while monitoring "impedance," a measure that depends on both the available storage space and the resistance to flow. The set of impedance measurements taken at different frequencies is called the "frequency response."

The classic model describing porous media uses that frequency response to calculate how much open space there is in a porous material. “So we should be able to use [the model] to calculate the space between the carbon nanotubes in a CNT electrode,” Mutha says.

But there's a problem: the model assumes that all pores are uniform, cylindrical voids, a description that doesn't fit electrodes made of CNTs. Mutha modified the model to more accurately define the pores in CNT materials as the void spaces surrounding solid pillars. While others have similarly altered the classic model, Mutha took her alterations a step further. The nanotubes in a CNT material are unlikely to be packed uniformly, so she added to her equations the ability to account for variations in the spacing between the nanotubes. With this modified model, Mutha could analyze EIS data from real samples to calculate CNT spacings.
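One way to picture the modification is to start from the classic transmission-line description of a single uniform pore, often attributed to de Levie, and then average it over a spread of ionic resistances standing in for the non-uniform spacings. The sketch below does that with illustrative parameters; it is an approximation of the idea, not Mutha's actual equations.

```python
# A rough numerical sketch (not Mutha's actual equations): the classic
# transmission-line ("de Levie") impedance of one uniform pore, then
# the same response averaged over a spread of ionic resistances that
# stands in for non-uniform inter-CNT spacings. Parameters are
# illustrative only.
import numpy as np

def pore_impedance(omega, r_ion, c_dl, depth):
    """de Levie impedance of a uniform pore: r_ion is ionic resistance
    per unit length (ohm/m), c_dl is capacitance per unit length (F/m)."""
    k = np.sqrt(1j * omega * r_ion * c_dl)
    return np.sqrt(r_ion / (1j * omega * c_dl)) / np.tanh(k * depth)

omega = np.logspace(-2, 4, 200)  # angular frequency sweep, rad/s
z_uniform = pore_impedance(omega, r_ion=1e6, c_dl=1e-3, depth=1e-4)

# Tighter spacings mean higher ionic resistance, so a spacing
# distribution becomes a resistance distribution; average over it.
rng = np.random.default_rng(0)
r_samples = 1e6 * rng.lognormal(mean=0.0, sigma=0.5, size=500)
z_spread = np.mean(
    [pore_impedance(omega, r, 1e-3, 1e-4) for r in r_samples], axis=0)

# The uniform pore shows a sharp 45-degree-to-vertical knee in the
# impedance plot; the averaged response transitions gradually instead.
for label, z in [("uniform", z_uniform), ("distributed", z_spread)]:
    print(label, "impedance at mid-sweep:", z[100])
```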

Using the model

To demonstrate her approach, Mutha first fabricated a series of laboratory samples and then measured their frequency response. In collaboration with Yuan "Jenny" Lu '15, a materials science and engineering graduate, she deposited thin layers of aligned CNTs onto silicon wafers inside a furnace and then used water vapor to separate the CNTs from the silicon, producing free-standing forests of nanotubes. To vary the CNT spacing, she used a technique developed by MIT collaborators in the Department of Aeronautics and Astronautics, Professor Brian Wardle and postdoctoral associate Itai Stein PhD '16. Using a custom plastic device, she mechanically squeezed her samples from four sides, thereby packing the nanotubes together more tightly and increasing the volume fraction — that is, the fraction of the total volume occupied by the solid CNTs.

To test the frequency response of the samples, she used a glass beaker containing three electrodes immersed in an electrolyte. One electrode is the CNT-coated sample, while the other two are used to monitor the voltage and to absorb and measure the current. Using that setup, she first measured the capacitance of each sample, meaning how much charge it could store in each square centimeter of surface area at a given constant voltage. She then ran EIS tests on the samples and analyzed results using her modified porous media model.

Results for the three volume fractions tested show the same trends. As the voltage pulses become less frequent, the curves initially rise at about a 45-degree slope. But at some point, each one shifts toward vertical, with resistance becoming constant and impedance continuing to rise.

As Mutha explains, those trends are typical of EIS analyses. “At high frequencies, the voltage changes so quickly that — because of resistance in the CNT forest — it doesn’t penetrate the depth of the entire electrode material, so the response comes only from the surface or partway in,” she says. “But eventually the frequency is low enough that there’s time between pulses for the voltage to penetrate and for the whole sample to respond.”

Resistance is no longer a noticeable factor, so the line becomes vertical, with the capacitance component causing impedance to rise as more charged particles attach to the CNTs. That switch to vertical occurs earlier with the lower-volume-fraction samples. In sparser forests, the spaces are larger, so the resistance is lower.

The most striking feature of Mutha’s results is the gradual transition from the high-frequency to the low-frequency regime. Calculations from a model based on uniform spacing — the usual assumption — show a sharp transition from partial to complete electrode response. Because Mutha’s model incorporates subtle variations in spacing, the transition is gradual rather than abrupt. Her experimental measurements and model results both exhibit that behavior, suggesting that the modified model is more accurate.

By combining their impedance spectroscopy results with their model, the MIT researchers inferred the CNT interspacing in their samples. Since the forest packing geometry is unknown, they performed the analyses based on three- and six-pillar configurations to establish upper and lower bounds. Their calculations showed that spacing can range from 100 nanometers in sparse forests to below 10 nanometers in densely packed forests.
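For intuition about those numbers, the back-calculation can be reduced to simple geometry. The sketch below assumes an idealized hexagonal array of solid pillars and a nominal 8-nanometer CNT diameter (an assumed figure, not a measurement from the study), and derives the gap between tubes from the volume fraction.

```python
# A small geometry sketch: back-calculate the inter-CNT gap from the
# volume fraction, assuming an idealized hexagonal array of solid
# pillars and an assumed 8 nm CNT outer diameter.
import math

def hex_gap_nm(volume_fraction, diameter_nm=8.0):
    """For hexagonal packing, Vf = pi * d^2 / (2 * sqrt(3) * s^2),
    where s is the center-to-center spacing; the open gap is s - d."""
    s = diameter_nm * math.sqrt(math.pi / (2 * math.sqrt(3) * volume_fraction))
    return s - diameter_nm

for vf in (0.01, 0.05, 0.10, 0.26):
    print(f"volume fraction {vf:.0%}: gap ~ {hex_gap_nm(vf):.1f} nm")
```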

Comparing approaches

Work in collaboration with Wardle and Stein has validated the two groups’ differing approaches to determining CNT morphology. In their studies, Wardle and Stein use an approach similar to Monte Carlo modeling, which is a statistical technique that involves simulating the behavior of an uncertain system thousands of times under varying assumptions to produce a range of plausible outcomes, some more likely than others. For this application, they assumed a random distribution of “seeds” for carbon nanotubes, simulated their growth, and then calculated characteristics, such as inter-CNT spacing with an associated variability. Along with other factors, they assigned some degree of waviness to the individual CNTs to test the impact on the calculated spacing.
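A stripped-down version of that procedure is easy to sketch: scatter random seed points in a patch and compute statistics of the nearest-neighbor spacings. The version below omits growth and waviness entirely and uses illustrative parameters.

```python
# A stripped-down Monte Carlo sketch in the same spirit: scatter random
# CNT "seeds" in a square patch and report nearest-neighbor spacing
# statistics. Growth and waviness are omitted; parameters are
# illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def nn_spacing_stats(n_tubes=2000, box_nm=1000.0):
    """Mean and spread of nearest-neighbor distances (nm) for randomly
    seeded CNTs in a box_nm x box_nm patch."""
    pts = rng.uniform(0.0, box_nm, size=(n_tubes, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each point's zero self-distance
    nn = d.min(axis=1)
    return nn.mean(), nn.std()

mean_nm, std_nm = nn_spacing_stats()
print(f"nearest-neighbor spacing: {mean_nm:.1f} +/- {std_nm:.1f} nm")
```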

To compare their approaches, the two MIT teams performed parallel analyses that determined average spacing at increasing volume fractions. The trends matched well, with spacing decreasing as the volume fraction increased. However, at a volume fraction of about 26 percent, the EIS spacing estimates suddenly go up — an outcome that Mutha believes may reflect packing irregularities caused by buckling of the CNTs as she was densifying them.

To investigate the role played by waviness, Mutha compared the variabilities in her results with those in Stein’s results from simulations assuming different degrees of waviness. At high volume fractions, the EIS variabilities were closest to those from the simulations assuming little or no waviness. But at low volume fractions, the closest match came from simulations assuming high waviness.

Based on those findings, Mutha concludes that waviness should be considered when performing EIS analyses — at least in some cases. “To accurately predict the performance of devices with sparse CNT electrodes, we may need to model the electrode as having a broad distribution of interspacings due to the waviness of the CNTs,” she says. “At higher volume fractions, waviness effects may be negligible, and the system can be modeled as simple pillars.”

The researchers’ nondestructive yet quantitative technique provides device designers with a valuable new tool for optimizing the morphology of porous electrodes for a wide range of applications. Already, Mutha and Wang have been using it to predict the performance of supercapacitors and desalination systems. Recent work has focused on designing a high-performance, portable device for the rapid desalination of brackish water. Results to date show that using their approach to optimize the design of CNT electrodes and the overall device simultaneously can as much as double the salt adsorption capacity of the system, while speeding up the rate at which clean water is produced.

This research was supported in part by the MIT Energy Initiative Seed Fund Program and by the King Fahd University of Petroleum and Minerals (KFUPM) in Dhahran, Saudi Arabia, through the Center for Clean Water and Clean Energy at MIT and KFUPM. Mutha’s work was supported by a National Science Foundation Graduate Research Fellowship and Stein’s work by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship Program.