MIT: Mass producing cell-sized robots that could monitor conditions inside oil/gas pipelines or search out disease while floating through the bloodstream


This photo shows circles on a graphene sheet where the sheet is draped over an array of round posts, creating stresses that will cause these discs to separate from the sheet. The gray bar across the sheet is liquid being used to lift the discs from the surface. Credit: Felice Frankel

Tiny robots no bigger than a cell could be mass-produced using a new method developed by researchers at MIT. The microscopic devices, which the team calls “syncells” (short for synthetic cells), might eventually be used to monitor conditions inside an oil or gas pipeline, or to search out disease while floating through the bloodstream.

The key to making such tiny devices in large quantities lies in a method the team developed for controlling the natural fracturing process of atomically thin, brittle materials, directing the fracture lines so that they produce minuscule pockets of a predictable size and shape. Embedded inside these pockets are electronic circuits and materials that can collect, record, and output data.

The novel process, called “auto-perforation,” is described in a paper published today in the journal Nature Materials, by MIT Professor Michael Strano, postdoc Pingwei Liu, graduate student Albert Liu, and eight others at MIT.

The system uses a two-dimensional form of carbon called graphene, which forms the outer structure of the tiny syncells. One layer of the material is laid down on a surface, then tiny dots of a polymer material, containing the electronics for the devices, are deposited by a sophisticated laboratory version of an inkjet printer. Then, a second layer of graphene is laid on top.

Controlled fracturing

People think of graphene, an ultrathin but extremely strong material, as being “floppy,” but it is actually brittle, Strano explains. Rather than considering that brittleness a problem, though, the team figured out that it could be used to their advantage.

“We discovered that you can use the brittleness,” says Strano, who is the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “It’s counterintuitive. Before this work, if you told me you could fracture a material to control its shape at the nanoscale, I would have been incredulous.”

But the new system does just that. It controls the fracturing process so that rather than generating random shards of material, like the remains of a broken window, it produces pieces of uniform shape and size. “What we discovered is that you can impose a strain field to cause the fracture to be guided, and you can use that for controlled fabrication,” Strano says.

When the top layer of graphene is placed over the array of polymer dots, which form round pillar shapes, the places where the graphene drapes over the round edges of the pillars form lines of high strain in the material.

As Albert Liu describes it, “imagine a tablecloth falling slowly down onto the surface of a circular table. One can very easily visualize the developing circular strain toward the table edges, and that’s very much analogous to what happens when a flat sheet of graphene folds around these printed polymer pillars.”

As a result, the fractures are concentrated right along those boundaries, Strano says. “And then something pretty amazing happens: The graphene will completely fracture, but the fracture will be guided around the periphery of the pillar.” The result is a neat, round piece of graphene that looks as if it had been cleanly cut out by a microscopic hole punch.

Because there are two layers of graphene, above and below the polymer pillars, the two resulting disks adhere at their edges to form something like a tiny pita bread pocket, with the polymer sealed inside. “And the advantage here is that this is essentially a single step,” in contrast to many complex clean-room steps needed by other processes to try to make microscopic robotic devices, Strano says.
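
To picture the geometry, here is a minimal toy model (a sketch, not the authors' code) of how draping a sheet over a circular pillar concentrates strain into a ring that guides the crack; the Gaussian strain profile and every number in it are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy model of autoperforation: a 2-D sheet draped over a circular
# pillar develops a narrow band of high strain where it folds over the
# pillar's edge, so a fracture threshold selects a ring -- a clean disc
# outline. The strain profile and all parameters below are illustrative
# assumptions, not values from the Nature Materials paper.

PILLAR_RADIUS = 5.0       # micrometers (assumed)
EDGE_WIDTH = 0.4          # width of the high-strain band (assumed)
FRACTURE_THRESHOLD = 0.5  # fraction of peak strain that cracks the sheet (assumed)

x = np.linspace(-10, 10, 801)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)        # distance from the pillar center

# Strain peaks where the sheet folds over the pillar edge (r ~ radius)
strain = np.exp(-((r - PILLAR_RADIUS) / EDGE_WIDTH) ** 2)

fractured = strain > FRACTURE_THRESHOLD
print(f"fracture confined to r = {r[fractured].min():.2f}-{r[fractured].max():.2f} um")
# -> a narrow annulus around r = 5, i.e. the crack is guided around the
#    pillar's periphery, cutting out a disc.
```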

The researchers have also shown that other two-dimensional materials in addition to graphene, such as molybdenum disulfide and hexagonal boron nitride, work just as well.

Cell-like robots

Ranging in size from that of a human red blood cell, about 10 micrometers across, up to about 10 times that size, these tiny objects “start to look and behave like a living biological cell. In fact, under a microscope, you could probably convince most people that it is a cell,” Strano says.

This work follows up on earlier research by Strano and his students on developing syncells that could gather information about the chemistry or other properties of their surroundings using sensors on their surface, and store the information for later retrieval. For example, a swarm of such particles could be injected at one end of a pipeline and retrieved at the other, to gain data about conditions inside it.

While the new syncells do not yet have as many capabilities as the earlier ones, those were assembled individually, whereas this work demonstrates a way of easily mass-producing such devices.

Apart from the syncells’ potential uses for industrial or biomedical monitoring, the way the tiny devices are made is itself an innovation with great potential, according to Albert Liu. “This general procedure of using controlled fracture as a production method can be extended across many length scales,” he says. “[It could potentially be used with] essentially any 2-D materials of choice, in principle allowing future researchers to tailor these atomically thin surfaces into any desired shape or form for applications in other disciplines.”

This is, Albert Liu says, “one of the only ways available right now to produce stand-alone integrated microelectronics on a large scale” that can function as independent, free-floating devices. Depending on the nature of the electronics inside, the devices could be provided with capabilities for movement, detection of various chemicals or other parameters, and memory storage.

There are a wide range of potential new applications for such cell-sized robotic devices, says Strano, who details many such possible uses in a book he co-authored with Shawn Walsh, an expert at Army Research Laboratories, on the subject, called “Robotic Systems and Autonomous Platforms,” which is being published this month by Elsevier Press.

As a demonstration, the team “wrote” the letters M, I, and T into a memory array within a syncell, which stores the information as varying levels of electrical conductivity. This information can then be “read” using an electrical probe, showing that the material can function as a form of electronic memory into which data can be written, read, and erased at will.

It can also retain the data without the need for power, allowing information to be collected at a later time. The researchers have demonstrated that the particles are stable over a period of months even when floating around in water, which is a harsh solvent for electronics, according to Strano.
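
The memory behavior can be sketched in a few lines of code. The two-level encoding, threshold, and class below are hypothetical illustrations of the concept (the actual devices store analog memristive conductance states addressed with a probe), not the paper's interface.

```python
# Sketch of the syncell memory concept: data is stored as conductance
# levels that persist without power and can be written, read, and
# erased electrically. The two-level encoding and all names here are
# hypothetical illustrations, not the device interface from the paper.

LOW_G, HIGH_G = 1e-6, 1e-3  # siemens; assumed "0" and "1" conductances

class SyncellMemory:
    def __init__(self, n_cells: int):
        self.conductance = [LOW_G] * n_cells          # erased state

    def write(self, text: str) -> None:
        bits = "".join(f"{ord(c):08b}" for c in text)
        for i, b in enumerate(bits):
            self.conductance[i] = HIGH_G if b == "1" else LOW_G

    def read(self, n_chars: int) -> str:
        bits = "".join("1" if g > 1e-4 else "0"
                       for g in self.conductance[:8 * n_chars])
        return "".join(chr(int(bits[i:i + 8], 2))
                       for i in range(0, len(bits), 8))

    def erase(self) -> None:
        self.conductance = [LOW_G] * len(self.conductance)

mem = SyncellMemory(n_cells=24)
mem.write("MIT")
print(mem.read(3))  # -> "MIT"; the state persists until erase() is called
mem.erase()
```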

“I think it opens up a whole new toolkit for micro- and nanofabrication,” he says.

More information: 
Pingwei Liu et al, Autoperforation of 2D materials for generating two-terminal memristive Janus particles, Nature Materials (2018). DOI: 10.1038/s41563-018-0197-z

Provided by: Massachusetts Institute of Technology

MIT: Research opens route to flexible electronics made from exotic materials – Provides a cost-effective alternative that could perform better than current silicon-based devices



MIT researchers have devised a way to grow single-crystal GaN thin films on a GaN substrate through two-dimensional materials. The GaN thin film is then exfoliated using a flexible substrate, showing the rainbow colors that come from thin-film interference. This technology will pave the way to flexible electronics and the reuse of the wafers.

Photo credits: Wei Kong and Kuan Qiao

Cost-effective method produces semiconducting films from materials that outperform silicon.

The vast majority of computing devices today are made from silicon, the second most abundant element on Earth, after oxygen. Silicon can be found in various forms in rocks, clay, sand, and soil. And while it is not the best semiconducting material that exists on the planet, it is by far the most readily available. As such, silicon is the dominant material used in most electronic devices, including sensors, solar cells, and the integrated circuits within our computers and smartphones.

Now MIT engineers have developed a technique to fabricate ultrathin semiconducting films made from a host of exotic materials other than silicon. To demonstrate their technique, the researchers fabricated flexible films made from gallium arsenide, gallium nitride, and lithium fluoride — materials that exhibit better performance than silicon but until now have been prohibitively expensive to produce in functional devices.

The new technique, researchers say, provides a cost-effective method to fabricate flexible electronics made from any combination of semiconducting elements, that could perform better than current silicon-based devices.

“We’ve opened up a way to make flexible electronics with so many different material systems, other than silicon,” says Jeehwan Kim, the Class of 1947 Career Development Associate Professor in the departments of Mechanical Engineering and Materials Science and Engineering. Kim envisions the technique can be used to manufacture low-cost, high-performance devices such as flexible solar cells, and wearable computers and sensors.

Details of the new technique are reported today in Nature Materials. In addition to Kim, the paper’s MIT co-authors include Wei Kong, Huashan Li, Kuan Qiao, Yunjo Kim, Kyusang Lee, Doyoon Lee, Tom Osadchy, Richard Molnar, Yang Yu, Sang-hoon Bae, Yang Shao-Horn, and Jeffrey Grossman, along with researchers from Sun Yat-Sen University, the University of Virginia, the University of Texas at Dallas, the U.S. Naval Research Laboratory, Ohio State University, and Georgia Tech.

Now you see it, now you don’t

In 2017, Kim and his colleagues devised a method to produce “copies” of expensive semiconducting materials using graphene — an atomically thin sheet of carbon atoms arranged in a hexagonal, chicken-wire pattern. They found that when they stacked graphene on top of a pure, expensive wafer of semiconducting material such as gallium arsenide, then flowed atoms of gallium and arsenide over the stack, the atoms appeared to interact in some way with the underlying atomic layer, as if the intermediate graphene were invisible or transparent. As a result, the atoms assembled into the precise, single-crystalline pattern of the underlying semiconducting wafer, forming an exact copy that could then easily be peeled away from the graphene layer.

The technique, which they call “remote epitaxy,” provided an affordable way to fabricate multiple films of gallium arsenide, using just one expensive underlying wafer.

Soon after they reported their first results, the team wondered whether their technique could be used to copy other semiconducting materials. They tried applying remote epitaxy to silicon, and also germanium — two inexpensive semiconductors — but found that when they flowed these atoms over graphene they failed to interact with their respective underlying layers. It was as if graphene, previously transparent, became suddenly opaque, preventing atoms of silicon and germanium from “seeing” the atoms on the other side.

As it happens, silicon and germanium are two elements that exist within the same group of the periodic table of elements. Specifically, the two elements belong in group four, a class of materials that are ionically neutral, meaning they have no polarity.

“This gave us a hint,” says Kim.

Perhaps, the team reasoned, atoms can only interact with each other through graphene if they have some ionic charge. For instance, in the case of gallium arsenide, gallium has a negative charge at the interface, compared with arsenic’s positive charge. This charge difference, or polarity, may have helped the atoms to interact through graphene as if it were transparent, and to copy the underlying atomic pattern.

“We found that the interaction through graphene is determined by the polarity of the atoms. For the strongest ionically bonded materials, they interact even through three layers of graphene,” Kim says. “It’s similar to the way two magnets can attract, even through a thin sheet of paper.”
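
The polarity rule Kim describes can be captured in a toy decision function. The polarity classes and layer cutoffs below are illustrative assumptions chosen to match the behavior described in the article, not measured values from the paper.

```python
# Toy encoding of the "rules of atomic interaction through graphene":
# the more ionic (polar) the film material, the more graphene layers it
# can "see" the template wafer through. The cutoff table is an assumed
# illustration consistent with the article, not data from the paper.

# 0 = nonpolar (group IV), 1 = weakly polar (III-V), 2 = strongly ionic
MAX_TRANSPARENT_LAYERS = {0: 0, 1: 1, 2: 3}  # assumed cutoffs

def remote_epitaxy_copies(polarity_class: int, graphene_layers: int) -> bool:
    """True if the wafer's lattice is expected to template the film."""
    return graphene_layers <= MAX_TRANSPARENT_LAYERS[polarity_class]

for material, pol in [("Si/Ge", 0), ("GaAs", 1), ("LiF", 2)]:
    for layers in (1, 2, 3):
        verdict = "copies" if remote_epitaxy_copies(pol, layers) else "no copy"
        print(f"{material:5s} through {layers} graphene layer(s): {verdict}")
```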


Opposites attract

The researchers tested their hypothesis by using remote epitaxy to copy semiconducting materials with various degrees of polarity, from neutral silicon and germanium, to slightly polarized gallium arsenide, and finally, highly polarized lithium fluoride — a better, more expensive semiconductor than silicon.

They found that the greater the degree of polarity, the stronger the atomic interaction, even, in some cases, through multiple sheets of graphene. Each film they were able to produce was flexible and merely tens to hundreds of nanometers thick.

The material through which the atoms interact also matters, the team found. In addition to graphene, they experimented with an intermediate layer of hexagonal boron nitride (hBN), a material that resembles graphene’s atomic pattern and has a similar Teflon-like quality, enabling overlying materials to easily peel off once they are copied.

However, hBN is made of oppositely charged boron and nitrogen atoms, which generate a polarity within the material itself. In their experiments, the researchers found that any atoms flowing over hBN, even if they were highly polarized themselves, were unable to interact with their underlying wafers completely, suggesting that the polarity of both the atoms of interest and the intermediate material determines whether the atoms will interact and form a copy of the original semiconducting wafer.

“Now we really understand there are rules of atomic interaction through graphene,” Kim says.

With this new understanding, he says, researchers can now simply look at the periodic table and pick two elements of opposite charge. Once they acquire or fabricate a main wafer made from the same elements, they can then apply the team’s remote epitaxy techniques to fabricate multiple, exact copies of the original wafer.


“People have mostly used silicon wafers because they’re cheap,” Kim says. “Now our method opens up a way to use higher-performing, nonsilicon materials. You can just purchase one expensive wafer and copy it over and over again, and keep reusing the wafer. And now the material library for this technique is totally expanded.”

Kim envisions that remote epitaxy can now be used to fabricate ultrathin, flexible films from a wide variety of previously exotic, semiconducting materials — as long as the materials are made from atoms with a degree of polarity. Such ultrathin films could potentially be stacked, one on top of the other, to produce tiny, flexible, multifunctional devices, such as wearable sensors, flexible solar cells, and even, in the distant future, “cellphones that attach to your skin.”

“In smart cities, where we might want to put small computers everywhere, we would need low power, highly sensitive computing and sensing devices, made from better materials,” Kim says. “This [study] unlocks the pathway to those devices.”

This research was supported in part by the Defense Advanced Research Projects Agency, the Department of Energy, the Air Force Research Laboratory, LG Electronics, Amore Pacific, LAM Research, and Analog Devices.

 

Jennifer Chu | MIT News Office

MIT: A big new home for the ultrasmall



The MIT.nano building, at the center of campus adjacent to the Great Dome, has expansive glass facades that allow natural light into the labs while giving visitors a clear view of the research in action.

MIT.nano building, the largest of its kind, will usher in a new age of nanoscale advancements.

Nanotechnology, the cutting-edge research field that explores ultrasmall materials, organisms, and devices, has now been graced with the largest, most sophisticated, and most accessible university research facility of its kind in the U.S.: the new $400 million MIT.nano building, which will have its official opening ceremonies next week.

The state-of-the-art facility includes two large floors of connected clean-room spaces that are open to view from the outside and available for use by an extraordinary number and variety of researchers across the Institute. It also features a whole floor of undergraduate chemistry teaching labs, and an ultrastable basement level dedicated to electron microscopes and other exquisitely sensitive imaging and measurement tools.

“In recent decades, we have gained the ability to see into the nanoscale with breathtaking precision. This insight has led to the development of tools and instruments that allow us to design and manipulate matter like nature does, atom by atom and molecule by molecule,” says Vladimir Bulović, the Fariborz Maseeh Professor in Emerging Technology and founding director of MIT.nano. “MIT.nano has arrived on campus at the dawn of the Nano Age. In the decades ahead, its open-access facilities for nanoscience and nanoengineering will equip our community with instruments and processes that can further harness the power of nanotechnology in service to humanity’s greatest challenges.”

“In terms of vibrations and electromagnetic noise, MIT.nano may be the quietest space on campus. But in a community where more than half of recently tenured faculty do work at the nanoscale, MIT.nano’s superb shared facilities guarantee that it will become a lively center of community and collaboration,” says MIT President L. Rafael Reif. “I am grateful to the exceptional team — including Provost Martin Schmidt, Founding Director Vladimir Bulović, and many others — that delivered this extraordinarily sophisticated building on an extraordinarily inaccessible construction site, making a better MIT so we can help to make a better world.”

Accessible and flexible

The 214,000-square-foot building, with its soaring glass facades, sophisticated design and instrumentation, and powerful air-exchange systems, lies at the heart of campus and just off the Infinite Corridor. It took shape during six years of design and construction, and was delivered exactly on schedule and on budget, a rare achievement for such a massive and technologically complex construction project.

“MIT.nano is a game-changer for the MIT research enterprise,” says Vice President for Research Maria Zuber.

“It will provide measurement, imaging, and fabrication capabilities that will dramatically advance science and technology in disciplines across the Institute,” adds Provost Martin Schmidt.

At the heart of the building are two levels of clean rooms — research environments in which the air is continuously scrubbed and replaced to maintain a standard that allows no more than 100 particles of 0.5 microns or larger within a cubic foot of air. To achieve such cleanliness, work on the building has included strict filtration measures and access restrictions for more than a year, and at the moment, with the spaces not yet in full use, they far exceed that standard.
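
For scale, 100 particles per cubic foot works out to roughly 100 × 35.3 ≈ 3,500 particles per cubic meter, which corresponds to the ISO Class 5 limit (3,520 particles of 0.5 microns or larger per cubic meter) in the newer ISO 14644-1 classification.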

All of the lab and instrumentation spaces in the building will be used as shared facilities, accessible to any MIT researcher who needs the specialized tools that will be installed there over the coming months and years. The tools will be continually upgraded, as the building is designed to be flexible and ready for the latest advances in equipment for making, studying, measuring, and manipulating nanoscale objects — things measured in billionths of a meter, whether they be technological, biological, or chemical.

Many of the tools and instruments to be installed in MIT.nano are so costly and require so much support in services and operations that they would likely be out of reach for a single researcher or team. One of the instruments now installed and being calibrated in the basement imaging and metrology suites — sitting atop a 5-million-pound slab of concrete to provide the steadiest base possible — is a cryogenic transmission electron microscope. This multimillion dollar instrument is hosted in an equally costly room with fine-tuned control of temperature and humidity, specialized features to minimize the mechanical and electromagnetic interference, and a technical support team. The device, one of two currently being installed in MIT.nano, will enable detailed 3-D observations of cells or materials held at very low liquid-nitrogen temperatures, giving a glimpse into the exquisite nanoscale features of the soft-matter world.

Almost half of MIT.nano’s floor space — 100,000 square feet — is devoted to labs, about 100 times the size of the typical private lab space of a young experimental research group at MIT, Bulović says. Private labs typically take a few years to build out, and once in place often house valuable equipment that is idle for at least part of the time. It will similarly take a few years to fully build out MIT.nano’s shared labs, but Bulović expects that the growing collection of advanced instruments will rarely be idle. The instrument sets will be selected and designed to drastically improve a researcher’s ability to hit the ground running, with access to the best tools from the start, he says.

Principal investigators often “find there’s a benefit to contributing tools to the community so they can be shared and perfected through their use,” Bulović says. “They recognize that as these tools are not needed for their own work 24/7, attracting additional instrument users can generate a revenue stream for the tool, which supports maintenance and future upgrades while also enhancing the research output of labs that would not have access to those tools otherwise.”

A facility sized to meet demand

Once MIT.nano is fully outfitted, over 2,000 MIT faculty and researchers are expected to use the new facilities every year, according to Bulović. Besides its clean-room floors, instrumentation floor, chemistry labs, and the top-floor prototyping labs, the new building also houses a unique facility at MIT: a two-story virtual-reality and visualization space called the Immersion Lab. It could be used by researchers studying subcellular-resolution images of biological tissues or complex computer simulations, or planetary scientists walking through a reproduced Martian surface looking for geologically interesting sites; it may even lend itself to artistic creations or performances, he says. “It’s a unique space. The beauty of it is it will connect to the huge datasets” coming from instruments such as the cryoelectron microscopes, or from simulations generated by artificial intelligence labs, or from other external datasets.

The chemistry labs on the building’s fifth floor, which can accommodate a dozen classes of a dozen students each, are already fully outfitted and in full use for this fall. The labs allow undergraduate chemistry students an exceptionally full and up-to-date experience of lab processes and tools.

“The Department of Chemistry is delighted to move into our new state-of-the-art Undergraduate Teaching Laboratories (UGTL) in MIT.nano,” says department head Timothy Jamison. “The synergy between our URIECA curriculum and this new space enables us to provide an even stronger educational foundation in experimental chemistry to our students. Vladimir Bulović and the MIT.nano team have been wonderful partners at all stages — throughout the design, construction, and move — and we look forward to other opportunities resulting from this collaboration and the presence of our UGTL in MIT.nano.”

The building itself was designed to be far more open and accessible than any comparable clean-room facility in the world. Those outside the labs can watch through MIT.nano’s many windows and see the use of these specialized devices and how such labs work. Meanwhile, researchers themselves can more easily interact with each other and see the sunshine and the gently waving bamboo plants outdoors as a reminder of the outside world that they are working to benefit.

A courtyard path on the south side of the building is named the Improbability Walk, in honor of the late MIT Institute Professor Emerita Mildred “Millie” Dresselhaus. The name is a nod to a statement by the beloved mentor, collaborator, teacher, and world-renowned pioneer in solid-state physics and nanoscale engineering, who once said, “My background is so improbable — that I’d be here from where I started.”

Those who walk through the building’s sunlight-soaked corridors and galleries will notice walls surfaced with panels of limestone from the Yangtze Platform of southwestern China. The limestone’s delicate patterns of fine horizontal lines are made up of tiny microparticles, such as bits of ancient microorganisms, laid down at the bottom of primeval waters before dinosaurs roamed the Earth. The very newest marvels to emerge in nanotechnology will thus be coming into existence right within view of some of their most ancient minuscule precursors.

MIT: New battery technology gobbles up carbon dioxide – Ultimately may help reduce the emission of the greenhouse gas to the atmosphere



This scanning electron microscope image shows the carbon cathode of a carbon-dioxide-based battery made by MIT researchers, after the battery was discharged. It shows the buildup of carbon compounds on the surface, composed of carbonate material that could be derived from power plant emissions, compared to the original pristine surface (inset). Credit: Courtesy of the researchers

Lithium-based battery could make use of greenhouse gas before it ever gets into the atmosphere.

A new type of battery developed by researchers at MIT could be made partly from carbon dioxide captured from power plants. Rather than attempting to convert carbon dioxide to specialized chemicals using metal catalysts, which is currently highly challenging, this battery could continuously convert carbon dioxide into a solid mineral carbonate as it discharges.


While still based on early-stage research and far from commercial deployment, the new battery formulation could open up new avenues for tailoring electrochemical carbon dioxide conversion reactions, which may ultimately help reduce the emission of the greenhouse gas to the atmosphere.

The battery is made from lithium metal, carbon, and an electrolyte that the researchers designed. The findings are described today in the journal Joule, in a paper by assistant professor of mechanical engineering Betar Gallant, doctoral student Aliza Khurram, and postdoc Mingfu He.

Currently, power plants equipped with carbon capture systems generally use up to 30 percent of the electricity they generate just to power the capture, release, and storage of carbon dioxide. Anything that can reduce the cost of that capture process, or that can result in an end product that has value, could significantly change the economics of such systems, the researchers say.

However, “carbon dioxide is not very reactive,” Gallant explains, so “trying to find new reaction pathways is important.” Generally, the only way to get carbon dioxide to exhibit significant activity under electrochemical conditions is with large energy inputs in the form of high voltages, which can be an expensive and inefficient process. Ideally, the gas would undergo reactions that produce something worthwhile, such as a useful chemical or a fuel. However, efforts at electrochemical conversion, usually conducted in water, remain hindered by high energy inputs and poor selectivity of the chemicals produced.

Gallant and her co-workers, whose expertise has to do with nonaqueous (not water-based) electrochemical reactions such as those that underlie lithium-based batteries, looked into whether carbon-dioxide-capture chemistry could be put to use to make carbon-dioxide-loaded electrolytes — one of the three essential parts of a battery — where the captured gas could then be used during the discharge of the battery to provide a power output.

This approach is different from releasing the carbon dioxide back to the gas phase for long-term storage, as is now used in carbon capture and sequestration, or CCS. That field generally looks at ways of capturing carbon dioxide from a power plant through a chemical absorption process and then either storing it in underground formations or chemically altering it into a fuel or a chemical feedstock.

Instead, this team developed a new approach that could potentially be used right in the power plant waste stream to make material for one of the main components of a battery.

While interest has grown recently in the development of lithium-carbon-dioxide batteries, which use the gas as a reactant during discharge, the low reactivity of carbon dioxide has typically required the use of metal catalysts. Not only are these expensive, but their function remains poorly understood, and reactions are difficult to control.

By incorporating the gas in a liquid state, however, Gallant and her co-workers found a way to achieve electrochemical carbon dioxide conversion using only a carbon electrode. The key is to pre-activate the carbon dioxide by incorporating it into an amine solution.

“What we’ve shown for the first time is that this technique activates the carbon dioxide for more facile electrochemistry,” Gallant says. “These two chemistries — aqueous amines and nonaqueous battery electrolytes — are not normally used together, but we found that their combination imparts new and interesting behaviors that can increase the discharge voltage and allow for sustained conversion of carbon dioxide.”

They showed through a series of experiments that this approach does work, and can produce a lithium-carbon dioxide battery with voltage and capacity that are competitive with that of state-of-the-art lithium-gas batteries. Moreover, the amine acts as a molecular promoter that is not consumed in the reaction.
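
For reference, the overall discharge reaction most often cited for lithium-carbon dioxide cells is 4Li + 3CO₂ → 2Li₂CO₃ + C, which fixes the gas as solid lithium carbonate plus carbon. That is consistent with the carbonate buildup visible on the discharged cathode in the image above, with the amine serving only as a promoter along the way.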

The key was developing the right electrolyte system, Khurram explains. In this initial proof-of-concept study, they decided to use a nonaqueous electrolyte because it would limit the available reaction pathways and therefore make it easier to characterize the reaction and determine its viability. The amine material they chose is currently used for CCS applications, but had not previously been applied to batteries.

This early system has not yet been optimized and will require further development, the researchers say. For one thing, the cycle life of the battery is limited to 10 charge-discharge cycles, so more research is needed to improve rechargeability and prevent degradation of the cell components. “Lithium-carbon dioxide batteries are years away” as a viable product, Gallant says, as this research covers just one of several needed advances to make them practical.

But the concept offers great potential, according to Gallant. Carbon capture is widely considered essential to meeting worldwide goals for reducing greenhouse gas emissions, but there are not yet proven, long-term ways of disposing of or using all the resulting carbon dioxide. Underground geological disposal is still the leading contender, but this approach remains somewhat unproven and may be limited in how much it can accommodate. It also requires extra energy for drilling and pumping.

The researchers are also investigating the possibility of developing a continuous-operation version of the process, which would use a steady stream of carbon dioxide under pressure with the amine material, rather than a preloaded supply of the material, thus allowing it to deliver a steady power output as long as the battery is supplied with carbon dioxide. Ultimately, they hope to make this into an integrated system that will carry out both the capture of carbon dioxide from a power plant’s emissions stream, and its conversion into an electrochemical material that could then be used in batteries. “It’s one way to sequester it as a useful product,” Gallant says.

“It was interesting that Gallant and co-workers cleverly combined the prior knowledge from two different areas, metal-gas battery electrochemistry and carbon-dioxide capture chemistry, and succeeded in increasing both the energy density of the battery and the efficiency of the carbon-dioxide capture,” says Kisuk Kang, a professor at Seoul National University in South Korea, who was not associated with this research.

“Even though more precise understanding of the product formation from carbon dioxide may be needed in the future, this kind of interdisciplinary approach is very exciting and often offers unexpected results, as the authors elegantly demonstrated here,” Kang adds.

MIT’s Department of Mechanical Engineering provided support for the project.

MIT Study: Adding power choices reduces cost and risk of carbon-free electricity



New MIT research shows that, unless steady, continuous carbon-free sources of electricity are included in the mix, costs of decarbonizing the electrical system could be prohibitive and end up derailing attempts to mitigate the most severe effects of global climate change. Image: Chelsea Turner

To curb greenhouse gas emissions, nations, states, and cities should aim for a mix of fuel-saving, flexible, and highly reliable sources.

In major legislation passed at the end of August, California committed to creating a 100 percent carbon-free electricity grid — once again leading other nations, states, and cities in setting aggressive policies for slashing greenhouse gas emissions. Now, a study by MIT researchers provides guidelines for cost-effective and reliable ways to build such a zero-carbon electricity system.

The best way to tackle emissions from electricity, the study finds, is to use the most inclusive mix of low-carbon electricity sources.

Costs have declined rapidly for wind power, solar power, and energy storage batteries in recent years, leading some researchers, politicians, and advocates to suggest that these sources alone can power a carbon-free grid. But the new study finds that across a wide range of scenarios and locations, pairing these sources with steady carbon-free resources that can be counted on to meet demand in all seasons and over long periods — such as nuclear, geothermal, bioenergy, and natural gas with carbon capture — is a less costly and lower-risk route to a carbon-free grid.

The new findings are described in a paper published today in the journal Joule, by MIT doctoral student Nestor Sepulveda, Jesse Jenkins PhD ’18, Fernando de Sisternes PhD ’14, and professor of nuclear science and engineering and Associate Provost Richard Lester.

The need for cost effectiveness

“In this paper, we’re looking for robust strategies to get us to a zero-carbon electricity supply, which is the linchpin in overall efforts to mitigate climate change risk across the economy,” Jenkins says. To achieve that, “we need not only to get to zero emissions in the electricity sector, but we also have to do so at a low enough cost that electricity is an attractive substitute for oil, natural gas, and coal in the transportation, heat, and industrial sectors, where decarbonization is typically even more challenging than in electricity.”

Sepulveda also emphasizes the importance of cost-effective paths to carbon-free electricity, adding that in today’s world, “we have so many problems, and climate change is a very complex and important one, but not the only one. So every extra dollar we spend addressing climate change is also another dollar we can’t use to tackle other pressing societal problems, such as eliminating poverty or disease.” Thus, it’s important for research not only to identify technically achievable options to decarbonize electricity, but also to find ways to achieve carbon reductions at the most reasonable possible cost.

To evaluate the costs of different strategies for deep decarbonization of electricity generation, the team looked at nearly 1,000 different scenarios involving different assumptions about the availability and cost of low-carbon technologies, geographical variations in the availability of renewable resources, and different policies on their use.

Regarding the policies, the team compared two different approaches. The “restrictive” approach permitted only the use of solar and wind generation plus battery storage, augmented by measures to reduce and shift the timing of demand for electricity, as well as long-distance transmission lines to help smooth out local and regional variations. The “inclusive” approach used all of those technologies but also permitted the option of using continual carbon-free sources, such as nuclear power, bioenergy, and natural gas with a system for capturing and storing carbon emissions. In every case the team studied, the broader mix of sources was found to be more affordable.

The cost savings of the more inclusive approach relative to the more restricted case were substantial. Including continual, or “firm,” low-carbon resources in a zero-carbon resource mix lowered costs anywhere from 10 percent to as much as 62 percent, across the many scenarios analyzed. That’s important to know, the authors stress, because in many cases existing and proposed regulations and economic incentives favor, or even mandate, a more restricted range of energy resources.
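
The flavor of the analysis can be reproduced with a toy scenario sweep. Everything numeric in the sketch below (the cost ranges, the crude portfolio cost function, the overbuild factors) is an invented stand-in for the paper's detailed capacity-expansion model, intended only to show the structure of the comparison.

```python
import random

# Toy version of the scenario analysis: sample cost assumptions, then
# compare a "restrictive" wind/solar/storage-only portfolio with an
# "inclusive" one that may also buy firm low-carbon power. Every number
# here is an illustrative assumption, not a value from the Joule paper.

random.seed(0)

def portfolio_cost(firm_allowed: bool, c: dict) -> float:
    # crude stand-in for capacity expansion: serving the last slice of
    # demand with variable sources alone forces heavy overbuild/storage
    variable = c["wind_solar"]
    balancing = c["storage"] * (0.3 if firm_allowed else 1.0)
    firm = c["firm"] * 0.4 if firm_allowed else 0.0
    overbuild = c["wind_solar"] * (0.1 if firm_allowed else 0.8)
    return variable + balancing + firm + overbuild

savings = []
for _ in range(1000):  # ~1,000 scenarios, echoing the study's design
    c = {"wind_solar": random.uniform(20, 60),   # cost-level guesses
         "storage":    random.uniform(50, 300),
         "firm":       random.uniform(60, 120)}
    savings.append(1 - portfolio_cost(True, c) / portfolio_cost(False, c))

print(f"inclusive mix cheaper in "
      f"{100 * sum(s > 0 for s in savings) / len(savings):.0f}% of scenarios; "
      f"savings range {min(savings):.0%} to {max(savings):.0%}")
```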

“The results of this research challenge what has become conventional wisdom on both sides of the climate change debate,” Lester says. “Contrary to fears that effective climate mitigation efforts will be cripplingly expensive, our work shows that even deep decarbonization of the electric power sector is achievable at relatively modest additional cost. But contrary to beliefs that carbon-free electricity can be generated easily and cheaply with wind, solar energy, and storage batteries alone, our analysis makes clear that the societal cost of achieving deep decarbonization that way will likely be far more expensive than is necessary.”


A new taxonomy for electricity sources

In looking at options for new power generation in different scenarios, the team found that the traditional way of describing different types of power sources in the electrical industry — “baseload,” “load following,” and “peaking” resources — is outdated and no longer useful, given the way new resources are being used.

Rather, they suggest, it’s more appropriate to think of power sources in three new categories: “fuel-saving” resources, which include solar, wind and run-of-the-river (that is, without dams) hydropower; “fast-burst” resources, providing rapid but short-duration responses to fluctuations in electricity demand and supply, including battery storage and technologies and pricing strategies to enhance the responsiveness of demand; and “firm” resources, such as nuclear, hydro with large reservoirs, biogas, and geothermal.

“Because we can’t know with certainty the future cost and availability of many of these resources,” Sepulveda notes, “the cases studied covered a wide range of possibilities, in order to make the overall conclusions of the study robust across that range of uncertainties.”

Range of scenarios

The group used a range of projections, made by agencies such as the National Renewable Energy Laboratory, as to the expected costs of different power sources over the coming decades, including costs similar to today’s and anticipated cost reductions as new or improved systems are developed and brought online. For each technology, the researchers chose a projected mid-range cost, along with a low-end and high-end cost estimate, and then studied many combinations of these possible future costs.

Under every scenario, cases that were restricted to using fuel-saving and fast-burst technologies had a higher overall cost of electricity than cases using firm low-carbon sources as well, “even with the most optimistic set of assumptions about future cost reductions,” Sepulveda says.

That’s true, Jenkins adds, “even when we assume, for example, that nuclear remains as expensive as it is today, and wind and solar and batteries get much cheaper.”

The authors also found that across all of the wind-solar-batteries-only cases, the cost of electricity rises rapidly as systems move toward zero emissions, but when firm power sources are also available, electricity costs increase much more gradually as emissions decline to zero.

“If we decide to pursue decarbonization primarily with wind, solar, and batteries,” Jenkins says, “we are effectively ‘going all in’ and betting the planet on achieving very low costs for all of these resources,” as well as the ability to build out continental-scale high-voltage transmission lines and to induce much more flexible electricity demand.

In contrast, “an electricity system that uses firm low-carbon resources together with solar, wind, and storage can achieve zero emissions with only modest increases in cost even under pessimistic assumptions about how cheap these carbon-free resources become or our ability to unlock flexible demand or expand the grid,” says Jenkins. This shows how the addition of firm low-carbon resources “is an effective hedging strategy that reduces both the cost and risk” for fully decarbonizing power systems, he says.

Even though a fully carbon-free electricity supply is years away in most regions, it is important to do this analysis today, Sepulveda says, because decisions made now about power plant construction, research investments, or climate policies have impacts that can last for decades.

“If we don’t start now” in developing and deploying the widest range of carbon-free alternatives, he says, “that could substantially reduce the likelihood of getting to zero emissions.”

David Victor, a professor of international relations at the University of California at San Diego, who was not involved in this study, says, “After decades of ignoring the problem of climate change, finally policymakers are grappling with how they might make deep cuts in emissions. This new paper in Joule shows that deep decarbonization must include a big role for reliable, firm sources of electric power. The study, one of the few rigorous numerical analyses of how the grid might actually operate with low-emission technologies, offers some sobering news for policymakers who think they can decarbonize the economy with wind and solar alone.”

The research received support from the MIT Energy Initiative, the Martin Family Trust, and the Chilean Navy.

MIT researchers 3-D print colloidal crystals – a route to scaling up optical sensors, color displays, and light-guided electronics



3-D-printed colloidal crystals viewed under a light microscope. Image: Felice Frankel

Technique could be used to scale-up self-assembled materials for use as optical sensors, color displays, and light-guided electronics.

MIT engineers have united the principles of self-assembly and 3-D printing using a new technique, which they highlight today in the journal Advanced Materials.

Using their direct-write colloidal assembly process, the researchers can build centimeter-high crystals, each made from billions of individual colloids, defined as particles that are between 1 nanometer and 1 micrometer across.

“If you blew up each particle to the size of a soccer ball, it would be like stacking a whole lot of soccer balls to make something as tall as a skyscraper,” says study co-author Alvin Tan, a graduate student in MIT’s Department of Materials Science and Engineering. “That’s what we’re doing at the nanoscale.”

The researchers found a way to print colloids such as polymer nanoparticles in highly ordered arrangements, similar to the atomic structures in crystals. They printed various structures, such as tiny towers and helices, that interact with light in specific ways depending on the size of the individual particles within each structure.

Nanoparticles dispensed from a needle onto a rotating stage, creating a helical crystal containing billions of nanoparticles. (Credit: Alvin Tan)

The team sees the 3-D printing technique as a new way to build self-assembled materials that leverage the novel properties of nanocrystals at larger scales, for applications such as optical sensors, color displays, and light-guided electronics.

“If you could 3-D print a circuit that manipulates photons instead of electrons, that could pave the way for future applications in light-based computing, which manipulates light instead of electricity so that devices can be faster and more energy efficient,” Tan says.

Tan’s co-authors are graduate student Justin Beroz, assistant professor of mechanical engineering Mathias Kolle, and associate professor of mechanical engineering A. John Hart.

Out of the fog

Colloids are any large molecules or small particles, typically measuring between 1 nanometer and 1 micrometer in diameter, that are suspended in a liquid or gas. Common examples of colloids are fog, which is made up of soot and other ultrafine particles dispersed in air, and whipped cream, which is a suspension of air bubbles in heavy cream. The particles in these everyday colloids are completely random in their size and the ways in which they are dispersed through the solution.

If uniformly sized colloidal particles are driven together via evaporation of their liquid solvent, causing them to assemble into ordered crystals, it is possible to create structures that, as a whole, exhibit unique optical, chemical, and mechanical properties. These crystals can exhibit properties similar to interesting structures in nature, such as the iridescent cells in butterfly wings, and the microscopic, skeletal fibers in sea sponges.

So far, scientists have developed techniques to evaporate and assemble colloidal particles into thin films to form displays that filter light and create colors based on the size and arrangement of the individual particles. But until now, such colloidal assemblies have been limited to thin films and other planar structures.

“For the first time, we’ve shown that it’s possible to build macroscale self-assembled colloidal materials, and we expect this technique can build any 3-D shape, and be applied to an incredible variety of materials,” says Hart, the senior author of the paper.

Building a particle bridge

The researchers created tiny three-dimensional towers of colloidal particles using a custom-built 3-D-printing apparatus consisting of a glass syringe and needle, mounted above two heated aluminum plates. The needle passes through a hole in the top plate and dispenses a colloid solution onto a substrate attached to the bottom plate.

The team evenly heats both aluminum plates so that as the needle dispenses the colloid solution, the liquid slowly evaporates, leaving only the particles. The bottom plate can be rotated and moved up and down to manipulate the shape of the overall structure, similar to how you might move a bowl under a soft ice cream dispenser to create twists or swirls.

Beroz says that as the colloid solution is pushed through the needle, the liquid acts as a bridge, or mold, for the particles in the solution. The particles “rain down” through the liquid, forming a structure in the shape of the liquid stream. After the liquid evaporates, surface tension between the particles holds them in place, in an ordered configuration.

As a first demonstration of their colloid printing technique, the team worked with solutions of polystyrene particles in water, and created centimeter-high towers and helices. Each of these structures contains 3 billion particles. In subsequent trials, they tested solutions containing different sizes of polystyrene particles and were able to print towers that reflected specific colors, depending on the individual particles’ size.
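
A quick back-of-envelope check makes the "3 billion particles" figure plausible. The tower diameter and particle size below are assumptions for illustration, not dimensions reported by the team.

```python
import math

# Order-of-magnitude check on billions of colloids per printed tower.
# Tower diameter and particle size are assumed values for illustration.
tower_height      = 1.0e-2   # m (centimeter-high, per the article)
tower_diameter    = 200e-6   # m (assumed, needle-scale)
particle_diameter = 500e-9   # m (assumed polystyrene colloid)
packing_fraction  = 0.74     # ideal close-packed crystal

tower_vol = math.pi * (tower_diameter / 2) ** 2 * tower_height
particle_vol = math.pi / 6 * particle_diameter ** 3
print(f"~{packing_fraction * tower_vol / particle_vol:.1e} particles")
# -> ~3.6e+09, i.e. billions per tower, as the article states
```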

“By changing the size of these particles, you drastically change the color of the structure,” Beroz says. “It’s due to the way the particles are assembled, in this periodic, ordered way, and the interference of light as it interacts with particles at this scale. We’re essentially 3-D-printing crystals.”
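
The size-to-color relationship Beroz describes follows the standard Bragg-Snell relation for a colloidal crystal. The sketch below assumes dried polystyrene spheres in air, packed face-centered cubic; those material choices and index values are assumptions for illustration, not the team's exact samples.

```python
import math

# Why particle size sets the reflected color: an ordered colloidal
# crystal acts as a Bragg reflector. Standard Bragg-Snell relation for
# an fcc (111) surface at normal incidence; polystyrene-in-air and the
# refractive indices are assumed for illustration.

def peak_wavelength_nm(particle_diameter_nm: float,
                       n_particle: float = 1.59,  # polystyrene
                       n_medium: float = 1.00,    # air (dried crystal)
                       fill: float = 0.74) -> float:
    d111 = particle_diameter_nm * math.sqrt(2.0 / 3.0)   # fcc (111) spacing
    n_eff = math.sqrt(fill * n_particle**2 + (1 - fill) * n_medium**2)
    return 2.0 * d111 * n_eff   # first-order Bragg peak

for d in (200, 230, 270):
    print(f"{d} nm spheres -> reflectance peak near {peak_wavelength_nm(d):.0f} nm")
# smaller spheres reflect blue-green, larger spheres reflect red
```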

The team also experimented with more exotic colloidal particles, namely silica and gold nanoparticles, which can exhibit unique optical and electronic properties. They printed millimeter-tall towers made from 200-nanometer diameter silica nanoparticles, and 80-nanometer gold nanoparticles, each of which reflected light in different ways.

“There are a lot of things you can do with different kinds of particles ranging from conductive metal particles to semiconducting quantum dots, which we are looking into,” Tan says. “Combining them into different crystal structures and forming them into different geometries for novel device architectures, I think that would be very effective in fields including sensing, energy storage, and photonics.”

This work was supported, in part, by the National Science Foundation, the Singapore Defense Science Organization Postgraduate Fellowship, and the National Defense Science and Engineering Graduate Fellowship Program.

 

MIT: Fish-eye lens may entangle pairs of atoms – a promising vehicle for the building blocks needed to design quantum computers



James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges.

Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

“We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

“This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

A circular path

Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the familiar optical effect when a straw is placed in a glass half full of water: because the water is optically much denser than the air above it, light suddenly moves more slowly, bending as it crosses into the water and creating an image that makes the straw look disjointed.
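
The abrupt bending at the water’s surface follows Snell’s law, which relates the refractive indices n_1 and n_2 of the two media to the angles \theta_1 and \theta_2 that a light ray makes with the surface normal:

n_1 \sin\theta_1 = n_2 \sin\theta_2

For air (n_1 \approx 1.00) and water (n_2 \approx 1.33), a ray crossing into the water bends toward the normal, which is why the submerged half of the straw appears displaced.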

In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that the lens curves light rather than bending it abruptly, guiding it in perfect circles within the lens.

In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released into the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

“None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

Leonhardt, in reporting his results, briefly raised the question of whether the fish-eye lens’ single-point focus might be useful for precisely entangling pairs of atoms at opposite ends of the lens.

“Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

Playing photon ping-pong

To investigate the quantum potential of the fish-eye lens, the researchers modeled the simplest possible system: two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon as it traveled through the lens and calculated the state of both atoms, and their energy levels, at each point in time.

They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

“The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
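
The moment Perczel describes, when the photon is shared equally, corresponds in standard quantum-optics notation (a textbook form, not a formula quoted from the paper) to the two atoms occupying a maximally entangled Bell state:

|\psi\rangle = \frac{1}{\sqrt{2}} \left( |e\rangle_1 |g\rangle_2 + e^{i\phi} |g\rangle_1 |e\rangle_2 \right)

where |e\rangle and |g\rangle are an atom’s excited and ground states, the subscripts label the two atoms, and \phi is a relative phase set by the exchange. Measuring either atom then immediately determines the state of the other, however far apart the two sit in the lens.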

Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

“If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

“You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

Fishy secrets

In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

“We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”
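
The bound at issue is the familiar diffraction limit. In its usual textbook form (standard optics, not a result of this paper), the smallest spot to which a lens can focus light is roughly

\Delta x \approx \frac{\lambda}{2 n \sin\theta}

where \lambda is the wavelength, n is the refractive index at the focus, and \theta is the half-angle of the converging rays. Perfect imaging would require beating this bound; the team’s quantum calculation instead found a recollection spot comparable to \lambda, consistent with the limit holding.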

Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

“The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

MIT: Introducing the latest in textiles: Soft hardware


For the first time, researchers from MIT and AFFOA have produced fibers with embedded electronics that are so flexible they can be woven into soft fabrics and made into wearable clothing. (Courtesy of the researchers)

Researchers incorporate optoelectronic diodes into fibers and weave them into washable fabrics.

The latest development in textiles and fibers is a kind of soft hardware that you can wear: cloth that has electronic devices built right into it.

Researchers at MIT have now embedded high-speed optoelectronic semiconductor devices, including light-emitting diodes (LEDs) and diode photodetectors, within fibers that were then woven at Inman Mills, in South Carolina, into soft, washable fabrics and made into communication systems. This marks the achievement of a long-sought goal of creating “smart” fabrics by incorporating semiconductor devices — the key ingredient of modern electronics — which until now were the missing piece for making fabrics with sophisticated functionality.

This discovery, the researchers say, could unleash a new “Moore’s Law” for fibers — in other words, a progression in which the capabilities of fibers would grow exponentially over time, just as the capabilities of microchips have grown over decades.

The findings are described this week in the journal Nature in a paper by former MIT graduate student Michael Rein; his research advisor Yoel Fink, MIT professor of materials science and electrical engineering and CEO of AFFOA (Advanced Functional Fabrics of America); along with a team from MIT, AFFOA, Inman Mills, EPFL in Lausanne, Switzerland, and Lincoln Laboratory.

A spool of fine, soft fiber made using the new process shows the embedded LEDs turning on and off to demonstrate their functionality. The team has used similar fibers to transmit music to detector fibers, which work even when underwater. (Courtesy of the researchers)

Optical fibers have traditionally been produced by making a cylindrical object called a “preform,” which is essentially a scaled-up model of the fiber, and then heating it. The softened material is then drawn downward under tension, and the resulting fiber is collected on a spool.

The key breakthrough for producing these new fibers was to add light-emitting semiconductor diodes the size of a grain of sand, and a pair of copper wires a fraction of a hair’s width, to the preform. When heated in a furnace during the fiber-drawing process, the polymer preform partially liquefied, forming a long fiber with the diodes lined up along its center and connected by the copper wires.

In this case, the solid components were two types of electrical diodes made using standard microchip technology: light-emitting diodes (LEDs) and photosensing diodes. “Both the devices and the wires maintain their dimensions while everything shrinks around them” in the drawing process, Rein says. The resulting fibers were then woven into fabrics, which were laundered 10 times to demonstrate their practicality as possible material for clothing.

“This approach adds a new insight into the process of making fibers,” says Rein, who was the paper’s lead author and developed the concept that led to the new process. “Instead of drawing the material all together in a liquid state, we mixed in devices in particulate form, together with thin metal wires.”

One of the advantages of incorporating function into the fiber material itself is that the resulting fiber is inherently waterproof. To demonstrate this, the team placed some of the photodetecting fibers inside a fish tank. A lamp outside the aquarium transmitted music (appropriately, Handel’s “Water Music”) through the water to the fibers in the form of rapid optical signals. The fibers in the tank converted the light pulses — so rapid that the light appears steady to the naked eye — to electrical signals, which were then converted into music. The fibers survived in the water for weeks.
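
As a rough illustration of the signal chain described above (not the team’s hardware or code; every name and parameter here is hypothetical), a minimal Python sketch of intensity-modulating an audio waveform onto light and recovering it at a photodetector might look like this:

import numpy as np

# Hypothetical illustration only -- not from the MIT/AFFOA paper.
SAMPLE_RATE = 44_100  # audio samples per second

def encode_to_light(audio):
    """Map audio samples in [-1, 1] to LED intensity levels in [0, 1]."""
    return (audio + 1.0) / 2.0

def photodiode_receive(light, noise_std=0.01):
    """Model the detector fiber: light intensity -> photocurrent, plus small noise."""
    rng = np.random.default_rng(seed=0)
    return light + rng.normal(0.0, noise_std, size=light.shape)

def decode_to_audio(current):
    """Invert the mapping to recover the audio waveform."""
    return np.clip(current, 0.0, 1.0) * 2.0 - 1.0

# A 440 Hz test tone stands in for "Water Music."
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
tone = 0.8 * np.sin(2 * np.pi * 440 * t)
recovered = decode_to_audio(photodiode_receive(encode_to_light(tone)))
print(f"max reconstruction error: {np.max(np.abs(recovered - tone)):.3f}")

In the real system the optical pulses run far faster than the eye can follow, which is why the transmitting lamp appears steady even while carrying the music.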

Though the principle sounds simple, making it work consistently, and making sure that the fibers could be manufactured reliably and in quantity, has been a long and difficult process. Staff at the Advanced Functional Fabrics of America Institute, led by Jason Cox and Chia-Chun Chung, developed the pathways to increasing yield, throughput, and overall reliability, making these fibers ready for transitioning to industry. At the same time, Marty Ellis from Inman Mills developed techniques for weaving these fibers into fabrics using a conventional industrial manufacturing-scale loom.

“This paper describes a scalable path for incorporating semiconductor devices into fibers. We are anticipating the emergence of a ‘Moore’s law’ analog in fibers in the years ahead,” Fink says. “It is already allowing us to expand the fundamental capabilities of fabrics to encompass communications, lighting, physiological monitoring, and more. In the years ahead fabrics will deliver value-added services and will no longer just be selected for aesthetics and comfort.”

He says that the first commercial products incorporating this technology will be reaching the marketplace as early as next year — an extraordinarily short progression from laboratory research to commercialization. Such rapid lab-to-market development was a key part of the reason for creating an academic-industry-government collaborative such as AFFOA in the first place, he says. These initial applications will be specialized products involving communications and safety. “It’s going to be the first fabric communication system. We are right now in the process of transitioning the technology to domestic manufacturers and industry at an unprecedented speed and scale,” he says.

In addition to commercial applications, Fink says the U.S. Department of Defense — one of AFFOA’s major supporters — “is exploring applications of these ideas to our women and men in uniform.”

Beyond communications, the fibers could potentially have significant applications in the biomedical field, the researchers say. For example, devices using such fibers might be used to make a wristband that could measure pulse or blood oxygen levels, or be woven into a bandage to continuously monitor the healing process.

The research was supported in part by the MIT Materials Research Science and Engineering Center (MRSEC) through the MRSEC Program of the National Science Foundation, by the U.S. Army Research Laboratory and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies. This work was also supported by the Assistant Secretary of Defense for Research and Engineering.