DNA Nanotechnology Tools: From Design to Applications: Current Opportunities and Collaborations – Wyss Institute – Harvard University


Suite of DNA nanotechnology devices engineered to overcome specific bottlenecks in the development of new therapies, diagnostics, and understanding of molecular structures

Lead Inventors

William Shih, Wesley Wong

Advantages

  • DNA as building blocks
  • Broad applications
  • Low cost with big potential
DNA Nanotechnology Tools: From Design to Applications

DNA nanostructures, with their potential for cell and tissue permeability, biocompatibility, and high programmability at the nanoscale, are promising candidates as new types of drug delivery vehicles, highly specific diagnostic devices, and tools to decipher how biomolecules dynamically change their shapes and interact with each other and with candidate drugs. Wyss Institute researchers are providing a suite of diverse, multifunctional DNA nanotechnology tools with unique capabilities and potential for a broad range of clinical and biomedical research areas.

DNA nanotechnological devices for therapeutic drug delivery

DNA nanostructures have the potential to be widely used to transport and present a variety of biologically active molecules, such as drugs and immune-enhancing antigens and adjuvants, to target cells and tissues in the human body.

DNA origami as high-precision delivery components of cancer vaccines


The Wyss Institute has developed cancer vaccines to improve immunotherapies. These approaches use implantable or injectable biomaterial-based scaffolds that present tumor-specific antigens together with biomolecules that attract dendritic cells (DCs) into the scaffold and activate them, so that after their release the DCs can orchestrate anti-tumor T cell responses against tumors carrying the same antigens. To be activated most effectively, DCs likely need to encounter tumor antigens and immune-boosting CpG adjuvant molecules at particular ratios (stoichiometries) and configurations that register with the density and distribution of receptor molecules on their cell surface.

Specifically developed DNA origami, programmed to assemble into rigid square-lattice blocks that co-present tumor antigens and adjuvants to DCs within biomaterial scaffolds with nanoscale precision, have the potential to boost the efficacy of therapeutic cancer vaccines and can be further functionalized with anti-cancer drugs.

Chemical modification strategy to protect drug-delivering DNA nanostructures


DNA nanostructures such as self-assembling DNA origami are promising vehicles for the delivery of drugs and diagnostics. They can be flexibly functionalized with small molecule and protein drugs, as well as features that facilitate their delivery to specific target cells and tissues. However, their potential is hampered by their limited stability in the body’s tissues and blood. To help fulfill the extraordinary promise of DNA nanostructures, Wyss researchers developed an easy, effective and scalable chemical cross-linking approach that can provide DNA nanostructures with the stability they need as effective vehicles for drugs and diagnostics.

In two simple, cost-effective steps, the Wyss approach first uses a small-molecule, unobtrusive neutralizing agent, PEG-oligolysine, which carries multiple positive charges, to cover DNA origami structures. In contrast to commonly used Mg2+ ions, which each neutralize only two negative charges in DNA structures, PEG-oligolysine covers multiple negative charges at once, forming a stable “electrostatic net” that increases the stability of DNA nanostructures about 400-fold. Then, by applying a chemical cross-linking reagent known as glutaraldehyde, additional stabilizing bonds are introduced into the electrostatic net, which increases the stability of DNA nanostructures by another 250-fold, extending their half-life into a range compatible with a broad range of clinical applications.
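
For a rough sense of the combined effect, here is a plain arithmetic check that simply multiplies the two reported fold-increases; treating them as multiplicative is an illustrative assumption, not a claim from the study:

```python
# Illustrative arithmetic only: assumes the two reported stability gains multiply.
peg_oligolysine_fold = 400   # reported stabilization from the PEG-oligolysine "electrostatic net"
glutaraldehyde_fold = 250    # additional stabilization from glutaraldehyde cross-linking

combined_fold = peg_oligolysine_fold * glutaraldehyde_fold
print(f"Combined stabilization: ~{combined_fold:,}-fold")  # ~100,000-fold under this assumption
```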

DNA nanotechnological devices as ultrasensitive diagnostic and analytical tools

The generation of detectable DNA nanostructures in response to disease- or pathogen-specific nucleic acids, in principle, offers a means for highly effective biomarker detection in diverse samples. A single binding event of a synthetic oligonucleotide to a target nucleic acid can nucleate the cooperative assembly of smaller synthetic DNA units, such as DNA tiles or bricks, into much larger structures that can then be visualized in simple laboratory assays. However, a central obstacle to these approaches is the occurrence of (1) non-specific binding and (2) non-specific nucleation events in the absence of a specific target nucleic acid, which can lead to false-positive results. Wyss DNA nanotechnologists have developed two separately applicable but combinable solutions to these problems.

Digital counting of biomarker molecules with DNA nanoswitch catenanes


To enable the initial detection (binding) of biomarkers with ultra-high sensitivity and specificity, Wyss researchers have developed a type of DNA nanoswitch that, designed as a larger catenane (from the Latin catena, meaning chain), is assembled from mechanically interlocked ring-shaped substructures with specific functionalities that together enable the detection and counting of single biomarker molecules. In the “DNA Nanoswitch Catenane” structure, both ends of a longer synthetic DNA strand are linked to two antibody fragments that each specifically bind different parts of the same biomarker molecule of interest, allowing for high target specificity and sensitivity.

This bridging event causes the strand to close into a “host ring,” which is interlocked at different regions with different “guest rings.” Closing of the host ring switches the guest rings into a configuration that allows the synthesis of a new DNA strand. The newly synthesized diagnostic strand can then be unambiguously detected as a single digital molecule count, while disrupting the antibody fragment/biomarker complex starts a new biomarker counting cycle. Both the target binding specificity and the synthesis of a target-specific DNA strand also enable the combination of multiple DNA nanoswitch catenanes to simultaneously count different biomarker molecules in a single multiplexed reaction.

For ultrasensitive diagnostics, it is desirable to have the fastest amplification and the lowest rate of spurious nucleation. DNA nanotechnology approaches have the potential to deliver this in an enzyme-free, low-cost manner.

WILLIAM SHIH

A rapid amplification platform for diverse biomarkers


A rapid, low-cost and enzyme-free detection and amplification platform avoids non-specific nucleation and amplification and allows the self-assembly of much larger micron-scale structures from a single seed in just minutes. The method, called “Crisscross Nanoseed Detection” enables the ultra-cooperative assembly of ribbons starting from a single biomarker binding event. The micron-scale structures are densely woven from single-stranded “DNA slats,” whereby an inbound slat snakes over and under six or more previously captured slats on a growing ribbon end in a “crisscross” manner, forming weak but highly-specific interactions with its interacting DNA slats. The nucleation of the assembly process is strictly target-seed specific and the assembly can be carried out in a one-step reaction in about 15 minutes without the addition of further reagents, and over a broad range of temperatures. Using standard laboratory equipment, the assembled structures then can be rapidly visualized or otherwise detected, for example, using high-throughput fluorescence plate reader assays.


CURRENT OPPORTUNITY – STARTUP

Crisscross Nanoseed Detection: Nanotechnology-Powered Infectious Disease Diagnostics

Enzyme-free DNA nanotechnology for rapid, ultrasensitive, and low-cost detection of infectious disease biomarkers with broad accessibility in point-of-care settings.

The DNA assembly process in the Crisscross Nanoseed Detection method can also be linked to the action of DNA nanoswitch catenanes that highly specifically detect a biomarker molecule leading to preservation of a molecular record. Each surviving record can nucleate the assembly of a crisscross nanostructure, combining high-specificity binding with amplification for biomarker detection.

Wyss researchers are currently developing the approach as a multiplexable, low-cost diagnostic for SARS-CoV-2, the virus that causes COVID-19, and other pathogens, which could give accurate results faster and at lower cost than currently used techniques.

Nanoscale devices for determining the structure and identity of proteins at the single-molecule level

The ability to identify and quantify proteins from trace biological samples would have a profound impact on both basic research and clinical practice, from monitoring changes in protein expression within individual cells, to enabling the discovery of new biomarkers of disease. Furthermore, the ability to also determine their structures and interactions would open up new avenues for drug discovery and characterization. Over the past decades, developments in DNA analysis and sequencing have unquestionably revolutionized medicine – yet equivalent developments for protein analysis have remained a challenge. While methods such as mass spectrometry for protein identification, and cryoEM for structure determination have rapidly advanced, challenges remain regarding resolution and the ability to work with trace heterogeneous samples.

To help meet this challenge, researchers at the Wyss Institute have developed a new approach that combines DNA nanotechnology with single-molecule manipulation to enable the structural identification and analysis of proteins and other macromolecules. “DNA Nanoswitch Calipers” (DNCs) offer a high-resolution approach to “fingerprint” proteins by measuring distances and determining geometries within single proteins in solution. DNCs are nanodevices designed to measure distances between DNA handles that have been attached to target molecules of interest. DNC states can be actuated and read out using single-molecule force spectroscopy, enabling multiple absolute distance measurements to be made on each single molecule.

DNCs could be widely adapted to advance research in different areas, including structural biology, proteomics, diagnostics and drug discovery.

All technologies are in development and available for industry collaborations.

MIT: Biotech labs are using AI Inspired by DALL-E to Invent New Drugs


The explosion in AI models like OpenAI’s DALL-E 2—programs trained to generate pictures of almost anything you ask for—has sent ripples through the creative industries, from fashion to filmmaking, by providing weird and wonderful images on demand.

The same technology behind these programs is also making a splash in biotech labs, which have started using this type of generative AI, known as a diffusion model, to conjure up designs for new types of protein never seen in nature.


Today, two labs separately announced programs that use diffusion models to generate designs for novel proteins with more precision than ever before. Generate Biomedicines, a Boston-based startup, revealed a program called Chroma, which the company describes as the “DALL-E 2 of biology.”

At the same time, a team at the University of Washington led by biologist David Baker has built a similar program called RoseTTAFold Diffusion. In a preprint paper posted online today, Baker and his colleagues show that their model can generate precise designs for novel proteins that can then be brought to life in the lab. “We’re generating proteins with really no similarity to existing ones,” says Brian Trippe, one of the co-developers of RoseTTAFold.

These protein generators can be directed to produce designs for proteins with specific properties, such as shape or size or function. In effect, this makes it possible to come up with new proteins to do particular jobs on demand. Researchers hope that this will eventually lead to the development of new and more effective drugs. “We can discover in minutes what took evolution millions of years,” says Gevorg Grigoryan, CTO of Generate Biomedicines.

“What is notable about this work is the generation of proteins according to desired constraints,” says Ava Amini, a biophysicist at Microsoft Research in Cambridge, Massachusetts. 

Symmetrical protein structures generated by Chroma

Proteins are the fundamental building blocks of living systems. In animals, they digest food, contract muscles, detect light, drive the immune system, and so much more. When people get sick, proteins play a part. 

Proteins are thus prime targets for drugs. And many of today’s newest drugs are protein based themselves. “Nature uses proteins for essentially everything,” says Grigoryan. “The promise that offers for therapeutic interventions is really immense.”

But drug designers currently have to draw on an ingredient list made up of natural proteins. The goal of protein generation is to extend that list with a nearly infinite pool of computer-designed ones.

Computational techniques for designing proteins are not new. But previous approaches have been slow and not great at designing large proteins or protein complexes—molecular machines made up of multiple proteins coupled together. And such proteins are often crucial for treating diseases.  

A protein structure generated by RoseTTAFold Diffusion (left) and the same structure created in the lab (right)

The two programs announced today are also not the first use of diffusion models for protein generation. A handful of studies in the last few months from Amini and others have shown that diffusion models are a promising technique, but these were proof-of-concept prototypes. Chroma and RoseTTAFold Diffusion build on this work and are the first full-fledged programs that can produce precise designs for a wide variety of proteins.

Namrata Anand, who co-developed one of the first diffusion models for protein generation in May 2022, thinks the big significance of Chroma and RoseTTAFold Diffusion is that they have taken the technique and supersized it, training on more data and more computers. “It may be fair to say that this is more like DALL-E because of how they’ve scaled things up,” she says.

Diffusion models are neural networks trained to remove “noise”—random perturbations added to data—from their input. Given a random mess of pixels, a diffusion model will try to turn it into a recognizable image.
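
As a rough illustration of that idea (a toy sketch only, not the actual code behind Chroma or RoseTTAFold Diffusion), a diffusion sampler starts from noise and repeatedly subtracts the noise that a trained network predicts; the `predict_noise` function below is a placeholder standing in for such a network:

```python
import numpy as np

def predict_noise(x, t):
    """Stand-in for a trained denoising network; a real model would estimate the noise in x."""
    return np.zeros_like(x)  # placeholder: always guesses "no noise"

def toy_reverse_diffusion(shape=(64,), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)                  # start from pure random noise
    for t in reversed(range(steps)):
        eps_hat = predict_noise(x, t)           # the network's estimate of the noise at step t
        x = x - eps_hat                         # remove the estimated noise
        if t > 0:
            x += 0.01 * rng.normal(size=shape)  # small stochastic kick, as in typical samplers
    return x                                    # with a trained network, this would be a clean sample

sample = toy_reverse_diffusion()
print(sample.shape)
```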

In Chroma, noise is added by unraveling the amino acid chains that a protein is made from. Given a random clump of these chains, Chroma tries to put them together to form a protein. Guided by specified constraints on what the result should look like, Chroma can generate novel proteins with specific properties.

Baker’s team takes a different approach, though the end results are similar. Its diffusion model starts with an even more scrambled structure. Another key difference is that RoseTTAFold Diffusion uses information about how the pieces of a protein fit together provided by a separate neural network trained to predict protein structure (as DeepMind’s AlphaFold does). This guides the overall generative process. 

Generate Biomedicines and Baker’s team both show off an impressive array of results. They are able to generate proteins with multiple degrees of symmetry, including proteins that are circular, triangular, or hexagonal. To illustrate the versatility of their program, Generate Biomedicines generated proteins shaped like the 26 letters of the Latin alphabet and the numerals 0 to 10. Both teams can also generate pieces of proteins, matching new parts to existing structures.


Most of these demonstrated structures would serve no purpose in practice. But because a protein’s function is determined by its shape, being able to generate different structures on demand is crucial.

Generating strange designs on a computer is one thing. But the goal is to turn these designs into real proteins. To test whether Chroma produced designs that could be made, Generate Biomedicines took the sequences for some of its designs—the amino acid strings that make up the protein—and ran them through another AI program. They found that 55% of them were predicted to fold into the structure generated by Chroma, which suggests that these are designs for viable proteins.

Baker’s team ran a similar test. But Baker and his colleagues have gone a lot further than Generate Biomedicines in evaluating their model. They have created some of RoseTTAFold Diffusion’s designs in their lab. (Generate Biomedicines says that it is also doing lab tests but is not yet ready to share results.) “This is more than just proof of concept,” says Trippe. “We’re actually using this to make really great proteins.”


For Baker, the headline result is the generation of a new protein that attaches to the parathyroid hormone, which controls calcium levels in the blood. “We basically gave the model the hormone and nothing else and told it to make a protein that binds to it,” he says. When they tested the novel protein in the lab, they found that it attached to the hormone more tightly than anything that could have been generated using other computational methods—and more tightly than existing drugs. “It came up with this protein design out of thin air,” says Baker. 

Grigoryan acknowledges that inventing new proteins is just the first step of many. “We’re a drug company,” he says. “At the end of the day what matters is whether we can make medicines that work or not.” Protein-based drugs need to be manufactured in large numbers, then tested in the lab and finally in humans. This can take years. But he thinks that his company and others will find ways to speed those steps up as well.

“The rate of scientific progress comes in fits and starts,” says Baker. “But right now we’re in the middle of what can only be called a technological revolution.”

From seawater to drinking water – With just a push of a button!


MIT Researchers build a portable desalination unit that generates clear, clean drinking water without the need for filters or high-pressure pumps.

MIT researchers have developed a portable desalination unit, weighing less than 10 kilograms, that can remove particles and salts to generate drinking water.

The suitcase-sized device, which requires less power to operate than a cell phone charger, can also be driven by a small, portable solar panel, which can be purchased online for around $50. It automatically generates drinking water that exceeds World Health Organization quality standards. The technology is packaged into a user-friendly device that runs with the push of one button.

Unlike other portable desalination units that require water to pass through filters, this device utilizes electrical power to remove particles from drinking water. Eliminating the need for replacement filters greatly reduces the long-term maintenance requirements.

This could enable the unit to be deployed in remote and severely resource-limited areas, such as communities on small islands or aboard seafaring cargo ships. It could also be used to aid refugees fleeing natural disasters or by soldiers carrying out long-term military operations.

“This is really the culmination of a 10-year journey that I and my group have been on. We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean, that was a really meaningful and rewarding experience for me,” says senior author Jongyoon Han, a professor of electrical engineering and computer science and of biological engineering, and a member of the Research Laboratory of Electronics (RLE).

Joining Han on the paper are first author Junghyo Yoon, a research scientist in RLE; Hyukjin J. Kwon, a former postdoc; SungKu Kang, a postdoc at Northeastern University; and Eric Brack of the U.S. Army Combat Capabilities Development Command (DEVCOM). The research has been published online in Environmental Science and Technology.


Filter-free technology

Commercially available portable desalination units typically require high-pressure pumps to push water through filters, which are very difficult to miniaturize without compromising the energy-efficiency of the device, explains Yoon.

Instead, their unit relies on a technique called ion concentration polarization (ICP), which was pioneered by Han’s group more than 10 years ago. Rather than filtering water, the ICP process applies an electrical field to membranes placed above and below a channel of water. The membranes repel positively or negatively charged particles — including salt molecules, bacteria, and viruses — as they flow past. The charged particles are funneled into a second stream of water that is eventually discharged.

The process removes both dissolved and suspended solids, allowing clean water to pass through the channel. Since it only requires a low-pressure pump, ICP uses less energy than other techniques.

But ICP does not always remove all the salts floating in the middle of the channel. So the researchers incorporated a second process, known as electrodialysis, to remove remaining salt ions.

Yoon and Kang used machine learning to find the ideal combination of ICP and electrodialysis modules. The optimal setup includes a two-stage ICP process, with water flowing through six modules in the first stage then through three in the second stage, followed by a single electrodialysis process. This minimized energy usage while ensuring the process remains self-cleaning.
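
One way to picture the reported configuration (the names and data structure below are hypothetical, purely for illustration) is as a staged pipeline: two ICP stages of six and three modules, followed by a single electrodialysis step:

```python
from dataclasses import dataclass

@dataclass
class DesalStage:
    process: str   # "ICP" (ion concentration polarization) or "ED" (electrodialysis)
    modules: int   # number of modules the water flows through in this stage

# Optimal setup reported by the researchers: two ICP stages, then one electrodialysis step.
pipeline = [
    DesalStage("ICP", 6),   # first-stage ICP: six modules
    DesalStage("ICP", 3),   # second-stage ICP: three modules
    DesalStage("ED", 1),    # final electrodialysis step to remove remaining salt ions
]

for stage in pipeline:
    print(f"{stage.process}: {stage.modules} module(s)")
```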

“While it is true that some charged particles could be captured on the ion exchange membrane, if they get trapped, we just reverse the polarity of the electric field and the charged particles can be easily removed,” Yoon explains.

They shrunk and stacked the ICP and electrodialysis modules to improve their energy efficiency and enable them to fit inside a portable device. The researchers designed the device for nonexperts, with just one button to launch the automatic desalination and purification process. Once the salinity level and the number of particles decrease to specific thresholds, the device notifies the user that the water is drinkable.

The researchers also created a smartphone app that can control the unit wirelessly and report real-time data on power consumption and water salinity.

Beach tests

After running lab experiments using water with different salinity and turbidity (cloudiness) levels, they field-tested the device at Boston’s Carson Beach.

Yoon and Kwon set the box near the shore and tossed the feed tube into the water. In about half an hour, the device had filled a plastic drinking cup with clear, drinkable water.

“It was successful even in its first run, which was quite exciting and surprising. But I think the main reason we were successful is the accumulation of all these little advances that we made along the way,” Han says.

The resulting water exceeded World Health Organization quality guidelines, and the unit reduced the amount of suspended solids by at least a factor of 10. Their prototype generates drinking water at a rate of 0.3 liters per hour, and requires only 20 watts of power per liter.
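
A quick sanity check on those throughput figures, using plain arithmetic; note that reading the power figure as roughly 20 watt-hours per liter is an interpretation of the article's ambiguous phrasing, not a number confirmed here:

```python
rate_l_per_hr = 0.3          # reported production rate of the prototype
hours_per_day = 24
energy_per_liter_wh = 20     # assumption: reading "20 watts of power per liter" as ~20 Wh/L

daily_output_l = rate_l_per_hr * hours_per_day            # 7.2 L/day of drinking water
daily_energy_wh = daily_output_l * energy_per_liter_wh    # ~144 Wh/day under that reading

print(f"Daily output: {daily_output_l:.1f} L")
print(f"Daily energy (assumed reading): {daily_energy_wh:.0f} Wh")
```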

“Right now, we are pushing our research to scale up that production rate,” Yoon says.

One of the biggest challenges of designing the portable system was engineering an intuitive device that could be used by anyone, Han says.

Yoon hopes to make the device more user-friendly and improve its energy efficiency and production rate through a startup he plans to launch to commercialize the technology.

In the lab, Han wants to apply the lessons he’s learned over the past decade to water-quality issues that go beyond desalination, such as rapidly detecting contaminants in drinking water.

“This is definitely an exciting project, and I am proud of the progress we have made so far, but there is still a lot of work to do,” he says.

For example, while “development of portable systems using electro-membrane processes is an original and exciting direction in off-grid, small-scale desalination,” the effects of fouling, especially if the water has high turbidity, could significantly increase maintenance requirements and energy costs, notes Nidal Hilal, professor of engineering and director of the New York University Abu Dhabi Water research center, who was not involved with this research.

“Another limitation is the use of expensive materials,” he adds. “It would be interesting to see similar systems with low-cost materials in place.”

The research was funded, in part, by the DEVCOM Soldier Center, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the Experimental AI Postdoc Fellowship Program of Northeastern University, and the Roux AI Institute.

MIT Creates Waterless Cleaning System to Remove Dust on Solar Panels: Maintains Peak Efficiency and Service Longevity


The accumulation of dust on solar panels or mirrors is already a significant issue – it can reduce the output of photovoltaic panels. So regular cleaning is essential for such installations to maintain their peak efficiency. However, cleaning solar panels is currently estimated to use billions of gallons of water per year, and attempts at waterless cleaning are labor-intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Robots can be useful; recently, a Belgian startup developed HELIOS, an automated cleaning service for solar panels.

Now, a team of researchers at MIT has developed a waterless cleaning method to remove dust on solar installations in water-limited regions, improving overall efficiency.

The waterless, no-contact system uses electrostatic repulsion to cause dust particles to detach without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel‘s surface. The electrical charge it releases repels dust particles from the panels. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel.

The team designed and fabricated an electrostatic dust removal system for a lab-scale solar panel. The glass plate on top of the solar panel was coated with a 5-nm-thick transparent and conductive layer of aluminum-doped zinc oxide (AZO) using atomic layer deposition (ALD) and formed the bottom electrode. The top electrode is mobile to avoid shading and moves along the panel during cleaning with a linear guide stepper motor mechanism. The system can be operated at a voltage of around 12V and can recover 95% of the lost power after cleaning for particle sizes greater than around 30 μm.

“We performed experiments at varying humidities from 5% to 95%,” says MIT graduate student Sreedath Panat. “As long as the ambient humidity is greater than 30%, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

By eliminating the dependency on trucked-in water, by eliminating the build-up of dust that can contain corrosive compounds, and by lowering overall operational costs, such cleaning systems have the potential to significantly improve the overall efficiency and reliability of solar installations, says Kripa Varanasi.

MIT’s Solar-Powered Desalination System More Efficient, Less Expensive


A team of researchers at MIT and in China has developed a new solar-powered desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

Many attempts at solar desalination systems rely on some kind of wick to draw the saline water through the device, but these wicks are vulnerable to salt accumulation and relatively difficult to clean. The MIT team focused on developing a wick-free system instead.

The system consists of several layers, with dark material at the top to absorb the sun’s heat, then a thin layer of water above a perforated layer of material, sitting atop a deep reservoir of salty water such as a tank or a pond. The researchers determined the optimal size for the holes drilled through the perforated material, which in their tests was made of polyurethane. At 2.5 millimeters across, these holes can be easily made using commonly available waterjets.

In this schematic, a confined water layer above the floating thermal insulation enables simultaneous thermal localization and salt rejection. Credit: MIT

With the help of the dark material, the thin layer of water is heated until it evaporates; the vapor can then be condensed onto a sloped surface and collected as pure water. The holes in the perforated material are large enough to allow for natural convective circulation between the warmer upper layer of water and the colder reservoir below. That circulation naturally draws the salt from the thin layer above down into the much larger body of water below, where it becomes well diluted and no longer a problem.

During the experiments, the team says their new technique achieved over 80% efficiency in converting solar energy to water vapor, even at salt concentrations of up to 20% by weight. Their test apparatus operated for a week with no signs of any salt accumulation.

Researchers test two identical outdoor experimental setups placed next to each other. Credit: MIT

So far, the team has proven the concept using small benchtop devices, so the next step will be starting to scale up to devices that could have practical applications. According to the researchers, their system with just 1 square meter (about a square yard) of collecting area should be sufficient to provide a family’s daily needs for drinking water. They calculated that the necessary materials for a 1-square-meter device would cost only about $4.
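
A rough order-of-magnitude check on the one-square-meter claim, using textbook values that are assumptions here rather than figures from the study (about 1,000 W/m² of peak sunlight and a latent heat of vaporization of roughly 2.26 MJ/kg):

```python
solar_flux_w_per_m2 = 1000      # assumed peak solar irradiance (textbook value, not from the paper)
efficiency = 0.80               # reported solar-to-vapor conversion efficiency
latent_heat_j_per_kg = 2.26e6   # latent heat of vaporization of water (textbook value)
area_m2 = 1.0

evap_rate_kg_per_s = efficiency * solar_flux_w_per_m2 * area_m2 / latent_heat_j_per_kg
liters_per_hour = evap_rate_kg_per_s * 3600      # ~1.3 L/h under full sun
liters_per_sunny_day = liters_per_hour * 6       # assuming ~6 hours of strong sun per day

print(f"~{liters_per_hour:.1f} L/h, ~{liters_per_sunny_day:.0f} L per sunny day from 1 m^2")
```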

Off-Grid Solar Desalination

The team says the first applications are likely to be providing safe water in remote off-grid locations or for disaster relief after hurricanes, earthquakes, or other disruptions of normal water supplies. MIT graduate student Lenan Zhang adds that “if we can concentrate the sunlight a little bit, we could use this passive device to generate high-temperature steam to do medical sterilization” for off-grid rural areas.

MIT – A New Language for Quantum Computing


While the nascent field of quantum computing can feel flashy and futuristic, quantum computers have the potential for computational breakthroughs in classically unsolvable tasks, like cryptographic and communication protocols, search, and computational physics and chemistry. Credits: Photo: Graham Carlow/IBM

Twist is an MIT-developed programming language that can describe and verify which pieces of data are entangled to prevent bugs in a quantum program.

Time crystals. Microwaves. Diamonds. What do these three disparate things have in common? 

Quantum computing. Unlike traditional computers that use bits, quantum computers use qubits to encode information as zeros or ones, or both at the same time. Coupled with a cocktail of forces from quantum physics, these refrigerator-sized machines can process a whole lot of information — but they’re far from flawless. Just like our regular computers, we need to have the right programming languages to properly compute on quantum computers. 

Programming quantum computers requires awareness of something called “entanglement,” a computational multiplier for qubits of sorts, which translates to a lot of power. When two qubits are entangled, actions on one qubit can change the value of the other, even when they are physically separated, giving rise to Einstein’s characterization of “spooky action at a distance.” But that potency is equally a source of weakness. When programming, discarding one qubit without being mindful of its entanglement with another qubit can destroy the data stored in the other, jeopardizing the correctness of the program.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aimed to do some unraveling by creating their own programming language for quantum computing called Twist. Twist can describe and verify which pieces of data are entangled in a quantum program, through a language a classical programmer can understand. The language uses a concept called purity, which enforces the absence of entanglement and results in more intuitive programs, with ideally fewer bugs. For example, a programmer can use Twist to say that the temporary data generated as garbage by a program is not entangled with the program’s answer, making it safe to throw away.
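
To make the purity idea concrete, here is a small conceptual sketch in Python; it is not Twist syntax and not based on Twist’s implementation, only an illustration of the kind of bookkeeping such a language can enforce: qubits that interact join the same entanglement group, and discarding a qubit is only allowed when it is pure.

```python
class EntanglementTracker:
    """Toy bookkeeping of which qubits are entangled with which (illustrative only, not Twist)."""

    def __init__(self):
        self.groups = {}  # qubit name -> set of qubit names entangled together

    def new_qubit(self, q):
        self.groups[q] = {q}  # a fresh qubit is "pure": entangled only with itself

    def entangle(self, a, b):
        merged = self.groups[a] | self.groups[b]  # e.g. after a two-qubit gate such as CNOT
        for q in merged:
            self.groups[q] = merged

    def is_pure(self, q):
        return self.groups[q] == {q}

    def discard(self, q):
        if not self.is_pure(q):
            partners = self.groups[q] - {q}
            raise ValueError(f"unsafe discard: {q} is still entangled with {partners}")
        del self.groups[q]

tracker = EntanglementTracker()
tracker.new_qubit("temp")
tracker.new_qubit("answer")
tracker.entangle("temp", "answer")
try:
    tracker.discard("temp")  # unsafe: throwing away "temp" would corrupt "answer"
except ValueError as err:
    print(err)
```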

While the nascent field can feel a little flashy and futuristic, with images of mammoth wiry gold machines coming to mind, quantum computers have potential for computational breakthroughs in classically unsolvable tasks, like cryptographic and communication protocols, search, and computational physics and chemistry. One of the key challenges in computational sciences is dealing with the complexity of the problem and the amount of computation needed. Whereas a classical digital computer would need a very large exponential number of bits to be able to process such a simulation, a quantum computer could do it, potentially, using a very small number of qubits — if the right programs are there. 

“Our language Twist allows a developer to write safer quantum programs by explicitly stating when a qubit must not be entangled with another,” says Charles Yuan, an MIT PhD student in electrical engineering and computer science and the lead author on a new paper about Twist. “Because understanding quantum programs requires understanding entanglement, we hope that Twist paves the way to languages that make the unique challenges of quantum computing more accessible to programmers.” 

Yuan wrote the paper alongside Chris McNally, a PhD student in electrical engineering and computer science who is affiliated with the MIT Research Laboratory of Electronics, as well as MIT Assistant Professor Michael Carbin. They presented the research at last week’s 2022 Symposium on Principles of Programming Languages (POPL) conference in Philadelphia.

Untangling quantum entanglement 

Imagine a wooden box that has a thousand cables protruding out from one side. You can pull any cable all the way out of the box, or push it all the way in.

After you do this for a while, the cables form a pattern of bits — zeros and ones — depending on whether they’re in or out. This box represents the memory of a classical computer. A program for this computer is a sequence of instructions for when and how to pull on the cables.

Now imagine a second, identical-looking box. This time, you tug on a cable, and see that as it emerges, a couple of other cables are pulled back inside. Clearly, inside the box, these cables are somehow entangled with each other. 

The second box is an analogy for a quantum computer, and understanding the meaning of a quantum program requires understanding the entanglement present in its data. But detecting entanglement is not straightforward. You can’t see into the wooden box, so the best you can do is try pulling on cables and carefully reason about which are entangled. In the same way, quantum programmers today have to reason about entanglement by hand. This is where the design of Twist helps massage some of those interlaced pieces. 

The scientists designed Twist to be expressive enough to write out programs for well-known quantum algorithms and identify bugs in their implementations. To evaluate Twist’s design, they modified the programs to introduce some kind of bug that would be relatively subtle for a human programmer to detect, and showed that Twist could automatically identify the bugs and reject the programs.

They also measured how well the programs performed in practice in terms of runtime; Twist programs incurred less than 4 percent overhead compared with existing quantum programming techniques.

For those wary of quantum’s “seedy” reputation in its potential to break encryption systems, Yuan says it’s still not very well known to what extent quantum computers will actually be able to reach their performance promises in practice. “There’s a lot of research that’s going on in post-quantum cryptography, which exists because even quantum computing is not all-powerful. So far, there’s a very specific set of applications in which people have developed algorithms and techniques where a quantum computer can outperform classical computers.” 

An important next step is using Twist to create higher-level quantum programming languages. Most quantum programming languages today still resemble assembly language, stringing together low-level operations, without mindfulness towards things like data types and functions, and what’s typical in classical software engineering.

“Quantum computers are error-prone and difficult to program. By introducing and reasoning about the ‘purity’ of program code, Twist takes a big step towards making quantum programming easier by guaranteeing that the quantum bits in a pure piece of code cannot be altered by bits not in that code,” says Fred Chong, the Seymour Goodman Professor of Computer Science at the University of Chicago and chief scientist at Super.tech. 

The work was supported, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and the Office of Naval Research.

Samsung and IBM Could Break the Nanosheet Threshold in Chips With ‘Vertically Stacked Transistors’ – IBM & Samsung Indicate This Could DOUBLE Processor Performance (MIT/ NTU)


This design can either double the performance of chips or reduce power use by 85%.

In May of 2021, we brought you a breakthrough in semiconductor materials that saw the creation of a chip that could push back the “end” of Moore’s Law and further widen the capability gap between China and U.S.-adjacent efforts in the field of 1-nanometer chips.

Now, IBM and Samsung claim they have also made a breakthrough in semiconductor design, revealing a new concept for stacking transistors vertically on a chip, according to a press release. It’s called Vertical Transport Field Effect Transistors (VTFET), and it sees transistors lie perpendicular to one another while current flows vertically.

That earlier breakthrough was accomplished in a joint effort involving the Massachusetts Institute of Technology (MIT), National Taiwan University (NTU), and the Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s largest contract manufacturer of advanced chips. At its core was a process that employs the semi-metal bismuth to allow for the manufacture of semiconductors below the 1-nanometer (nm) level.

This is a drastic change from today’s models where transistors lie flat on the surface of the silicon, and then electric current flows from side to side. By doing this, IBM and Samsung hope to extend Moore’s Law beyond the nanosheet threshold and waste less energy.

What will that look like in terms of processors? Well, IBM and Samsung state that these features will double the performance or use 85 percent less power than chips designed with FinFET transistors. But these two firms are not the only ones testing this type of technology.

Intel is also experimenting with chips stacked above each other, as reported by Reuters. “By stacking the devices directly on top of each other, we’re clearly saving area,” Paul Fischer, director and senior principal engineer of Intel’s Components Research Group told Reuters in an interview. “We’re reducing interconnect lengths and really saving energy, making this not only more cost efficient, but also better performing.”

All these advances are great for our cell phones, which could one day go weeks without charging, and for energy-intensive activities such as crypto mining. But then, we might also find ourselves in a Jevons paradox, which occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand. Isn’t that what’s going on with cryptocurrencies in a way?

The Rapid Cost Decline of Lithium-Ion Batteries – Why?


Lithium-ion batteries, those marvels of lightweight power that have made possible today’s age of handheld electronics and electric vehicles, have plunged in cost since their introduction three decades ago at a rate similar to the drop in solar panel prices, as documented by a study published last March.

But what brought about such an astonishing cost decline, of about 97 percent?
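
For context, some plain arithmetic on that headline number; the 30-year span used below is an assumption based on the batteries’ commercial introduction roughly three decades ago:

```python
import math

cost_ratio = 1 - 0.97   # a 97% decline leaves ~3% of the original cost
years = 30              # assumption: roughly three decades since commercial introduction

annual_decline = 1 - math.exp(math.log(cost_ratio) / years)
print(f"Implied average decline: ~{annual_decline:.1%} per year")  # ~11% per year
```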

Some of the researchers behind that earlier study have now analyzed what accounted for the extraordinary savings. They found that by far the biggest factor was work on research and development, particularly in chemistry and materials science. This outweighed the gains achieved through economies of scale, though that turned out to be the second-largest category of reductions.

The new findings are being published in the journal Energy and Environmental Science, in a paper by MIT postdoc Micah Ziegler, recent graduate student Juhyun Song Ph.D. ’19, and Jessika Trancik, a professor in MIT’s Institute for Data, Systems and Society.

The findings could be useful for policymakers and planners to help guide spending priorities in order to continue the pathway toward ever-lower costs for this and other crucial energy storage technologies, according to Trancik. Their work suggests that there is still considerable room for further improvement in electrochemical battery technologies, she says.

The analysis required digging through a variety of sources, since much of the relevant information consists of closely held proprietary business data. “The data collection effort was extensive,” Ziegler says. “We looked at academic articles, industry and government reports, press releases, and specification sheets. We even looked at some legal filings that came out. We had to piece together data from many different sources to get a sense of what was happening.” He says they collected “about 15,000 qualitative and quantitative data points, across 1,000 individual records from approximately 280 references.”

Data from the earliest times are hardest to access and can have the greatest uncertainties, Trancik says, but by comparing different data sources from the same period they have attempted to account for these uncertainties.

Overall, she says, “we estimate that the majority of the cost decline, more than 50 percent, came from research-and-development-related activities.” That included both private sector and government-funded research and development, and “the vast majority” of that cost decline within that R&D category came from chemistry and materials research.

That was an interesting finding, she says, because “there were so many variables that people were working on through very different kinds of efforts,” including the design of the battery cells themselves, their manufacturing systems, supply chains, and so on. “The cost improvement emerged from a diverse set of efforts and many people, and not from the work of only a few individuals.”

The findings about the importance of investment in R&D were especially significant, Ziegler says, because much of this investment happened after lithium-ion battery technology was commercialized, a stage at which some analysts thought the research contribution would become less significant. Over roughly a 20-year period starting five years after the batteries’ introduction in the early 1990s, he says, “most of the cost reduction still came from R&D. The R&D contribution didn’t end when commercialization began. In fact, it was still the biggest contributor to cost reduction.”

The study took advantage of an analytical approach that Trancik and her team initially developed to analyze the similarly precipitous drop in costs of silicon solar panels over the last few decades. They also applied the approach to understand the rising costs of nuclear energy. “This is really getting at the fundamental mechanisms of technological change,” she says. “And we can also develop these models looking forward in time, which allows us to uncover the levers that people could use to improve the technology in the future.”

One advantage of the methodology Trancik and her colleagues have developed, she says, is that it helps to sort out the relative importance of different factors when many variables are changing all at once, which typically happens as a technology improves. “It’s not simply adding up the cost effects of these variables,” she says, “because many of these variables affect many different cost components. There’s this kind of intricate web of dependencies.” But the team’s methodology makes it possible to “look at how that overall cost change can be attributed to those variables, by essentially mapping out that network of dependencies,” she says.

This can help provide guidance on public spending, private investments, and other incentives. “What are all the things that different decision makers could do?” she asks. “What decisions do they have agency over so that they could improve the technology, which is important in the case of low-carbon technologies, where we’re looking for solutions to climate change and we have limited time and limited resources? The new approach allows us to potentially be a bit more intentional about where we make those investments of time and money.”

David Chandler, MIT News Office

More information: Determinants of lithium-ion battery technology cost decline, Energy and Environmental Science (2021). DOI: 10.1039/d1ee01313k

Journal information: Energy and Environmental Science

Provided by Massachusetts Institute of Technology

Making the case for hydrogen in a zero-carbon economy


Hydrogen Power
As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

“As we move to more and more renewable penetration, this intermittency will make a greater impact on the grid,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

Low- and zero-carbon alternatives to greenhouse-gas-emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen-fired power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic model, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

“Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries—even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

Adding up the costs

California serves as a stellar paradigm for a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

“We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
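
For readers unfamiliar with the metric, LCOE is the ratio of discounted lifetime costs to discounted lifetime electricity generation. A minimal generic sketch, with made-up inputs that are not the study’s numbers:

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of electricity: discounted total cost per discounted MWh generated."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy  # $/MWh

# Hypothetical numbers, purely to show the calculation:
value = lcoe(capex=50e6, annual_opex=2e6, annual_mwh=40_000,
             lifetime_years=20, discount_rate=0.07)
print(f"Example LCOE: ${value:.0f}/MWh")
```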

Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”


Credit: DOI: 10.1016/j.apenergy.2021.117314

A tool for energy investors

When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

“As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”



An Alternative to Kevlar – MIT and Caltech Create Nanotech Carbon Materials – Can withstand supersonic microparticle impacts


So, nanotechnology. “Great Things from Small Things”. Really amazing stuff … really.

So amazing, in fact, that researchers and engineers at Caltech, MIT, and ETH Zurich have discovered how to make materials lighter than Kevlar that can withstand supersonic microparticle impacts.

What does all this mean for material science? A whole lot, if you ask me. I mean, this is literally going to change the way we produce shielding of any kind, especially for law enforcement agencies. Hang on a second, I’m getting a little ahead of myself here.

A new study by engineers at the above-mentioned institutes discovered that “nano-architected” materials are showing insane promise in use as armor. What are “nano-architected” materials? Simply put, they’re materials and structures that are designed from “precisely patterned nanoscale structures,” meaning that the entire thing is a pre-meditated and arranged structure; what you see is exactly what was desired. 

Not only this, but the material is built entirely from nanoscale carbon struts. Arranged much like rings in chainmail, these carbon struts are combined layer upon layer to create the structure you see in the main photo. So yeah, medieval knights had it right all along; they just needed more layers of something that already weighed upwards of 40 lbs for a full body suit.

So now that the researchers had a structure, what to do with it? Why not shoot things at it? Well, like any scientists, pardon me, “researchers,” who have been cooped up in a lab for too long, that’s just what they did, documenting and recording all the results in the process.

To do this, researchers shot laser-induced microparticles up to 1,100 meters per second at the nanostructure. A quick calculation and you’re looking at a particle that’s traveling at 3,608 feet per second. Want to know more? That’s 2,460 miles per hour! 

Two test structures were arranged, one with slightly looser struts and a second with a tighter formation. The tighter formation kept the particle from tearing through; instead, the particle embedded itself in the structure.

If that’s not enough, and this is a big one, once the particle was removed and the underlying structure examined, researchers found that the surrounding structure remained intact. Yes, this means it can be reused.

The overall result? They found that shooting this structure with microparticles at supersonic speeds proved to offer higher impact resistance and absorption than Kevlar, steel, aluminum, and a range of other impact-absorbing materials. The images in the gallery even show that particles didn’t make it thirty percent of the way through the structure; I counted about six to seven deformed layers.

To get an idea of where this sort of tech will be taking things, co-author of the paper, Julia R. Greer of Caltech, whose lab led the material’s fabrication, says that “The knowledge from this work… could provide design principles for ultra-lightweight impact resistant materials [for use in] efficient armor materials, protective coatings, and blast-resistant shields desirable in defense and space applications.” 

Imagine for a second what this means once these structures are created on a larger scale. It will change the face of armor, be it destined for human or machine use, coatings, and downright clothing.

I’m not saying that we can suddenly stop bullets while walking down the street, but it won’t be long until funding for large-scale production begins, and what I just said may become a reality. Maybe not for all people at first, but the military will definitely have their eye on this tech.

Submitted By Cristian Curmei