MIT: New Optical Imaging System could be Deployed to find Tiny Tumors and Detect Cancer Earlier – “A Game Changing Method”


 


Near-infrared technology pinpoints fluorescent probes deep within living tissue and may be used to detect cancer earlier. MIT researchers have devised a way to simultaneously image in multiple wavelengths of near-infrared light, allowing them to determine the depth of particles emitting different wavelengths. Image courtesy of the researchers.

Many types of cancer could be more easily treated if they were detected at an earlier stage. MIT researchers have now developed an imaging system, named “DOLPHIN,” which could enable them to find tiny tumors, as small as a couple of hundred cells, deep within the body. 

In a new study, the researchers used their imaging system, which relies on near-infrared light, to track a 0.1-millimeter fluorescent probe through the digestive tract of a living mouse. They also showed that they can detect a signal to a tissue depth of 8 centimeters, far deeper than any existing biomedical optical imaging technique.

The researchers hope to adapt their imaging technology for early diagnosis of ovarian and other cancers that are currently difficult to detect until late stages.

“We want to be able to find cancer much earlier,” says Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science at MIT, a member of the Koch Institute for Integrative Cancer Research, and the newly appointed head of MIT’s Department of Biological Engineering. “Our goal is to find tiny tumors, and do so in a noninvasive way.”

Belcher is the senior author of the study, which appears in the March 7 issue of Scientific Reports. Xiangnan Dang, a former MIT postdoc, and Neelkanth Bardhan, a Mazumdar-Shaw International Oncology Fellow, are the lead authors of the study. Other authors include research scientists Jifa Qi and Ngozi Eze, former postdoc Li Gu, postdoc Ching-Wei Lin, graduate student Swati Kataria, and Paula Hammond, the David H. Koch Professor of Engineering, head of MIT’s Department of Chemical Engineering, and a member of the Koch Institute.

Deeper imaging

Existing methods for imaging tumors all have limitations that prevent them from being useful for early cancer diagnosis. Most have a tradeoff between resolution and depth of imaging, and none of the optical imaging techniques can image deeper than about 3 centimeters into tissue. Commonly used scans such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI) can image through the whole body; however, they can’t reliably identify tumors until they reach about 1 centimeter in size.

Belcher’s lab set out to develop new optical methods for cancer imaging several years ago, when they joined the Koch Institute. They wanted to develop technology that could image very small groups of cells deep within tissue and do so without any kind of radioactive labeling.

Near-infrared light, which has wavelengths from 900 to 1700 nanometers, is well-suited to tissue imaging because light with longer wavelengths doesn’t scatter as much when it strikes objects, which allows it to penetrate deeper into the tissue. To take advantage of this, the researchers used an approach known as hyperspectral imaging, which enables simultaneous imaging in multiple wavelengths of light.

The researchers tested their system with a variety of near-infrared fluorescent light-emitting probes, mainly sodium yttrium fluoride nanoparticles that have rare earth elements such as erbium, holmium, or praseodymium added through a process called doping. Depending on the choice of the doping element, each of these particles emits near-infrared fluorescent light of different wavelengths.

Using algorithms that they developed, the researchers can analyze the data from the hyperspectral scan to identify the sources of fluorescent light of different wavelengths, which allows them to determine the location of a particular probe. By further analyzing light from narrower wavelength bands within the entire near-IR spectrum, the researchers can also determine the depth at which a probe is located. The researchers call their system “DOLPHIN”, which stands for “Detection of Optically Luminescent Probes using Hyperspectral and diffuse Imaging in Near-infrared.”
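To illustrate the depth-from-wavelength idea described above, here is a minimal sketch of how a probe’s depth could be recovered from the ratio of two near-IR bands under a simple Beer-Lambert attenuation model. The wavelengths and attenuation coefficients below are illustrative assumptions, not values from the paper, and the actual DOLPHIN algorithms are more sophisticated.

```python
import math

# Hypothetical per-wavelength tissue attenuation coefficients (1/cm).
# Real values vary by tissue type; these numbers are illustrative only.
MU = {"1000nm": 0.9, "1300nm": 0.5}

def estimate_depth(i_1000, i_1300, i0_ratio=1.0):
    """Estimate probe depth (cm) from the intensity ratio of two NIR bands.

    Assumes Beer-Lambert attenuation: I(lam) = I0(lam) * exp(-mu(lam) * d).
    Taking the ratio of two bands cancels the unknown emitted power and
    leaves depth as the only unknown:
        d = ln(i0_ratio / (i_1000 / i_1300)) / (mu_1000 - mu_1300)
    """
    measured_ratio = i_1000 / i_1300
    return math.log(i0_ratio / measured_ratio) / (MU["1000nm"] - MU["1300nm"])

# Simulate a probe 5 cm deep: the shorter wavelength is attenuated more.
d = 5.0
i_1000 = math.exp(-MU["1000nm"] * d)
i_1300 = math.exp(-MU["1300nm"] * d)
print(round(estimate_depth(i_1000, i_1300), 2))  # recovers 5.0
```

Because the shorter-wavelength band fades faster with depth, the band ratio alone encodes how deep the emitter sits, which is the intuition behind analyzing narrower wavelength bands across the near-IR spectrum.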

To demonstrate the potential usefulness of this system, the researchers tracked a 0.1-millimeter-sized cluster of fluorescent nanoparticles that was swallowed and then traveled through the digestive tract of a living mouse. These probes could be modified so that they target and fluorescently label specific cancer cells.

“In terms of practical applications, this technique would allow us to non-invasively track a 0.1-millimeter-sized fluorescently-labeled tumor, which is a cluster of about a few hundred cells. To our knowledge, no one has been able to do this previously using optical imaging techniques,” Bardhan says.

Earlier detection

The researchers also demonstrated that they could inject fluorescent particles into the body of a mouse or a rat and then image through the entire animal, which requires imaging to a depth of about 4 centimeters, to determine where the particles ended up. And in tests with human tissue-mimics and animal tissue, they were able to locate the probes to a depth of up to 8 centimeters, depending on the type of tissue.

Guosong Hong, an assistant professor of materials science and engineering at Stanford University, described the new method as “game-changing.”

“This is really amazing work,” says Hong, who was not involved in the research. “For the first time, fluorescent imaging has approached the penetration depth of CT and MRI, while preserving its naturally high resolution, making it suitable to scan the entire human body.”


This kind of system could be used with any fluorescent probe that emits light in the near-infrared spectrum, including some that are already FDA-approved, the researchers say. They are also working on adapting the imaging system so that it could reveal intrinsic differences in tissue contrast, including signatures of tumor cells, without any kind of fluorescent label.

In ongoing work, they are using a related version of this imaging system to try to detect ovarian tumors at an early stage. Ovarian cancer is usually diagnosed very late because there is no easy way to detect it when the tumors are still small.

“Ovarian cancer is a terrible disease, and it gets diagnosed so late because the symptoms are so nondescript,” Belcher says. “We want a way to follow recurrence of the tumors, and eventually a way to find and follow early tumors when they first go down the path to cancer or metastasis. This is one of the first steps along the way in terms of developing this technology.”

The researchers have also begun working on adapting this type of imaging to detect other types of cancer such as pancreatic cancer, brain cancer, and melanoma.

The research was funded by the Koch Institute Frontier Research Program, the Marble Center for Cancer Nanomedicine, the Koch Institute Support (core) Grant from the National Cancer Institute, the NCI Center for Cancer Nanotechnology Excellence, and the Bridge Project.


EPFL and MIT Researchers Discover the ‘Holy Grail’ of Nanowire Production



EPFL researchers have found a way to control and standardize the production of nanowires on silicon surfaces. Credit: Ecole Polytechnique Federale de Lausanne (EPFL)

Nanowires have the potential to revolutionize the technology around us. Measuring just 5-100 nanometers in diameter (a nanometer is a millionth of a millimeter), these tiny, needle-shaped crystalline structures can alter how electricity or light passes through them.

They can emit, concentrate and absorb light and could therefore be used to add optical functionalities to electronic chips. They could, for example, make it possible to generate lasers directly on silicon and to integrate single-photon emitters for coding purposes. They could even be applied in photovoltaics to improve how sunlight is converted into electrical energy.

Up until now, it was impossible to reproduce the process of growing nanowires on silicon semiconductors – there was no way to repeatedly produce homogeneous nanowires in specific positions.

But researchers from EPFL’s Laboratory of Semiconductor Materials, run by Anna Fontcuberta i Morral, together with colleagues from MIT and the IOFFE Institute, have come up with a way of growing nanowire networks in a highly controlled and fully reproducible manner. The key was to understand what happens at the onset of nanowire growth, which goes against currently accepted theories. Their work has been published in Nature Communications.

“We think that this discovery will make it possible to realistically integrate a series of nanowires on silicon substrates,” says Fontcuberta i Morral. “Up to now, these nanowires had to be grown individually, and the process couldn’t be reproduced.”

The holy grail of nanowire production
Two different configurations of the droplet within the opening (hole fully filled and partially filled), and below, an illustration of GaAs crystals forming a full ring or a step underneath the large and small gallium droplets. Credit: Ecole Polytechnique Federale de Lausanne (EPFL)

 

Getting the right ratio

The standard process for producing nanowires is to make tiny holes in silicon monoxide and fill them with a nanodrop of liquid gallium. This substance then solidifies when it comes into contact with arsenic. But with this process, the substance tends to harden at the corners of the nanoholes, which means that the angle at which the nanowires will grow can’t be predicted. The search was on for a way to produce homogeneous nanowires and control their position.

Research aimed at controlling nanowire growth has tended to focus on the diameter of the hole, but this approach has not paid off. Now EPFL researchers have shown that by altering the diameter-to-height ratio of the hole, they can perfectly control how the nanowires grow. At the right ratio, the substance will solidify in a ring around the edge of the hole, which prevents the nanowires from growing at a non-perpendicular angle. And the researchers’ process should work for all types of nanowires.
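As a toy illustration of the diameter-to-height idea, the sketch below classifies a nanohole by its aspect ratio. The threshold is purely hypothetical, not a value from the Nature Communications paper; it only captures the qualitative claim that the ratio, rather than the diameter alone, determines whether the solidified ring forms at the rim.

```python
# Hypothetical aspect-ratio threshold, for illustration only.
RING_FORMING_RATIO = 2.0

def predicts_vertical_growth(diameter_nm, height_nm):
    """Toy model: a hole that is deep relative to its diameter lets the
    material solidify as a ring around the rim, seeding perpendicular
    nanowire growth; a wide, shallow hole lets it harden at a corner,
    making the growth angle unpredictable."""
    return diameter_nm / height_nm <= RING_FORMING_RATIO

print(predicts_vertical_growth(50, 40))   # deep, narrow hole -> True
print(predicts_vertical_growth(120, 30))  # wide, shallow hole -> False
```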

“It’s kind of like growing a plant. They need water and sunlight, but you have to get the quantities right,” says Fontcuberta i Morral.

This new production technique will be a boon for nanowire research, and further samples should soon be developed.


More information: J. Vukajlovic-Plestina et al., “Fundamental aspects to localize self-catalyzed III-V nanowires on silicon,” Nature Communications (2019). DOI: 10.1038/s41467-019-08807-9

 

A Path to Cheaper Flexible Solar Cells – Researchers at Georgia Tech and MIT are Developing the Potential of Perovskite-Based Solar Cells


A researcher at Georgia Tech holds a perovskite-based solar cell, which is flexible and lighter than silicon-based versions. Credit: Rob Felt, Georgia Tech

There’s a lot to like about perovskite-based solar cells. They are simple and cheap to produce, offer flexibility that could unlock a wide new range of installation methods and places, and in recent years have reached energy efficiencies approaching those of traditional silicon-based cells.

But figuring out how to produce perovskite-based energy devices that last longer than a couple of months has been a challenge.

Now researchers from Georgia Institute of Technology, University of California San Diego and Massachusetts Institute of Technology have reported new findings about perovskite solar cells that could lead the way to devices that perform better.

“Perovskite solar cells offer a lot of potential advantages because they are extremely lightweight and can be made with flexible plastic substrates,” said Juan-Pablo Correa-Baena, an assistant professor in the Georgia Tech School of Materials Science and Engineering. “To be able to compete in the marketplace with silicon-based solar cells, however, they need to be more efficient.”

In a study that was published February 8 in the journal Science and was sponsored by the U.S. Department of Energy and the National Science Foundation, the researchers described in greater detail the mechanisms of how adding alkali metal to the traditional perovskites leads to better performance.

“Perovskites could really change the game in solar,” said David Fenning, a professor of nanoengineering at the University of California San Diego. “They have the potential to reduce costs without giving up performance. But there’s still a lot to learn fundamentally about these materials.”

To understand perovskite crystals, it’s helpful to think of their crystalline structure as a triad. One part of the triad is typically formed from the element lead. The second is typically made up of an organic component such as methylammonium, and the third is often a halide such as bromine or iodine.

In recent years, researchers have focused on testing different recipes to achieve better efficiencies, such as adding iodine and bromine to the lead component of the structure. Later, they tried substituting cesium and rubidium into the part of the perovskite typically occupied by organic molecules.

“We knew from earlier work that adding cesium and rubidium to a mixed bromine and iodine lead perovskite leads to better stability and higher performance,” Correa-Baena said.

But little was known about why adding those alkali metals improved performance of the perovskites.

To understand exactly why that seemed to work, the researchers used high-intensity X-ray mapping to examine the perovskites at the nanoscale.


“By looking at the composition within the perovskite material, we can see how each individual element plays a role in improving the performance of the device,” said Yanqi (Grace) Luo, a nanoengineering PhD student at UC San Diego.

They discovered that when the cesium and rubidium were added to the mixed bromine and iodine lead perovskite, it caused the bromine and iodine to mix together more homogeneously, resulting in up to 2 percent higher conversion efficiency than the materials without these additives.

“We found that uniformity in the chemistry and structure is what helps a perovskite solar cell operate at its fullest potential,” Fenning said. “Any heterogeneity in that backbone is like a weak link in the chain.”

Even so, the researchers also observed that while adding rubidium or cesium caused the bromine and iodine to become more homogeneous, the alkali metals themselves remained fairly clustered within their own cation sites, creating inactive “dead zones” in the solar cell that produce no current.

“This was surprising,” Fenning said. “Having these dead zones would typically kill a solar cell. In other materials, they act like black holes that suck in electrons from other regions and never let them go, so you lose current and voltage.

“But in these perovskites, we saw that the dead zones around rubidium and cesium weren’t too detrimental to solar cell performance, though there was some current loss,” Fenning said. “This shows how robust these materials are but also that there’s even more opportunity for improvement.”

The findings add to the understanding of how the perovskite-based devices work at the nanoscale and could lay the groundwork for future improvements.

“These materials promise to be very cost effective and high performing, which is pretty much what we need to make sure photovoltaic panels are deployed widely,” Correa-Baena said. “We want to try to offset issues of climate change, so the idea is to have photovoltaic cells that are as cheap as possible.”

Story Source:

Materials provided by Georgia Institute of Technology. Note: Content may be edited for style and length.

MIT: Unleashing perovskites’ potential for solar cells


Solar cells made of perovskite have great promise, in part because they can easily be made on flexible substrates, like this experimental cell. Image: Ken Richardson

New results show how varying the recipe could bring these materials closer to commercialization.

Perovskites — a broad category of compounds that share a certain crystal structure — have attracted a great deal of attention as potential new solar-cell materials because of their low cost, flexibility, and relatively easy manufacturing process.

But much remains unknown about the details of their structure and the effects of substituting different metals or other elements within the material.

Conventional solar cells made of silicon must be processed at temperatures above 1,400 degrees Celsius, using expensive equipment that limits their potential for production scaleup.

In contrast, perovskites can be processed in a liquid solution at temperatures as low as 100 degrees, using inexpensive equipment. What’s more, perovskites can be deposited on a variety of substrates, including flexible plastics, enabling a variety of new uses that would be impossible with thicker, stiffer silicon wafers.

Now, researchers have been able to decipher a key aspect of the behavior of perovskites made with different formulations:

With certain additives there is a kind of “sweet spot”: up to that point, greater amounts enhance performance, and beyond it, further amounts begin to degrade it.

The findings are detailed this week in the journal Science, in a paper by former MIT postdoc Juan-Pablo Correa-Baena, MIT professors Tonio Buonassisi and Moungi Bawendi, and 18 others at MIT, the University of California at San Diego, and other institutions.

Perovskite solar cells are thought to have great potential, and new understanding of how changes in composition affect their behavior could help to make them practical. Image: Ken Richardson

Perovskites are a family of compounds that share a three-part crystal structure. Each part can be made from any of a number of different elements or compounds — leading to a very broad range of possible formulations. Buonassisi compares designing a new perovskite to ordering from a menu, picking one (or more) from each of column A, column B, and (by convention) column X.

“You can mix and match,” he says, but until now all the variations could only be studied by trial and error, since researchers had no basic understanding of what was going on in the material.

Previous research by a team from the Swiss École Polytechnique Fédérale de Lausanne, in which Correa-Baena participated, had found that adding certain alkali metals to the perovskite mix could improve the material’s efficiency at converting solar energy to electricity, from about 19 percent to about 22 percent.

But at the time there was no explanation for this improvement, and no understanding of exactly what these metals were doing inside the compound. “Very little was known about how the microstructure affects the performance,” Buonassisi says.

Now, detailed mapping using high-resolution synchrotron nano-X-ray fluorescence measurements, which can probe the material with a beam just one-thousandth the width of a hair, has revealed the details of the process, with potential clues for how to improve the material’s performance even further.

It turns out that adding these alkali metals, such as cesium or rubidium, to the perovskite compound helps some of the other constituents to mix together more smoothly. As the team describes it, these additives help to “homogenize” the mixture, making it conduct electricity more easily and thus improving its efficiency as a solar cell.

But, they found, that only works up to a certain point. Beyond a certain concentration, these added metals clump together, forming regions that interfere with the material’s conductivity and partly counteract the initial advantage. In between, for any given formulation of these complex compounds, is the sweet spot that provides the best performance, they found.
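The countervailing effects described above can be caricatured with a toy model: a roughly linear homogenization benefit competing with a clustering penalty that grows faster at higher concentrations, giving a single interior optimum. The coefficients below are illustrative assumptions, chosen only so the numbers echo the 19-to-22-percent range quoted earlier; they are not fitted to the study.

```python
# Toy "sweet spot" model: benefit linear in additive fraction x,
# clustering penalty quadratic in x. All coefficients are hypothetical.
BASE = 19.0      # percent efficiency without additives (figure quoted above)
GAIN = 120.0     # illustrative homogenization benefit per unit fraction
PENALTY = 1200.0 # illustrative clustering penalty coefficient

def efficiency(x):
    """Modeled conversion efficiency (%) at alkali-metal fraction x."""
    return BASE + GAIN * x - PENALTY * x ** 2

# The maximum of BASE + GAIN*x - PENALTY*x^2 sits at x* = GAIN / (2*PENALTY).
x_star = GAIN / (2 * PENALTY)
print(x_star, round(efficiency(x_star), 2))  # 0.05 22.0
```

Past the optimum the quadratic clumping term dominates, so adding more of the metal hurts, which is exactly the behavior the team reports.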

“It’s a big finding,” says Correa-Baena, who in January became an assistant professor of materials science and engineering at Georgia Tech.

What the researchers found, after about three years of work at MIT and with collaborators at UCSD, was “what happens when you add those alkali metals, and why the performance improves.” They were able to directly observe the changes in the composition of the material, and reveal, among other things, these countervailing effects of homogenizing and clumping.

“The idea is that, based on these findings, we now know we should be looking into similar systems, in terms of adding alkali metals or other metals,” or varying other parts of the recipe, Correa-Baena says.

While perovskites can have major benefits over conventional silicon solar cells, especially in terms of the low cost of setting up factories to produce them, they still require further work to boost their overall efficiency and improve their longevity, which lags significantly behind that of silicon cells.

Although the researchers have clarified the structural changes that take place in the perovskite material when adding different metals, and the resulting changes in performance, “we still don’t understand the chemistry behind this,” Correa-Baena says. That’s the subject of ongoing research by the team. The theoretical maximum efficiency of these perovskite solar cells is about 31 percent, according to Correa-Baena, and the best performance to date is around 23 percent, so there remains a significant margin for potential improvement.

Although it may take years for perovskites to realize their full potential, at least two companies are already in the process of setting up production lines, and they expect to begin selling their first modules within the next year or so. Some of these are small, transparent and colorful solar cells designed to be integrated into a building’s façade. “It’s already happening,” Correa-Baena says, “but there’s still work to do in making these more durable.”

Once issues of large-scale manufacturability, efficiency, and durability are addressed, Buonassisi says, perovskites could become a major player in the renewable energy industry. “If they succeed in making sustainable, high-efficiency modules while preserving the low cost of the manufacturing, that could be game-changing,” he says. “It could allow expansion of solar power much faster than we’ve seen.”

Perovskite solar cells “are now primary candidates for commercialization. Thus, providing deeper insights, as done in this work, contributes to future development,” says Michael Saliba, a senior researcher on the physics of soft matter at the University of Fribourg, Switzerland, who was not involved in this research.

Saliba adds, “This is great work that is shedding light on some of the most investigated materials. The use of synchrotron-based, novel techniques in combination with novel material engineering is of the highest quality, and is deserving of appearing in such a high-ranking journal.” He adds that work in this field “is rapidly progressing. Thus, having more detailed knowledge will be important for addressing future engineering challenges.”

The study, which included researchers at Purdue University and Argonne National Laboratory, in addition to those at MIT and UCSD, was supported by the U.S. Department of Energy, the National Science Foundation, the Skolkovo Institute of Science and Technology, and the California Energy Commission.

MIT: Optimizing solar farms with ‘Smart Drones’


As drones increasingly take on the job of inspecting growing solar farms, Raptor Maps’ software makes sense of the data they collect. Image courtesy of Raptor Maps

MIT spinoff Raptor Maps uses machine-learning software to improve the maintenance of solar panels.

As the solar industry has grown, so have some of its inefficiencies. Smart entrepreneurs see those inefficiencies as business opportunities and try to create solutions around them. Such is the nature of a maturing industry.

One of the biggest complications emerging from the industry’s breakneck growth is the maintenance of solar farms. Historically, technicians have run electrical tests on random sections of solar cells in order to identify problems. In recent years, the use of drones equipped with thermal cameras has improved the speed of data collection, but now technicians are being asked to interpret a never-ending flow of unstructured data.

That’s where Raptor Maps comes in. The company’s software analyzes imagery from drones and diagnoses problems down to the level of individual cells. The system can also estimate the costs associated with each problem it finds, allowing technicians to prioritize their work and owners to decide what’s worth fixing.

“We can enable technicians to cover 10 times the territory and pinpoint the most optimal use of their skill set on any given day,” Raptor Maps co-founder and CEO Nikhil Vadhavkar says. “We came in and said, ‘If solar is going to become the number one source of energy in the world, this process needs to be standardized and scalable.’ That’s what it takes, and our customers appreciate that approach.”

Raptor Maps processed data representing 1 percent of the world’s solar energy production in 2018, amounting to the energy generated by millions of panels around the world. And as the industry continues its upward trajectory, with solar farms expanding in size and complexity, the company’s business proposition only becomes more attractive to the people driving that growth.

Picking a path

Raptor Maps was founded by Eddie Obropta ’13 SM ’15, Forrest Meyen SM ’13 PhD ’17, and Vadhavkar, who was a PhD candidate at MIT between 2011 and 2016. The former classmates had worked together in the Human Systems Laboratory of the Department of Aeronautics and Astronautics when Vadhavkar came to them with the idea of starting a drone company in 2015.

The founders were initially focused on the agriculture industry. The plan was to build drones equipped with high-definition thermal cameras to gather data, then create a machine-learning system to gain insights on crops as they grew. While the founders began the arduous process of collecting training data, they received guidance from MIT’s Venture Mentoring Service and the Martin Trust Center. In the spring of 2015, Raptor Maps won the MIT $100K Launch competition.

But even as the company began working with the owners of two large farms, Obropta and Vadhavkar were unsure of their path to scaling the company. (Meyen left the company in 2016.) Then, in 2017, they made their software publicly available and something interesting happened.

They found that most of the people who used the system were applying it to thermal images of solar farms instead of real farms. It was a message the founders took to heart.

“Solar is similar to farming: It’s out in the open, 2-D, and it’s spread over a large area,” Obropta says. “And when you see [an anomaly] in thermal images on solar, it usually means an electrical issue or a mechanical issue — you don’t have to guess as much as in agriculture. So we decided the best use case was solar. And with a big push for clean energy and renewables, that aligned really well with what we wanted to do as a team.”

Obropta and Vadhavkar also found themselves on the right side of several long-term trends as a result of the pivot. The International Energy Agency has proposed that solar power could be the world’s largest source of electricity by 2050. But as demand grows, investors, owners, and operators of solar farms are dealing with an increasingly acute shortage of technicians to keep the panels running near peak efficiency.

Since deciding to focus on solar exclusively around the beginning of 2018, Raptor Maps has found success in the industry by releasing its standards for data collection and letting customers — or the many drone operators the company partners with — use off-the-shelf hardware to gather the data themselves. After the data is submitted to the company, the system creates a detailed map of each solar farm and pinpoints any problems it finds.

“We run analytics so we can tell you, ‘This is how many solar panels have this type of issue; this is how much the power is being affected,’” Vadhavkar says. “And we can put an estimate on how many dollars each issue costs.”
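A minimal sketch of that triage step might look like the following. The anomaly types, loss factors, electricity price, and sun hours are hypothetical stand-ins for illustration, not Raptor Maps’ actual catalog or models.

```python
# Hypothetical parameters for estimating the cost of each detected issue.
PANEL_WATTS = 350          # nameplate power per panel, illustrative
PRICE_PER_KWH = 0.10       # USD per kWh, illustrative
HOURS_PER_YEAR = 1600      # productive sun hours per year, illustrative

# Hypothetical fraction of a panel's output lost per anomaly type.
LOSS_FACTOR = {"hot_cell": 0.05, "bypass_diode": 0.33, "offline_string": 1.0}

def annual_cost_usd(anomaly, panels_affected):
    """Estimated revenue lost per year for one detected issue."""
    lost_kw = PANEL_WATTS * panels_affected * LOSS_FACTOR[anomaly] / 1000
    return lost_kw * HOURS_PER_YEAR * PRICE_PER_KWH

# Issues found in one scan: (anomaly type, number of panels affected).
issues = [("hot_cell", 12), ("bypass_diode", 4), ("offline_string", 20)]

# Sort so technicians address the costliest problems first.
ranked = sorted(issues, key=lambda i: annual_cost_usd(*i), reverse=True)
for anomaly, n in ranked:
    print(anomaly, round(annual_cost_usd(anomaly, n), 2))
```

Attaching a dollar figure to each anomaly is what lets owners decide which repairs are worth a truck roll, which is the prioritization the founders describe.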

The model allows Raptor Maps to stay lean while its software does the heavy lifting. In fact, the company’s current operations involve more servers than people.

The tiny operation belies a company that’s carved out a formidable space for itself in the solar industry. Last year, Raptor Maps processed four gigawatts’ worth of data from customers on six continents. That’s enough generating capacity to power nearly 3 million homes.

Vadhavkar says the company’s goal is to grow at least fivefold in 2019 as several large customers move to make the software a core part of their operations. The team is also working on getting its software to generate insights in real time using graphical processing units on the drone itself as part of a project with the multinational energy company Enel Green Power.

Ultimately, the data Raptor Maps processes are taking the uncertainty out of the solar industry, making it a more attractive space for investors, operators, and everyone in between.

“The growth of the industry is what drives us,” Vadhavkar says. “We’re directly seeing that what we’re doing is impacting the ability of the industry to grow faster. That’s huge. Growing the industry — but also, from the entrepreneurial side, building a profitable business while doing it — that’s always been a huge dream.”

MIT News – Continuing Progress toward Practical Fusion Energy – “The MIT Fusion Landscape”



Dennis Whyte, director of the Plasma Science and Fusion Center. Images: Gretchen Ertl

In a series of talks, researchers describe a major effort to address climate change through carbon-free power.

A year after announcing a major public-private collaboration to design a fusion reactor capable of producing more power than it consumes, researchers from MIT and the startup company Commonwealth Fusion Systems on Tuesday presented the MIT community with an update on their progress. In a series of talks, they detailed the effort’s continuing work to bring about practical fusion power — based on the reaction that provides the sun’s energy — on a faster timescale than any previous efforts.

At the event, titled “The MIT Fusion Landscape,” speakers explained why fusion power is urgently needed, and described the approach MIT and CFS are taking and how the project is taking shape. According to Dennis Whyte, head of MIT’s Plasma Science and Fusion Center (PSFC), the new project’s aim is “to try to get to fusion energy a lot faster,” by creating a prototype fusion device with a net power output within the next 15 years. This timeframe is necessary to address “the greatest challenge we have now, which is climate change.”

“Humanity is standing on the edge of a precipice right now,” warned Kerry Emanuel, the Cecil and Ida Green Professor in Earth and Planetary Sciences, who studies the impacts climate change will have on the intensity and frequency of hurricanes and other storms. Because of the existential threat posed by climate change, it is crucial to develop every possible source of carbon-free energy, and fusion power has the potential to be a major part of the solution, he said.

Emanuel countered the claims by some skeptics who say that climate has always been changing, pointing out that human civilization has developed during the last several thousand years, which has been a period of exceptional climate stability. While global sea level rose by 400 feet at the end of the last ice age, he said, that was a time when humans were essentially nomads. “A 1-meter change today, in either direction, would be very problematic for humanity,” he said, adding that expected changes in rainfall patterns could have serious impacts on access to water and food.

Only three large countries have successfully shifted their economies away from fossil fuels, he said: Sweden, Belgium, and France. And all of those did so largely on the strength of hydropower and nuclear power — and did so in only about 15 years. “We’re going to have to do whatever works,” he said, and while conventional fission-based nuclear power may be essential in the near term, in the longer term fusion power could be key to weaning the world from fossil fuels.

Andrew Lo, the Charles E. and Susan T. Harris Professor at MIT’s Sloan School of Management, said that for large projects such as the development of practical fusion power plants, new kinds of funding mechanisms may be needed, as conventional venture capitalists and other traditional sources may not be sufficient to meet their costs. “We need to get the narrative right,” he said, to make it clear to people that investments will be needed to meet the challenge. “We need to make fusion real,” which means something on the order of a billion dollars of investment in various potential approaches to maximize the odds of success, Lo said.

Katie Rae, executive director of The Engine, a program founded by MIT and designed to help spinoff companies bridge the gap between lab and commercial success, explained how that organization’s directors quickly came to unanimous agreement that the fusion project, aimed at developing a demonstration fusion device called SPARC, was worthy of the maximum investment to help bring about its transformative goals. The Engine aims to help projects whose development doesn’t fit into the 10-year expectation for a financial return that is typical of venture capital funds. Such projects require more long-range thinking — up to 18 years, in the case of the SPARC project. The goals of the project, she said, aligned perfectly with the reasons The Engine was created. “It is so central to why we exist,” she said.

Anne White, a nuclear physicist at the PSFC and the Cecil and Ida Green Associate Professor in Nuclear Engineering, explained why the SPARC concept is important for moving the field of fusion to a path that can lead directly to commercial power production. As soon as the team’s demonstration device proves that it is possible to produce more power than the device consumes — a milestone never yet achieved by a fusion device — “the narrative changes at that moment. We’ll know we are almost there,” she said.

But getting to that point has always been a daunting challenge. “It was a bit too expensive and the device was a bit too big” to move forward, until the last few years when advances in superconducting magnet technology made it possible to create more powerful magnets that could enable a smaller fusion power plant to deliver an amount of power that would have required a larger power plant with previous technology. That’s what made the new SPARC project possible, White explained.

Bob Mumgaard, who is CEO of the MIT spinoff company CFS, described the next steps the team is taking: to design and make the large superconducting magnet assemblies needed for a working fusion demonstration device. The company, which currently has 30 employees but is growing fast, is in the process of “building the strongest magnets we can build,” which in turn may find applications in other industries even as the group makes progress toward fusion power. He said within two years they should have full-scale magnets up and running.

CFS and the MIT effort are far from alone, though, Mumgaard said. There are about 20 companies actively involved in such fusion research. “This is a vibrant, evolving system,” he said. Rather than a static landscape, he said, “there’s a lot of interplay — it’s more of an ecosystem.” And MIT and CFS, with their innovative approach to designing a compact, lower-cost power plant architecture that can be built faster and more efficiently, “have changed the narrative already in that ecosystem, and that is a very exciting thing.”

 

Saving Us From AI’s Worst Case Scenarios – An Interview with MIT Professor Max Tegmark



(AI) “… Instead, the largest threat would be if it turns extremely competent: its goals may not be aligned with ours, either because it is controlled by someone who does not share our goals, or because the machine itself has power over us.”

Artificial intelligence (AI) is one of the hottest trends pursued by the private sector, academics, and government institutions. The promise of AI is to make our lives better: to have an electronic brain to complement our own, to take over menial tasks so that we can focus on higher value activities, to allow us to make better decisions in our personal and professional lives.

There is also a darker side to AI that many fear. What happens when bad actors leverage AI for harmful ends? How will we ensure that AI does not become a wedge driving the haves and have-nots further apart? And what happens when our jobs fundamentally change or disappear, given how much of what defines us comes from what we do professionally?

Max Tegmark has studied these issues intimately from his perch as a professor at MIT and as the co-founder of the Future of Life Institute. He has synthesized his own thoughts into a powerful book called Life 3.0: Being Human in the Age of Artificial Intelligence. As the title suggests, AI will redefine what it means to be human due to the scale of the changes it will bring about.


Tegmark likes the analogy of the automobile to make the case for what is necessary for AI to benefit humanity. He notes that three things are needed: an engine (the power to create value), steering (so the technology can be moved toward good rather than harmful ends), and a destination (a roadmap for how to reach a beneficial outcome). He notes that “the way to create a good future with technology is to continuously win the wisdom race. As technology grows more powerful, the wisdom with which we manage it must keep up.” He describes all of this and more in this interview.

MIT Professor and Author, Max Tegmark CREDIT: MIT 

(To listen to an unabridged podcast version of this interview, please click this link. This is the 31st interview in the Tech Influencers series. To listen to past interviews with the likes of former Mexican President Vicente Fox, Sal Khan, Sebastian Thrun, Steve Case, Craig Newmark, Stewart Butterfield, and Meg Whitman, please visit this link. To read future articles in this series, please follow me on Twitter @PeterAHigh.)

The Interview by Peter High

Peter High is President of Metis Strategy, a business and IT advisory firm. His latest book is Implementing World Class IT Strategy. He is also the author of World Class IT: Why Businesses Succeed When IT Triumphs.

 

Peter High: Congratulations on your book, Life 3.0: Being Human in the Age of Artificial Intelligence. When and where did your interest in the topic of Artificial Intelligence [AI] begin?

High: When you have described your efforts to figure out where AI might take us, you make an analogy to driving a car. First, you need the engine and the power to make AI work. Second, you need steering because AI must be steered in one direction or another. Lastly, there needs to be a destination. Can you elaborate on each of those topics, and could you give us your hypothesis as to where we are heading?

Tegmark: If you are building a rocket or a car, it would be nuts to exclusively focus on the engine’s power while ignoring how to steer it. Even if you have the steering sorted out, you are going to have trouble if you are unable to determine where you are trying to go with it. Unfortunately, I believe this is what we are doing as we continue to build more powerful technology, especially with AI. To be as ambitious as possible, we need to think about all three elements, which are the power, the steering, and the destination of the technology.

Because it is so important, I spend a great deal of time at MIT focused on steering. Along with Jaan Tallinn and several other colleagues, I co-founded the Future of Life Institute, which [focuses on] the destination. While we are making AI more powerful, it is critical to know what type of society we are aspiring to create with this technology. If society accomplishes the original goal of AI research, which is to make so-called “Artificial General Intelligence” [AGI] that can do all jobs better than humans, we have to determine what it will mean to be a human in the future. I am convinced that if we succeed, it will either be the best or the worst advancement ever, and it will come down to the amount of planning we do now. If we have no clue about where we want to go, it is unlikely that we are going to end up in an ideal situation. However, if we plan accordingly and steer technology in the right direction, we can create an inspiring future that will allow humanity to flourish in a way that we have never seen before.

I believe this to be true because the reason today’s society is better than the Stone Age is technology. Everything I love about civilization is the product of intelligence. Technology is the reason life expectancy is no longer 32 years. If we can take this further and amplify our intelligence with AI, we have the potential to solve humanity’s greatest challenges. These technologies can help us cure diseases that we are currently told are incurable because we have not been smart enough to solve them. Further, technology can lift everybody out of poverty, address climate change, and allow us to go in inspiring directions that we have not even thought of yet. It is clear that there is an enormous upside if we get this right, and that is why I am incredibly motivated to work on it.

High: I am struck by the caveman analogy. We are so far removed from cavemen and cavewomen that a modern human and caveman would not be able to recognize each other in terms of life expectancy, the ability to communicate, and the time we have to reflect and ponder our situation, among other differences.

Tegmark: That is so true, and you said something super interesting there. While we are so far removed, we are largely stuck in the caveman mindset. When we were cavemen, the most powerful technology we had were rocks and sticks, which limited our ancestors’ ability to cause significant damage. While there were always cavemen that wanted to harm as many people as possible, there was only so much damage one could do with a rock and a stick.

Unfortunately, with nuclear weapons, the damage can be devastating, and as technology gets more powerful, it becomes easier to mess up. However, at the same time, we now have more power to use technology for good. Because of both of these factors, the more powerful the technology gets, the more important the steering becomes. Technology is neither good nor evil, so when people ask me if I am for AI or against AI, I ask them if they are for fire or against fire. Fire can obviously be used to keep your house warm in the winter, or it can be used for arson. To keep this under control, we have put a great deal of effort into the steering of fire. We have fire extinguishers and fire departments, and we created ways to punish people who use fire in ways that are not appropriate.

We have to step out of our caveman mindset. The way to create a good future with technology is to continuously win the wisdom race. As technology grows more powerful, the wisdom with which we manage it must keep up. This was true with fire and with the automobile engine, and I believe we were successful in those missions. While we repeatedly messed up, we learned from our mistakes and invented the seat belt, the airbag, traffic lights, and laws against speeding. Ever since we were cavemen, we have been able to stay ahead in the wisdom race by learning from our mistakes. However, as technology gets more powerful, the margin for error is evaporating, and one mistake in the future may be one too many. We obviously do not want to have an “accidental” nuclear war with Russia and just brush it off as a mistake that we can learn from and be more careful about next time. It is far more effective to be proactive and plan ahead, rather than reactive. I believe we need to adopt this mindset before we build technology that can do everything better than us.

High: You mentioned there are some attributes that we still share with our distant ancestors. Even if AGI does not come for decades, the change will be almost the same in magnitude as the change from cavemen to the present day. For example, it potentially has the power to change the way in which we work. You have written persuasively about the possibility of what we do being taken over by AI. In a society where many of us are defined by the work that we do, it is quite unsettling to know that what I love about my day job today will be done better by AI. We may need to redefine ourselves as a result. What are your perspectives on that?

Tegmark: I agree with that, and I would take it a step further and say that the jump from today to AGI is a bigger one than the jump from cavemen to the present day. When we were cavemen, we were the smartest species on the planet, and we still are today. With AGI, we will not be, which is a huge game changer. While we have doubled our life expectancy and seen new technologies emerge, we are still stuck on this tiny planet, and the majority of people still die within a century. However, if we can build AGI, the opportunities will be limitless.

People are not realizing this, and because we are still stuck in this caveman mindset, we continue to think that it will take us thousands of years to find a way to live 200, or even 1,000, years. Moreover, the mindset that we have to invent all the technologies ourselves has led us to believe that it will take thousands of years to move to another solar system. However, this is far from true because, by definition, AGI can do all jobs better than us, including the job of inventing better AI and other technologies. This capability has led many to believe that AGI could be the last invention we ever need to make. We may end up with a future where life on Earth and beyond flourishes for billions of years, not just for the next election cycle. This could all start on Earth if we can solve intelligence and use it to go in amazing directions. If we get this right, the upside will be far more significant than the benefits we reaped going from cavemen to the present day.

Regarding what it means to be a human if all jobs can be done better by machines, that is why the subtitle of my book is, Being Human in the Age of Artificial Intelligence. Jobs do not just give us an income, they give us meaning and a sense of purpose in our lives. Even if we can produce all that we need with machines and figure out how to share the wealth, it does not solve the question of how that purpose and meaning will be replaced. This crucial dilemma absolutely cannot be left to tech nerds such as myself because AI programmers are not world experts on what makes humans happy. We need to broaden this conversation to get everyone on board and discuss what type of future we want to create. This is essential, and unfortunately, I do not believe that we are going about this the right way.

Students often walk into my office asking for career advice, and in response, I always start by asking them about where they want to be in the future. If all the student can say is that they may get cancer, be murdered, or run over by a truck, that is a terrible strategy for career planning. I want these people to come in with fire in their eyes and say, “This is where I want to be.” From there, we can figure out what the challenges are and come up with a strong strategy to avoid them so that they can get to where they want to be. While we should take this same approach as a species, it is not the one we are taking. Every time I go to the movies and see something about the future, it showcases one dystopia after another. This approach makes us paranoid, and it divides us in the same way that fear always has. It is crucial for us to have a conversation around the type of futures we are excited about. I am not talking about getting 10 percent richer or curing a minor disease, but I want people to think big. If machines can do everything with technology, what kind of future would fire us up? What type of society do we want to live in? What would your typical day look like? If we can articulate a shared, positive vision that catches on around the world, I believe we have a real chance of getting there.

High: What happens if AGI gets to the point where the work that you are doing at MIT and at the Future of Life Institute is no longer meaningful?

Tegmark: That is a hard-hitting question. I get an incredible amount of joy from figuring stuff out, and if I could just press a button and the computer would write my papers for me, would it be as much fun? This is not an easy topic.

In my book, I discuss twelve different futures that people can choose between. Even if we cannot agree on a future that we are convinced is perfect, that does not mean we should do nothing. At a minimum, we should do the thinking necessary to steer our future in the right direction. There are some obvious decisions that need to be made now, such as how income inequality will be handled. While we may be able to dramatically grow overall world GDP, we must be able to share the economic pie so that everybody is better off. As more and more jobs are replaced by machines, income that has typically been paid in salaries will go to whoever owns the machines. This is why Facebook, a high-tech company, is twelve times more valuable than Ford despite having eight times fewer employees. Unfortunately, we have not begun to make these decisions, and if we are unable to do so in a way that leaves everyone better off, then shame on us. As companies become more high-tech, we must adjust the system to avoid leaving more people behind and ending up with far more income inequality. If this problem does not get solved, we will end up with more and more angry people, which will make democracy more and more unworkable. However, on the bright side, all that wealth makes this problem relatively easy to fix: all that needs to be done is to bring in enough tax revenue so that everyone can be better off.

The second aspect, which I believe is a no-brainer, is that we must avoid a damaging arms race in lethal autonomous weapons. Fortunately, nearly all the research in AI is going towards helping people in various ways, and most AI researchers want to keep it that way. Around the time I was born, we were on the cusp of a horrible arms race in bioweapons. When this happened, the biologists pushed hard to get an international ban on bioweapons, and as a result, most people cannot remember the last time they read about a bioweapon terrorist attack in the newspaper. If you ask a hundred random people on the street about their opinions on biology, they are all going to smile and associate it with new cures rather than with bioweapons. It is critical that we handle AI weapons in a similar way.

We need to put a greater focus on the steering aspect of AI. Nearly all of the funding going into AI has been around making it more powerful, and little is going towards AI safety research. Even increasing this a little bit will make an impactful difference. As we put AI in charge of more infrastructure-level decisions, we must transform buggy and hackable computers into robust AI systems that can be trusted. If we fail to do so, all these fascinating new technologies can malfunction, harm us, or be hacked and used against us.

As AI becomes more and more capable, we have to work on the value alignment problem. The real threat from AGI is not that it will turn evil in the way it does in silly Hollywood movies. Instead, the largest threat is that it turns extremely competent: its goals may not be aligned with ours, either because it is controlled by someone who does not share our goals, or because the machine itself has power over us. We must solve some tough technical challenges in order to neutralize this threat. We have to figure out how to make machines understand our goals, adopt our goals, and then keep those goals as they get smarter. Although work has begun in this area, these problems are hard, and it may take roughly 30 years to solve them. It is absolutely critical that we focus on this problem now so that we have the answers by the time we need them. We have to stop treating these issues as an afterthought.

High: What role do private sector, academic, and governmental institutions play? Each is exerting influence in their own ways, and they are progressing at different rates. How do you see that balance?

Tegmark: Academia is great for developing solutions to AI safety problems while making them publicly available so that everyone in the world can use them. You want safety solutions to be free because if someone owns the IP on them, it will cause a worse outcome.

I believe private companies have mostly played a constructive role in helping encourage the safety work around AI. For example, most of the big players in AI, such as Google, IBM, Microsoft, Facebook, and many international companies, have joined together in an AI partnership to encourage safety development.

On the flip side, governments need to step it up and provide more funding for the safety research. No government should fund nuclear reactor research without funding reactor safety research. Similarly, no country should fund computer science research without putting a decent slice towards the steering part.

That is my wish list of what we should focus on today to maximize the chances of this going well. In parallel, everyone else needs to ask themselves what future they want to see, and to remember, the next time they vote and whenever they exert influence, that we want to create a future for everybody.

High: How do you keep up with the progress, or lack thereof, of these advances?

Tegmark: Both through the research taking place at MIT and through the nerdy AI conferences that I go to. Additionally, the non-profit work that I have been doing has been fascinating. I have spent a great deal of time speaking with top researchers and CEOs who are making incredible progress on this. I am encouraged, and I find that the leaders are mostly an idealistic bunch. I do not believe that they are doing this exclusively for the money. Instead, they want this technology to represent an opportunity to create a better future. We need to make sure that the society at large shares this goal of channeling AI for good, instead of using it to hack elections and create new ways to murder people anonymously. That would be an incredibly sad result of all these good intentions.

Peter High moderates the Forum on World Class IT podcast series and speaks at conferences around the world. Follow him on Twitter @PeterAHigh.

 


The US and China are in a Quantum Arms Race that will Transform Future Warfare



Radar that can spot stealth aircraft and other quantum innovations could give their militaries a strategic edge

In the 1970s, at the height of the Cold War, American military planners began to worry about the threat to US warplanes posed by new, radar-guided missile defenses in the USSR and other nations. In response, engineers at places like US defense giant Lockheed Martin’s famous “Skunk Works” stepped up work on stealth technology that could shield aircraft from the prying eyes of enemy radar.

The innovations that resulted include unusual shapes that deflect radar waves—like the US B-2 bomber’s “flying wing” design (above)—as well as carbon-based materials and novel paints. Stealth technology isn’t yet a Harry Potter–like invisibility cloak: even today’s most advanced warplanes still reflect some radar waves. But these signals are so small and faint they get lost in background noise, allowing the aircraft to pass unnoticed.

China and Russia have since gotten stealth aircraft of their own, but America’s are still better. They have given the US the advantage in launching surprise attacks in campaigns like the war in Iraq that began in 2003.

This advantage is now under threat. In November 2018, China Electronics Technology Group Corporation (CETC), China’s biggest defense electronics company, unveiled a prototype radar that it claims can detect stealth aircraft in flight. The radar uses some of the exotic phenomena of quantum physics to help reveal planes’ locations.

It’s just one of several quantum-inspired technologies that could change the face of warfare. As well as unstealthing aircraft, they could bolster the security of battlefield communications and affect the ability of submarines to navigate the oceans undetected. The pursuit of these technologies is triggering a new arms race between the US and China, which sees the emerging quantum era as a once-in-a-lifetime opportunity to gain the edge over its rival in military tech.

Stealth spotter

How quickly quantum advances will influence military power will depend on the work of researchers like Jonathan Baugh. A professor at the University of Waterloo in Canada, Baugh is working on a device that’s part of a bigger project to develop quantum radar. Its intended users: stations in the Arctic run by the North American Aerospace Defense Command, or NORAD, a joint US-Canadian organization.

Baugh’s machine generates pairs of photons that are “entangled”—a phenomenon that means the particles of light share a single quantum state. A change in one photon immediately influences the state of the other, even if they are separated by vast distances.

Quantum radar operates by taking one photon from every pair generated and firing it out in a microwave beam. The other photon from each pair is held back inside the radar system.

Equipment from a prototype quantum radar system made by China Electronics Technology Group Corporation. (Imaginechina via AP Images)

Only a few of the photons sent out will be reflected back if they hit a stealth aircraft. A conventional radar wouldn’t be able to distinguish these returning photons from the mass of other incoming ones created by natural phenomena—or by radar-jamming devices. But a quantum radar can check for evidence that incoming photons are entangled with the ones held back. Any that are must have originated at the radar station. This enables it to detect even the faintest of return signals in a mass of background noise.
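The filtering step described above can be illustrated with a deliberately simplified toy model, a classical Python sketch in which matching IDs stand in for entanglement correlations (real quantum illumination relies on joint measurements of signal and idler photons, not lookups, and its noise rejection is statistical rather than exact; all numbers here are invented for illustration):

```python
import random

def simulate_returns(n_sent=10_000, reflectivity=0.01, noise=5_000, seed=0):
    """Toy model of quantum-radar tagging (illustrative, not real physics):
    each transmitted photon is paired with a retained 'idler', and a return
    counts as a detection only if it correlates with a retained idler, so
    background and jamming photons are rejected."""
    rng = random.Random(seed)
    idlers = set(range(n_sent))  # IDs of the retained idler photons
    # A tiny fraction of transmitted photons reflect off the target...
    echoes = [i for i in idlers if rng.random() < reflectivity]
    # ...while background photons correlate with no retained idler.
    background = [-(k + 1) for k in range(noise)]
    received = echoes + background
    # A conventional radar sees one undifferentiated pile of photons,
    # but the quantum radar keeps only the idler-correlated returns.
    conventional = len(received)
    quantum = sum(1 for p in received if p in idlers)
    return conventional, quantum

total, matched = simulate_returns()
print(total, matched)  # the faint echoes are recovered from the noise
```

In the sketch, roughly a hundred genuine echoes are buried among 5,000 background photons in the conventional count, while the correlation check isolates them exactly; in a real system the check is imperfect, which is part of the engineering challenge Baugh describes.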

Baugh cautions that there are still big engineering challenges. These include developing highly reliable streams of entangled photons and building extremely sensitive detectors. It’s hard to know if CETC, which already claimed in 2016 that its radar could detect objects up to 100 kilometers (62 miles) away, has solved these challenges; it’s keeping the technical details of its prototype a secret.

Seth Lloyd, an MIT professor who developed the theory underpinning quantum radar, says that in the absence of hard evidence, he’s skeptical of the Chinese company’s claims. But, he adds, the potential of quantum radar isn’t in doubt. When a fully functioning device is finally deployed, it will mark the beginning of the end of the stealth era.

China’s ambitions

CETC’s work is part of a long-term effort by China to turn itself into a world leader in quantum technology. The country is providing generous funding for new quantum research centers at universities and building a national research center for quantum science that’s slated to open in 2020. China has already leaped ahead of the US in registering patents in quantum communications and cryptography.

A study of China’s quantum strategy published in September 2018 by the Center for a New American Security (CNAS), a US think tank, noted that the Chinese People’s Liberation Army (PLA) is recruiting quantum specialists, and that big defense companies like China Shipbuilding Industry Corporation (CSIC) are setting up joint quantum labs at universities. Working out exactly which projects have a military element to them is hard, though. “There’s a degree of opacity and ambiguity here, and some of that may be deliberate,” says Elsa Kania, a coauthor of the CNAS study.

China’s efforts are ramping up just as fears are growing that the US military is losing its competitive edge. A commission tasked by Congress to review the Trump administration’s defense strategy issued a report in November 2018 warning that the US margin of superiority “is profoundly diminished in key areas” and called for more investment in new battlefield technologies.

One of those technologies is likely to be quantum communication networks. Chinese researchers have already built a satellite that can send quantum-encrypted messages between distant locations, as well as a terrestrial network that stretches between Beijing and Shanghai. Both projects were developed by scientific researchers, but the know-how and infrastructure could easily be adapted for military use.

The networks rely on an approach known as quantum key distribution (QKD). Messages are encoded in the form of classical bits, and the cryptographic keys needed to decode them are sent as quantum bits, or qubits. These qubits are typically photons that can travel easily across fiber-optic networks or through the atmosphere. If an enemy tries to intercept and read the qubits, this immediately destroys their delicate quantum state, wiping out the information they carry and leaving a telltale sign of an intrusion.
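The intrusion-detection property can be demonstrated with a toy classical simulation in the style of the BB84 protocol (a sketch only: real QKD encodes qubits in photon states, and the roughly 25 percent error figure below is the idealized signature of a naive intercept-and-resend attack):

```python
import random

def bb84_error_rate(n_bits=2000, eavesdrop=False, seed=1):
    """Toy BB84-style sketch: bits are sent in random bases; an
    intercept-and-resend eavesdropper disturbs them and shows up
    as errors in the sifted key."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]  # 0 = +, 1 = x
    qubits = list(zip(alice_bits, alice_bases))

    if eavesdrop:  # Eve measures in a random basis, then resends.
        resent = []
        for bit, basis in qubits:
            eve_basis = rng.randint(0, 1)
            eve_bit = bit if eve_basis == basis else rng.randint(0, 1)
            resent.append((eve_bit, eve_basis))
        qubits = resent

    # Bob measures each incoming qubit in his own random basis; a wrong
    # basis yields a random result, mimicking quantum measurement.
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if basis == b else rng.randint(0, 1)
                for (bit, basis), b in zip(qubits, bob_bases)]

    # Sift: keep only positions where Alice's and Bob's bases agree,
    # then compare those bits to estimate the error rate.
    kept = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)

print(bb84_error_rate(eavesdrop=False))  # 0.0: the channel is clean
print(bb84_error_rate(eavesdrop=True))   # ~0.25: Eve reveals herself
```

The point of the sketch is the asymmetry: without an eavesdropper the sifted key matches perfectly, while any measurement by Eve necessarily injects errors, which is the "telltale sign of an intrusion" that makes QKD valuable.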

QKD technology isn’t totally secure yet. Long ground networks require way stations, similar to the repeaters that boost signals along an ordinary data cable. At these stations, the keys are decoded into classical form before being re-encoded in a quantum form and sent to the next station. While the keys are in classical form, an enemy could hack in and copy them undetected.

To overcome this issue, a team of researchers at the US Army Research Laboratory in Adelphi, Maryland, is working on an approach called quantum teleportation. This involves using entanglement to transfer data between a qubit held by a sender and another held by a receiver, using what amounts to a kind of virtual, one-time-only quantum data cable. (There’s a more detailed description here.)

Michael Brodsky, one of the researchers, says he and his colleagues have been working on a number of technical challenges, including finding ways to ensure that the qubits’ delicate quantum state isn’t disrupted during transmission through fiber-optic networks. The technology is still confined to a lab, but the team says it’s now robust enough to be tested outside. “The racks can be put on trucks, and the trucks can be moved to the field,” explains Brodsky.

It may not be long before China is testing its own quantum teleportation system. Researchers are already building the fiber-optic network for one that will stretch from the city of Zhuhai, near Macau, to some islands in Hong Kong.

Quantum compass

Researchers are also exploring using quantum approaches to deliver more accurate and foolproof navigation tools to the military. US aircraft and naval vessels already rely on precise atomic clocks to help keep track of where they are. But they also count on signals from the Global Positioning System (GPS), a network of satellites orbiting Earth. This poses a risk because an enemy could falsify, or “spoof,” GPS signals—or jam them altogether.

Lockheed Martin thinks American sailors could use a quantum compass based on microscopic synthetic diamonds with atomic flaws known as nitrogen-vacancy centers, or NV centers. These quantum defects in the diamond lattice can be harnessed to form an extremely accurate magnetometer. Shining a laser on diamonds with NV centers makes them emit light at an intensity that varies according to the surrounding magnetic field.
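The field readout reduces to a textbook relation. This is the standard NV-center ODMR (optically detected magnetic resonance) formula, not a detail of Lockheed's device: the two fluorescence dips sit at f± = D ± γB for a field B along the NV axis, so the splitting between the dips gives the field directly.

```python
D = 2.870e9          # NV zero-field splitting, Hz
GAMMA_E = 28.024e9   # electron gyromagnetic ratio, Hz per tesla

def field_from_odmr(f_minus, f_plus):
    """Infer the axial magnetic field from the splitting of the two
    ODMR fluorescence dips: f_plus - f_minus = 2 * gamma_e * B."""
    return (f_plus - f_minus) / (2 * GAMMA_E)

# Example: dips at 2.8686 GHz and 2.8714 GHz imply roughly 50 microtesla,
# about the strength of Earth's magnetic field.
B = field_from_odmr(2.8686e9, 2.8714e9)
print(f"{B * 1e6:.1f} microtesla")
```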

Ned Allen, Lockheed’s chief scientist, says the magnetometer is great at detecting magnetic anomalies — distinctive variations in Earth’s magnetic field caused by magnetic deposits or rock formations. There are already detailed maps of these anomalies made by satellite and terrestrial surveys. By comparing anomalies detected using the magnetometer against these maps, navigators can determine where they are. Because the magnetometer also indicates the orientation of magnetic fields, ships and submarines can use it to work out which direction they are heading.
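The map-matching idea can be shown with a toy one-dimensional example using made-up anomaly values (real systems match 2-D maps with far more sophisticated statistics): slide the measured profile along the surveyed map and pick the offset with the smallest mismatch.

```python
import numpy as np

def locate(measured, anomaly_map):
    """Slide a measured 1-D anomaly profile along a surveyed map and
    return the offset with the smallest squared mismatch."""
    n, m = len(anomaly_map), len(measured)
    errors = [np.sum((anomaly_map[i:i + m] - measured) ** 2)
              for i in range(n - m + 1)]
    return int(np.argmin(errors))

# Hypothetical surveyed anomaly map (nanotesla deviations along a track).
survey = np.array([12., -3., 7., 22., -15., 4., 9., -8., 30., 1.])
# The vessel records a slightly noisy snippet starting at position 4.
reading = survey[4:7] + np.array([0.4, -0.2, 0.3])
print(locate(reading, survey))  # -> 4
```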

China’s military is clearly worried about threats to its own version of GPS, known as BeiDou. Research into quantum navigation and sensing technology is under way at various institutes across the country, according to the CNAS report.

As well as being used for navigation, magnetometers can also detect and track the movement of large metallic objects, like submarines, by fluctuations they cause in local magnetic fields. Because they are very sensitive, the magnetometers are easily disrupted by background noise, so for now they are used for detection only at very short distances. But last year, the Chinese Academy of Sciences let slip that some Chinese researchers had found a way to compensate for this using quantum technology. That might mean the devices could be used in the future to spot submarines at much longer ranges.

A tight race

It’s still early days for militaries’ use of quantum technologies. There’s no guarantee they will work well at scale, or in conflict situations where absolute reliability is essential. But if they do succeed, quantum encryption and quantum radar could make a particularly big impact. Code-breaking and radar helped change the course of World War II. Quantum communications could make stealing secret messages much harder, or impossible. Quantum radar would render stealth planes as visible as ordinary ones. Both things would be game-changing.

It’s also too early to tell whether it will be China or the US that comes out on top in the quantum arms race—or whether it will lead to a Cold War–style stalemate. But the money China is pouring into quantum research is a sign of how determined it is to take the lead.

China has also managed to cultivate close working relationships between government research institutes, universities, and companies like CSIC and CETC. The US, by comparison, has only just passed legislation to create a national plan for coordinating public and private efforts. The delay in adopting such an approach has led to a lot of siloed projects and could slow the development of useful military applications. “We’re trying to get the research community to take more of a systems approach,” says Brodsky, the US army quantum expert.


U.S. Leads World in Quantum Computing Patent Filings with IBM Leading the Charge

Still, the US military does have some distinct advantages over the PLA. The Department of Defense has been investing in quantum research for a very long time, as have US spy agencies. The knowledge generated helps explain why US companies lead in areas like the development of powerful quantum computers, which harness entangled qubits to generate immense amounts of processing power.

The American military can also tap into work being done by its allies and by a vibrant academic research community at home. Baugh’s radar research, for instance, is funded by the Canadian government, and the US is planning a joint research initiative with its closest military partners—Canada, the UK, Australia, and New Zealand—in areas like quantum navigation.

All this has given the US a head start in the quantum arms race. But China’s impressive effort to turbocharge quantum research means the gap between them is closing fast.

MIT: Physicists record ‘lifetime’ of graphene qubits – Foundation for Advancing Quantum Computing


 

Researchers from MIT and elsewhere have recorded, for the first time, the “temporal coherence” of a graphene qubit, meaning how long it can maintain a special state that allows it to represent two logical states simultaneously.

The demonstration, which used a new kind of graphene-based qubit, represents a critical step forward for practical quantum computing, the researchers say.

Superconducting quantum bits (qubits, for short) are artificial atoms that use various methods to produce bits of quantum information, the fundamental component of quantum computers. Similar to traditional binary circuits in computers, qubits can maintain one of two states corresponding to the classic binary bits, a 0 or 1.

But these qubits can also be a superposition of both states simultaneously, which could allow quantum computers to solve complex problems that are practically impossible for traditional computers.

The amount of time that these qubits stay in this superposition state is referred to as their “coherence time.” The longer the coherence time, the greater the ability for the qubit to compute complex problems.
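Assuming a simple exponential decay envelope, which is a common idealization rather than anything stated in the paper, the surviving coherence fraction after a given time can be computed from the coherence time T2:

```python
import math

def coherence_remaining(t_ns, t2_ns):
    """Fraction of qubit coherence left after time t, assuming a simple
    exponential decay envelope exp(-t / T2)."""
    return math.exp(-t_ns / t2_ns)

# With the reported T2 of ~55 ns, coherence falls to 1/e (~37%) after
# one coherence time and to about 14% after two.
print(round(coherence_remaining(55, 55), 3))   # -> 0.368
print(round(coherence_remaining(110, 55), 3))  # -> 0.135
```

This is why longer coherence times matter: every gate operation must fit, many times over, inside the window before the stored quantum information fades.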

Recently, researchers have been incorporating graphene-based materials into superconducting quantum computing devices, which promise faster, more efficient computing, among other perks.

Until now, however, there’s been no recorded coherence for these advanced qubits, so there’s no knowing if they’re feasible for practical quantum computing. In a paper published today in Nature Nanotechnology, the researchers demonstrate, for the first time, a coherent qubit made from graphene and exotic materials.

These materials enable the qubit to change states through voltage, much like transistors in today’s traditional computer chips — and unlike most other types of superconducting qubits. Moreover, the researchers put a number to that coherence, clocking it at 55 nanoseconds, before the qubit returns to its ground state.

The work combined expertise from co-authors William D. Oliver, a physics professor of the practice and Lincoln Laboratory Fellow whose work focuses on quantum computing systems, and Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT who researches innovations in graphene.

“Our motivation is to use the unique properties of graphene to improve the performance of superconducting qubits,” says first author Joel I-Jan Wang, a postdoc in Oliver’s group in the Research Laboratory of Electronics (RLE) at MIT.

“In this work, we show for the first time that a superconducting qubit made from graphene is temporally quantum coherent, a key requisite for building more sophisticated quantum circuits. Ours is the first device to show a measurable coherence time — a primary metric of a qubit — that’s long enough for humans to control.”

There are 14 other co-authors, including Daniel Rodan-Legrain, a graduate student in Jarillo-Herrero’s group who contributed equally to the work with Wang; MIT researchers from RLE, the Department of Physics, the Department of Electrical Engineering and Computer Science, and Lincoln Laboratory; and researchers from the Laboratory of Irradiated Solids at the École Polytechnique and the Advanced Materials Laboratory of the National Institute for Materials Science.

A pristine graphene sandwich

Superconducting qubits rely on a structure known as a “Josephson junction,” where an insulator (usually an oxide) is sandwiched between two superconducting materials (usually aluminum).

In traditional tunable qubit designs, a current loop creates a small magnetic field that causes electrons to hop back and forth between the superconducting materials, causing the qubit to switch states.

But this flowing current consumes a lot of energy and causes other issues. Recently, a few research groups have replaced the insulator with graphene, an atom-thick layer of carbon that’s inexpensive to mass produce and has unique properties that might enable faster, more efficient computation.

To fabricate their qubit, the researchers turned to a class of materials called van der Waals materials — atomically thin materials that can be stacked like Legos on top of one another, with little to no resistance or damage.

These materials can be stacked in specific ways to create various electronic systems. Despite their near-flawless surface quality, only a few research groups have ever applied van der Waals materials to quantum circuits, and none have previously been shown to exhibit temporal coherence.

For their Josephson junction, the researchers sandwiched a sheet of graphene in between the two layers of a van der Waals insulator called hexagonal boron nitride (hBN). Importantly, graphene takes on the superconductivity of the superconducting materials it touches.

The selected van der Waals materials can be made to usher electrons around using voltage, instead of the traditional current-based magnetic field. Therefore, so can the graphene — and so can the entire qubit.

 

When voltage gets applied to the qubit, electrons bounce back and forth between two superconducting leads connected by graphene, changing the qubit from ground (0) to excited or superposition state (1). The bottom hBN layer serves as a substrate to host the graphene.

The top hBN layer encapsulates the graphene, protecting it from any contamination. Because the materials are so pristine, the traveling electrons never interact with defects. This represents the ideal “ballistic transport” for qubits, where a majority of electrons move from one superconducting lead to another without scattering with impurities, making a quick, precise change of states.

How voltage helps

The work can help tackle the qubit “scaling problem,” Wang says. Currently, only about 1,000 qubits can fit on a single chip. Having qubits controlled by voltage will be especially important as millions of qubits start being crammed on a single chip.

“Without voltage control, you’ll also need thousands or millions of current loops too, and that takes up a lot of space and leads to energy dissipation,” he says.

Additionally, voltage control means greater efficiency and a more localized, precise targeting of individual qubits on a chip, without “cross talk.” That happens when a little bit of the magnetic field created by the current interferes with a qubit it’s not targeting, causing computation problems.

For now, the researchers’ qubit has a brief lifetime. For reference, conventional superconducting qubits that hold promise for practical application have documented coherence times of a few tens of microseconds, a few hundred times longer than that of the researchers’ qubit.

But the researchers are already addressing several issues that cause this short lifetime, most of which require structural modifications. They’re also using their new coherence-probing method to further investigate how electrons move ballistically around the qubits, with aims of extending the coherence of qubits in general.

Coherent control of a hybrid superconducting circuit made with graphene-based van der Waals heterostructures
Joel I-Jan Wang, Daniel Rodan-Legrain, Landry Bretheau, Daniel L. Campbell, Bharath Kannan, David Kim, Morten Kjaergaard, Philip Krantz, Gabriel O. Samach, Fei Yan, Jonilyn L. Yoder, Kenji Watanabe, Takashi Taniguchi, Terry P. Orlando, Simon Gustavsson, Pablo Jarillo-Herrero & William D. Oliver
Nature Nanotechnology (2018)
DOI: 10.1038/s41565-018-0329-2

Contact information:

William D. Oliver
MIT Physics Professor of the Practice
oliver@ll.mit.edu URL: http://www.rle.mit.edu/

Pablo Jarillo-Herrero
MIT Physics Professor
pjarillo@mit.edu URL: http://jarilloherrero.mit.edu/

Massachusetts Institute of Technology (MIT)

 

A New-Nano Approach to Liquid-Repelling Surfaces



This photo shows water droplets placed on the nanostructured surface developed by MIT researchers. The colors are caused by diffraction of visible light from the tiny structures on the surface, ridges with a specially designed shape. Images: Kyle Wilke

Novel surface design overcomes problem of condensation that bedeviled previous systems.

 

“Omniphobic” might sound like a way to describe someone who is afraid of everything, but it actually refers to a special type of surface that repels virtually any liquid. Such surfaces could potentially be used in everything from ship hulls that reduce drag and increase efficiency, to coverings that resist stains and protect against damaging chemicals. But the omniphobic surfaces developed so far suffer from a major problem: Condensation can quickly disable their liquid-shedding properties.

Now, researchers at MIT have found a way to overcome this effect, producing a surface design that drastically reduces the effects of condensation, although at a slight sacrifice in performance. The new findings are described in the journal ACS Nano, in a paper by graduate student Kyle Wilke, professor of mechanical engineering and department head Evelyn Wang, and two others.

Creating a surface that can shed virtually all liquids requires a precise kind of texture that creates an array of microscopic air pockets separated by pillars or ridges. These air pockets keep most of the liquid away from direct contact with the surface, preventing it from “wetting,” or spreading out to cover a whole surface. Instead, the liquid beads up into droplets.
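The effect of those air pockets is captured by the classical Cassie–Baxter wetting model, a standard textbook relation rather than anything taken from this paper: the apparent contact angle depends on the fraction of the droplet's base that touches solid instead of trapped air.

```python
import math

def cassie_baxter_angle(theta_flat_deg, solid_fraction):
    """Apparent contact angle on a composite solid/air surface
    (Cassie-Baxter model): cos(theta*) = f * (cos(theta) + 1) - 1,
    where f is the fraction of the base resting on solid."""
    cos_star = solid_fraction * (math.cos(math.radians(theta_flat_deg)) + 1) - 1
    return math.degrees(math.acos(cos_star))

# Illustrative numbers (not from the paper): a liquid with a modest
# 60-degree contact angle on the flat material beads up strongly
# once 90% of the droplet's base sits on trapped air.
print(round(cassie_baxter_angle(60, 0.10), 1))  # -> 148.2
```

The model also makes the article's failure mode concrete: if condensation or damage floods the air pockets, the solid (and liquid) fraction jumps and the apparent angle collapses back toward the flat-surface value.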

“Many liquids are perfectly wetting, meaning the liquid completely spreads out,” says Wilke. These include many of the refrigerants used in air conditioners and refrigerators, hydrocarbons such as those used as fuels and lubricants, and many alcohols. “Those are very difficult to repel. The only way to do it is through very specific surface geometry, which is not that easy to make,” he adds.

Various groups are working on fabrication methods, he says, but with surface features measured in tens of microns (millionths of a meter) or less, “it can make it quite hard to fabricate, and can make the surfaces quite fragile.”

If such surfaces are damaged — for example, if one of the tiny pillars is bent or broken — it can defeat the whole process. “One local defect can destroy the entire surface’s ability to repel liquids,” he says. And condensation, such as dew forming because of a temperature difference between the air and the surface, acts in the same way, destroying the omniphobicity.

“We considered: How can we lose some of the repellency but make the surface robust” against both damage and dew, Wilke says. “We wanted a structure that one defect wouldn’t destroy.” After much calculation and experimentation, they found a geometry that meets that goal thanks, in part, to microscopic air pockets that are disconnected rather than connected on the surfaces, making spreading between pockets much less likely.

The features have to be very small, he explains, because when droplets form they are initially at the scale of nanometers, or billionths of a meter, and the spacing between these growing droplets can be less than a micrometer.

The key architecture the team developed is based on ridges whose profiles resemble a letter T, or in some cases a letter T with serifs (the tiny hooks at the ends of letter strokes in some typefaces). Both the shape itself and the spacing of these ridges are important to achieving the surface’s resistance to damage and condensation. The shapes are designed to use the surface tension of the liquid to prevent it from penetrating the tiny surface pockets of air, and the way the ridges connect prevents any local penetration of the surface cavities from spreading to others nearby — as the team has confirmed in laboratory tests.
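Why the features must be so small can be seen from a rough Laplace-pressure estimate. This is an order-of-magnitude simplification with illustrative numbers, not the paper's analysis; the real breakthrough pressure depends on the detailed ridge profile.

```python
def breakthrough_pressure(gamma_n_per_m, gap_m):
    """Rough estimate of the pressure a meniscus pinned across a gap of
    width `gap_m` can resist before flooding the cavity: P ~ 2*gamma/gap.
    (A simplification; the actual limit depends on ridge geometry.)"""
    return 2 * gamma_n_per_m / gap_m

# Water (surface tension ~0.072 N/m) over a 1-micrometer gap resists
# roughly 144 kPa, which is why sub-micrometer features hold up while
# larger ones flood easily.
print(f"{breakthrough_pressure(0.072, 1e-6) / 1e3:.0f} kPa")  # -> 144 kPa
```

Halving the gap doubles the pressure the trapped air pocket can withstand, so shrinking the ridge spacing directly buys robustness against penetration.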

The ridges are made in a multistep process using standard microchip manufacturing systems, first etching away the spaces between ridges, then coating the edges of the pillars, then etching away those coatings to create the indentation in the ridges’ sides, leaving a mushroom-like overhang at the top.

Because of the limitations of the current technology, Wilke says, omniphobic surfaces are rarely used today, but improving their durability and resistance to condensation could enable many new uses. The system will need further refinement, though, beyond this initial proof of the concept. Potentially, it could be used to make self-cleaning surfaces, and to improve resistance to ice buildup, to improve the efficiency of heat transfer in industrial processes including power generation, and to reduce drag on surfaces such as the hulls of ships.

Such surfaces could also provide protection against corrosion, by reducing contact between the material surface and any corrosive liquids that it may be exposed to, the researchers say. And because the new method offers a way of precisely designing the surface architecture, Wilke says it can be used for “tailoring how a surface interacts with liquids, such as for tailoring the heat transfer for thermal management in high-performance devices.”

Chang-Jin Kim, a professor of mechanical and aerospace engineering at the University of California at Los Angeles who was not involved in this work, says “One of the most significant limitations of omniphobic surfaces is that, while such a surface has a superior liquid repellency, the entire surface is wetted once the liquid gets into the voids in the textured surface at some locations. This new approach addresses this very limitation.”

Kim adds that “I like that their key idea was based on fundamental science, while their goal was to solve a key real-life problem. The problem they addressed is an important but very difficult one.” And, he says, “This approach can potentially make some of the omniphobic surfaces useful and practical for some important applications.”

The research team also included former graduate students Daniel Preston and Zhengmao Lu. The work was supported by the cooperative agreement between MIT and the Masdar Institute of Science and Technology in Abu Dhabi (now Khalifa University), the Abu Dhabi National Oil Company, the Office of Naval Research, the Air Force Office of Scientific Research, and the National Science Foundation.

From: David L. Chandler | MIT News Office