MIT – A New Language for Quantum Computing


While the nascent field of quantum computing can feel flashy and futuristic, quantum computers have the potential for computational breakthroughs in classically intractable tasks, like cryptographic and communication protocols, search, and computational physics and chemistry. Credit: Graham Carlow/IBM

Twist is an MIT-developed programming language that can describe and verify which pieces of data are entangled to prevent bugs in a quantum program.

Time crystals. Microwaves. Diamonds. What do these three disparate things have in common? 

Quantum computing. Unlike traditional computers that use bits, quantum computers use qubits to encode information as zeros or ones, or both at the same time. Coupled with a cocktail of forces from quantum physics, these refrigerator-sized machines can process a whole lot of information — but they’re far from flawless. Just like our regular computers, we need to have the right programming languages to properly compute on quantum computers. 

Programming quantum computers requires awareness of something called “entanglement,” a computational multiplier for qubits of sorts, which translates to a lot of power. When two qubits are entangled, actions on one qubit can change the value of the other, even when they are physically separated, giving rise to Einstein’s characterization of “spooky action at a distance.” But that potency is also a source of weakness. When programming, discarding one qubit without being mindful of its entanglement with another qubit can destroy the data stored in the other, jeopardizing the correctness of the program. 

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aimed to do some unraveling by creating their own programming language for quantum computing called Twist. Twist can describe and verify which pieces of data are entangled in a quantum program, through a language a classical programmer can understand. The language uses a concept called purity, which enforces the absence of entanglement and results in more intuitive programs, with ideally fewer bugs. For example, a programmer can use Twist to say that the temporary data generated as garbage by a program is not entangled with the program’s answer, making it safe to throw away.
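
Twist’s actual syntax is not shown here, but the hazard its purity checks guard against can be illustrated with a small, self-contained simulation (a minimal sketch in plain Python with NumPy, not Twist itself): tracing out, i.e. discarding, a temporary qubit that is still entangled with the answer qubit scrambles the answer, while discarding an unentangled one is harmless.

```python
# Toy illustration (NumPy, not Twist): why discarding an entangled qubit is unsafe.
import numpy as np

def density(state):
    """Density matrix |psi><psi| of a pure state vector."""
    state = state / np.linalg.norm(state)
    return np.outer(state, state.conj())

def discard_second_qubit(rho):
    """Partial trace over the second qubit of a two-qubit density matrix."""
    rho = rho.reshape(2, 2, 2, 2)           # indices: (q0, q1, q0', q1')
    return np.trace(rho, axis1=1, axis2=3)  # trace out q1

plus = np.array([1, 1]) / np.sqrt(2)        # |+> = (|0> + |1>)/sqrt(2)
zero = np.array([1, 0])

# Case 1: answer qubit |+>, temporary qubit |0>, NOT entangled.
product = np.kron(plus, zero)
print(discard_second_qubit(density(product)))
# -> the |+><+| matrix [[0.5, 0.5], [0.5, 0.5]]: the answer survives intact.

# Case 2: answer and temporary qubit entangled in a Bell state (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(discard_second_qubit(density(bell)))
# -> the maximally mixed state [[0.5, 0], [0, 0.5]]: the superposition
#    (the off-diagonal terms) is destroyed, which is the kind of bug that
#    purity checks are designed to catch before it happens.
```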

While the nascent field can feel a little flashy and futuristic, with images of mammoth wiry gold machines coming to mind, quantum computers have potential for computational breakthroughs in classically intractable tasks, like cryptographic and communication protocols, search, and computational physics and chemistry. One of the key challenges in computational sciences is dealing with the complexity of the problem and the amount of computation needed. Whereas a classical digital computer would need an exponentially large number of bits to be able to process such a simulation, a quantum computer could do it, potentially, using a very small number of qubits — if the right programs are there. 

“Our language Twist allows a developer to write safer quantum programs by explicitly stating when a qubit must not be entangled with another,” says Charles Yuan, an MIT PhD student in electrical engineering and computer science and the lead author on a new paper about Twist. “Because understanding quantum programs requires understanding entanglement, we hope that Twist paves the way to languages that make the unique challenges of quantum computing more accessible to programmers.” 

Yuan wrote the paper alongside Chris McNally, a PhD student in electrical engineering and computer science who is affiliated with the MIT Research Laboratory of Electronics, as well as MIT Assistant Professor Michael Carbin. They presented the research at last week’s 2022 Symposium on Principles of Programming Languages (POPL) in Philadelphia.

Untangling quantum entanglement 

Imagine a wooden box that has a thousand cables protruding out from one side. You can pull any cable all the way out of the box, or push it all the way in.

After you do this for a while, the cables form a pattern of bits — zeros and ones — depending on whether they’re in or out. This box represents the memory of a classical computer. A program for this computer is a sequence of instructions for when and how to pull on the cables.

Now imagine a second, identical-looking box. This time, you tug on a cable, and see that as it emerges, a couple of other cables are pulled back inside. Clearly, inside the box, these cables are somehow entangled with each other. 

The second box is an analogy for a quantum computer, and understanding the meaning of a quantum program requires understanding the entanglement present in its data. But detecting entanglement is not straightforward. You can’t see into the wooden box, so the best you can do is try pulling on cables and carefully reason about which are entangled. In the same way, quantum programmers today have to reason about entanglement by hand. This is where the design of Twist helps massage some of those interlaced pieces. 

The scientists designed Twist to be expressive enough to write out programs for well-known quantum algorithms and identify bugs in their implementations. To evaluate Twist’s design, they modified the programs to introduce some kind of bug that would be relatively subtle for a human programmer to detect, and showed that Twist could automatically identify the bugs and reject the programs.

They also measured how well the programs performed in practice: their runtime overhead was less than 4 percent compared with existing quantum programming techniques.

For those wary of quantum’s “seedy” reputation in its potential to break encryption systems, Yuan says it’s still not very well known to what extent quantum computers will actually be able to reach their performance promises in practice. “There’s a lot of research that’s going on in post-quantum cryptography, which exists because even quantum computing is not all-powerful. So far, there’s a very specific set of applications in which people have developed algorithms and techniques where a quantum computer can outperform classical computers.” 

An important next step is using Twist to create higher-level quantum programming languages. Most quantum programming languages today still resemble assembly language, stringing together low-level operations without the data types, functions, and other abstractions typical of classical software engineering.

“Quantum computers are error-prone and difficult to program. By introducing and reasoning about the ‘purity’ of program code, Twist takes a big step towards making quantum programming easier by guaranteeing that the quantum bits in a pure piece of code cannot be altered by bits not in that code,” says Fred Chong, the Seymour Goodman Professor of Computer Science at the University of Chicago and chief scientist at Super.tech. 

The work was supported, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and the Office of Naval Research.

Will Quantum Computers be Able to Crack Bitcoin? In 10 Years? Less?



A new study reveals whether quantum computers could crack the complex blockchain cryptography that makes Bitcoin possible, and the answer is… complicated.

Quantum computers could, in theory, crack Bitcoin, but probably not in the near future, as they would have to be about a million times larger than they are today, a report from New Scientist reveals.

So, in practice, the cryptocurrency likely won’t be at risk from quantum computer-wielding hackers for roughly a decade.

Quantum supremacy could put the Bitcoin network at risk

The Bitcoin network uses a series of increasingly complex computations in the blockchain to make transactions. The immense processing power required to make these computations is what keeps crypto wallets secure, but it’s also the reason behind climate concerns over cryptocurrencies. In February last year, for example, an analysis by the University of Cambridge showed that so-called Bitcoin miners use more energy worldwide than entire countries, including Argentina and the Netherlands.

While this energy-intensive process makes it practically impossible for ordinary computers to crack the code used by the Bitcoin network, quantum computers are expected to be orders of magnitude more powerful than today’s classical computers.

What’s more, several companies, including Google and IBM, already claim to have achieved quantum supremacy, a term referring to a quantum computer performing a calculation that would take a classical computer thousands of years.

Cracking the Bitcoin code

These recent breakthroughs in quantum computing are the reason why a team from the University of Sussex, led by Mark Webber, Ph.D., set out to investigate what a quantum computer would need in order to crack the Bitcoin network. 

“The [Bitcoin] transactions get announced and there’s a key associated with that transaction,” Webber told New Scientist. “And there’s a finite window of time that that key is vulnerable and that varies, but it’s usually around 10 minutes to an hour, maybe a day.”

Webber and his team calculated that breaking Bitcoin’s code in this 10-minute window would require a quantum computer with 1.9 billion qubits. Cracking it in an hour would require 317 million qubits, while 13 million qubits would be required to crack it in a day. 
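
As a quick sanity check on the figures quoted above (a sketch using only the numbers reported here, not the paper’s full resource model), the qubit counts scale roughly inversely with the attack window: the product of qubits and hours stays nearly constant.

```python
# Rough consistency check of the qubit counts quoted above: the shorter the
# window in which the key is vulnerable, the more physical qubits are needed,
# scaling approximately as 1/time.
windows_hours = {"10 minutes": 1 / 6, "1 hour": 1.0, "1 day": 24.0}
qubits_reported = {"10 minutes": 1.9e9, "1 hour": 317e6, "1 day": 13e6}

for window, hours in windows_hours.items():
    q = qubits_reported[window]
    print(f"{window:>10}: {q:.2e} qubits, qubits x hours ~ {q * hours:.2e}")
# The product "qubits x hours" comes out around 3e8 in each case, i.e. the
# reported requirements are consistent with an approximately inverse
# relationship between attack time and machine size.
```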

“This large physical qubit requirement implies that the Bitcoin network will be secure from quantum computing attacks for many years (potentially over a decade),” Webber wrote in a paper published in the journal AVS Quantum Science. While that is reassuring for Bitcoin owners, it also highlights the possibility that huge Bitcoin fortunes could become vulnerable in the not-too-distant future. 

IBM’s superconducting quantum computer has only 127 qubits, meaning it would have to be a million times larger to hack Bitcoin. However, the company aims to build a 1000-qubit quantum computing chip called Condor by 2024. The pace of innovation in quantum computing is difficult to predict, but you can bet a Bitcoin that hackers will be keeping an eye on the latest developments.

One Step Closer to Mainstream: Quantum Computing in Silicon Hits 99 Per Cent Accuracy (University of NSW)



Australian researchers have proven that near error-free quantum computing is possible, paving the way to build silicon-based quantum devices compatible with current semiconductor manufacturing technology.

“Today’s publication shows our operations were 99 per cent error-free,” says Professor Andrea Morello of UNSW, who led the work with partners in the US, Japan, Egypt, and at UTS and the University of Melbourne.
“When the errors are so rare, it becomes possible to detect them and correct them when they occur. This shows that it is possible to build quantum computers that have enough scale, and enough power, to handle meaningful computation.”
The team’s goal is building what’s called a ‘universal quantum computer’ that won’t be specific to any one application.
“This piece of research is an important milestone on the journey that will get us there,” Prof. Morello says.
Quantum operations with 99% fidelity – the key to practical quantum computers.

Quantum computing in silicon hits the 99 per cent threshold

Prof. Morello’s paper is one of three published in Nature (“Precision tomography of a three-qubit donor quantum processor in silicon”) that independently confirm that robust, reliable quantum computing in silicon is now a reality. The breakthrough features on the front cover of the journal.
  • Morello et al achieved one-qubit operation fidelities up to 99.95 per cent, and two-qubit fidelity of 99.37 per cent with a three-qubit system comprising an electron and two phosphorus atoms, introduced in silicon via ion implantation.
  • A Delft team in the Netherlands led by Lieven Vandersypen achieved 99.87 per cent one-qubit and 99.65 per cent two-qubit fidelities using electron spins in quantum dots formed in a stack of silicon and silicon-germanium alloy (Si/SiGe).
  • A RIKEN team in Japan led by Seigo Tarucha similarly achieved 99.84 per cent one-qubit and 99.51 per cent two-qubit fidelities in a two-electron system using Si/SiGe quantum dots.
The UNSW and Delft teams certified the performance of their quantum processors using a sophisticated method called gate set tomography, developed at Sandia National Laboratories in the U.S. and made openly available to the research community.
Prof. Morello had previously demonstrated that he could preserve quantum information in silicon for 35 seconds, due to the extreme isolation of nuclear spins from their environment.
“In the quantum world, 35 seconds is an eternity,” says Prof. Morello. “To give a comparison, in the famous Google and IBM superconducting quantum computers the lifetime is about a hundred microseconds – nearly a million times shorter.”
But the trade-off was that isolating the qubits made it seemingly impossible for them to interact with each other, as necessary to perform actual computations.
An artist’s impression of quantum entanglement between three qubits in silicon: the two nuclear spins (red spheres) and one electron spin (shiny ellipse) which wraps around both nuclei. (Image: UNSW/Tony Melov)

Nuclear spins learn to interact accurately

Today’s paper describes how his team overcame this problem by using an electron encompassing two nuclei of phosphorus atoms.
“If you have two nuclei that are connected to the same electron, you can make them do a quantum operation,” says Mateusz Mądzik, one of the lead experimental authors.
“While you don’t operate the electron, those nuclei safely store their quantum information. But now you have the option of making them talk to each other via the electron, to realise universal quantum operations that can be adapted to any computational problem.”
“This really is an unlocking technology,” says Dr Serwan Asaad, another lead experimental author. “The nuclear spins are the core quantum processor. If you entangle them with the electron, the electron can then be moved to another place and entangled with other qubit nuclei further afield, opening the way to making large arrays of qubits capable of robust and useful computations.”
Professor David Jamieson, research leader at the University of Melbourne, says: “The phosphorus atoms were introduced in the silicon chip using ion implantation, the same method used in all existing silicon computer chips. This ensures that our quantum breakthrough is compatible with the broader semiconductor industry.”
All existing computers deploy some form of error correction and data redundancy, but the laws of quantum physics pose severe restrictions on how the correction takes place in a quantum computer. Prof. Morello explains: “You typically need error rates below 1 per cent, in order to apply quantum error correction protocols. Having now achieved this goal, we can start designing silicon quantum processors that scale up and operate reliably for useful calculations.”

Global collaboration key to today’s trifecta

Semiconductor spin qubits in silicon are well-placed to become the platform of choice for reliable quantum computers. They are stable enough to hold quantum information for long periods and can be scaled up using techniques familiar from existing advanced semiconductor manufacturing technology.
“Until now, however, the challenge has been performing quantum logic operations with sufficiently high accuracy,” Prof. Morello says.
“Each of the three papers published today shows how this challenge can be overcome to such a degree that errors can be corrected faster than they appear.”
While the three papers report independent results, they illustrate the benefits that arise from free academic research, and the free circulation of ideas, people and materials. For instance, the silicon and silicon-germanium material used by the Delft and RIKEN groups was grown in Delft and shared between the two groups. The isotopically purified silicon material used by the UNSW group was provided by Professor Kohei Itoh, from Keio University in Japan.
The gate set tomography (GST) method, which was key to quantifying and improving the quantum gate fidelities in the UNSW and Delft papers, was developed at Sandia National Laboratories in the US, and made publicly available. The Sandia team worked directly with the UNSW group to develop methods specific for their nuclear spin system, but the Delft group was able to independently adopt it for its research too.
There has also been significant sharing of ideas through the movement of people between the teams, for example:
  • Dr Mateusz Mądzik, an author on the UNSW paper, is now a postdoctoral researcher with the Delft team.
  • Dr Serwan Asaad, an author on the UNSW paper, was formerly a student at Delft.
  • Prof. Lieven Vandersypen, the leader of the Delft team, spent a five-month sabbatical leave at UNSW in 2016, hosted by Prof. Andrea Morello.
  • The leader of the material growth team, Dr Giordano Scappucci, is a former UNSW researcher.
The UNSW-led paper is the result of a large collaboration, involving researchers from UNSW itself, the University of Melbourne (for the ion implantation), the University of Technology Sydney (for the initial application of the GST method), Sandia National Laboratories (for the invention and refinement of the GST method), and Keio University (for the supply of the isotopically purified silicon material).
Source: University of New South Wales

How Quantum Computing Could Remake Chemistry


Photo credit: Andriy Onufriyenko. Article by Jeanette Garcia.

In my career as a chemist, I owe a huge debt to serendipity. In 2012, I was in the right place (IBM’s Almaden research lab in California) at the right time—and I did the “wrong” thing. I was supposed to be mixing three components in a beaker in the hope of systematically uncovering a combination of chemicals, intending to replace one of the components with a version derived from plastic waste, in an effort to increase the sustainability of thermoset polymers.

Instead, when I mixed two of the reagents together, a hard, white plastic substance formed in the beaker. It was so tough I had to smash the beaker to get it out. Furthermore, when it sat in dilute acid overnight, it reverted to its starting materials. Without meaning to, I had discovered a whole new family of recyclable thermoset polymers. Had I considered it a failed experiment, and not followed up, we would have never known what we had made. It was scientific serendipity at its best, in the noble tradition of Roy Plunkett, who invented Teflon by accident while working on the chemistry of coolant gases.

Today, I have a new goal: to reduce the need for serendipity in chemical discovery. Nature is posing some real challenges in the world, from the ongoing climate crisis to the wake-up call of COVID-19. These challenges are simply too big to rely on serendipity. Nature is complex and powerful, and we need to be able to accurately model it if we want to make the necessary scientific advances.

Specifically, we need to be able to understand the energetics of chemical reactions with a high level of confidence if we want to push the field of chemistry forward. This is not a new insight, but it is one that highlights a major constraint: accurately predicting the behavior of even simple molecules is beyond the capabilities of even the most powerful computers.

This is where quantum computing offers the possibility of major advances in the coming years. Modeling energetic reactions on classical computers requires approximations, since they can’t model the quantum behavior of electrons over a certain system size. Each approximation reduces the value of the model and increases the amount of lab work that chemists have to do to validate and guide the model. Quantum computing, however, is now at the point where it can begin to model the energetics and properties of small molecules such as lithium hydride, LiH—offering the possibility of models that will provide clearer pathways to discovery than we have now.
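
To make “modeling the energetics” concrete at the smallest possible scale, here is a toy variational calculation in plain Python (a sketch of the general variational idea behind algorithms such as VQE, not IBM’s actual quantum-chemistry workflow; the 2x2 Hamiltonian is an arbitrary stand-in rather than a real molecule): a parameterized trial state is tuned until its energy expectation value reaches the true ground-state energy.

```python
# Toy variational ground-state search on a 2x2 Hamiltonian (NumPy only).
# Quantum algorithms such as VQE follow the same pattern, but prepare the
# trial state on qubits instead of in a classical array.
import numpy as np

H = np.array([[-1.0, 0.5],
              [ 0.5, 1.0]])          # small Hermitian "molecular" Hamiltonian (arbitrary numbers)

def trial_state(theta):
    """One-parameter trial wavefunction on a single two-level system."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)>."""
    psi = trial_state(theta)
    return psi @ H @ psi

thetas = np.linspace(0, 2 * np.pi, 2001)
energies = [energy(t) for t in thetas]
best = np.argmin(energies)

print(f"variational minimum : {energies[best]:.6f} at theta = {thetas[best]:.3f}")
print(f"exact ground state  : {np.linalg.eigvalsh(H)[0]:.6f}")
# The two numbers agree: for this toy problem the ansatz can represent the
# exact ground state, which is what chemists hope scaled-up quantum hardware
# will eventually deliver for molecules too large for classical methods.
```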

THE QUANTUM CHEMISTRY LEGACY

Of course, quantum chemistry as a field is nothing new. In the early 20th century, German chemists such as Walter Heitler and Fritz London showed that the covalent bond could be understood using quantum mechanics. In the late 20th century, the growth in computing power available to chemists meant it was practical to do some basic modeling on classical systems.

Even so, when I was getting my Ph.D. in the mid-2000s at Boston College, it was relatively rare that bench chemists had a working knowledge of the kind of chemical modeling that was available via computational approaches such as density functional theory (DFT). The disciplines (and skill sets involved) were orthogonal. Instead of exploring the insights of DFT, bench chemists stuck to systematic approaches combined with a hope for an educated but often lucky discovery. I was fortunate enough to work in the research group of Professor Amir Hoveyda, who was early to recognize the value of combining experimental research with theoretical research.

THE DISCONTENTS OF COARSE DATA

Today, theoretical research and modeling chemical reactions to understand experimental results is commonplace, as the theoretical discipline became more sophisticated and bench chemists gradually began to incorporate these models into their work. The output of the models provides a useful feedback loop for in-lab discovery. To take one example, the explosion of available chemical data from high throughput screening has allowed for the creation of well-developed chemical models. Industrial uses of these models include drug discovery and material experimentation.

The limiting factor of these models, however, is the need to simplify. At each stage of the simulation, you have to pick a certain area where you want to make your compromise on accuracy in order to stay within the bounds of what the computer can practically handle. In the terminology of the field, you are working with “coarse-grained” models—where you deliberately simplify the known elements of the reaction in order to prioritize accuracy in the areas you are investigating. Each simplification reduces the overall accuracy of your model and limits its usefulness in the pursuit of discovery. To put it bluntly, the coarser your data, the more labor intensive your lab work.

The quantum approach is different. At its purest, quantum computing lets you model nature as it is; no approximations. In the oft-quoted words of Richard Feynman, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical.”

We’ve seen rapid advances in the power of quantum computers in recent years. IBM doubled its quantum volume not once but twice in 2020 and is on course to reach quantum volume of more than 1,000, compared with single-digit figures in 2016. Others in the industry have also made bold claims about the power and capabilities of their machines.

So far, we have extended the use of quantum computers to model energies related to the ground states and excited states of molecules. These types of calculations will lead us to be able to explore reaction energy landscapes and photo-reactive molecules. In addition, we’ve explored using them to model the dipole moment in small molecules, a step in the direction of understanding electronic distribution and polarizability of molecules, which can also tell us something about how they react.

Looking ahead, we’ve started laying the foundation for future modeling of chemical systems using quantum computers and have been exploring different types of calculations on different types of molecules solvable on a quantum computer today. For example, what happens when you have an unpaired electron in the system? Do the calculations lose fidelity, and how can we adjust the algorithm to get them to match the expected results? This type of work will enable us to someday look at radical species, which can be notoriously difficult to analyze in the lab or simulate classically.

To be sure, this work is all replicable on classical computers. Still, none of it would have been possible with the quantum technology that existed five years ago. The progress in recent years holds out the promise that quantum computing can serve as a powerful catalyst for chemical discovery in the near future.

QUANTUM MEETS CLASSICAL

I don’t envision a future where chemists simply plug algorithms into a quantum device and are given a clear set of data for immediate discovery in the lab. What is feasible—and may already be possible—would be incorporating quantum models as a step in the existing processes that currently rely on classical computers.

In this approach, we use classical methods for the computationally intensive part of a model. This could include an enzyme, a polymer chain or a metal surface. We then apply a quantum method to model distinct interactions—such as the chemistry in the enzyme pocket, explicit interactions between a solvent molecule and a polymer chain, or hydrogen bonding in a small molecule. We would still accept approximations in certain parts of the model but would achieve much greater accuracy in the most distinct parts of the reaction. We have already made important progress through studying the possibility of embedding quantum electronic structure calculation into a classically computed environment obtained at the Hartree-Fock (HF) or DFT level of theory.

The practical applications of advancing this approach are numerous and impactful. More rapid advances in the field of polymer chains could help address the problem of plastic pollution, which has grown more acute since China has cut its imports of recyclable material. The energy costs of domestic plastic recycling remain relatively high; if we can develop plastics that are easier to recycle, we could make a major dent in plastic waste. Beyond the field of plastics, the need for materials with lower carbon emissions is ever more pressing, and the ability to manufacture substances such as jet fuel and concrete with a smaller carbon footprint is crucial to reducing total global emissions.

MODELING THE FUTURE

The next generation of chemists emerging from grad schools across the world brings a level of data fluency that would have been unimaginable in the 2000s. But the constraints on this fluency are physical: classically built computers simply cannot handle the level of complexity of substances as commonplace as caffeine. In this dynamic, no amount of data fluency can obviate the need for serendipity: you will be working in a world where you need luck on your side to make important advances. The development of— and embrace of—quantum computers is therefore crucial to the future practice of chemists.

This is an opinion and analysis article.

Monitoring Cancer at the Nano-Level – University of Waterloo



Tapered nanowire array device design. Credit: Nature Nanotechnology (2019). DOI: 10.1038/s41565-019-0393-2

How a new quantum sensor could improve cancer treatment

The development of medical imaging and monitoring methods has profoundly impacted the diagnosis and treatment of cancer. These non-invasive techniques allow health care practitioners to look for cancer in the body and determine if treatment is working.

But current techniques have limitations; namely, tumours need to be a specific size to be visible. Being able to detect cancer cells, even before there are enough to form a tumour, is a challenge that researchers around the world are looking to solve.

The solution may lie in nanotechnology

Researchers at the University of Waterloo’s Institute for Quantum Computing (IQC) have developed a quantum sensor that promises to outperform existing technologies in monitoring the success of cancer treatments.

Artist’s rendering of the interaction of incident single photon pulses and a tapered semiconductor nanowire array photodetector.

“A sensor needs to be very efficient at detecting light,” explains principal investigator Michael Reimer, an IQC faculty member and professor in the Faculty of Engineering. “What’s unique about our sensor is that the light can be absorbed all the way, from UV to infrared. No commercially available device exists that can do that now.”

 

Current sensors reflect some of the light and, depending on the material, this reflection can mean up to 30 percent of the light is not absorbed.

This next-generation quantum sensor designed in Reimer’s lab is very efficient and can detect light at the fundamental limit — a single photon — and refresh for the next one within nanoseconds. Researchers created an array of tapered nanowires that turn incoming photons into electric current that can be amplified and detected.

When applied to dose monitoring in cancer treatment, this enhanced ability to detect every photon means that a health practitioner could monitor the dose being given with incredible precision — ensuring enough is administered to kill the cancer cells, but not too much that it also kills healthy cells.

Moving quantum technology beyond the lab

Reimer published his findings in Nature Nanotechnology in March and is now working on a prototype to begin testing outside of his lab. Reimer’s goal is to commercialize the sensor in the next three to five years.

“I enjoy the fundamental research, but I’m also interested in bringing my research out of the lab and into the real world and making an impact to society,” says Reimer.

He is no stranger to bringing quantum technology to the marketplace. While completing his postdoctoral work at the Delft University of Technology in the Netherlands, Reimer was an integral part of the startup Single Quantum, developing highly efficient single-photon detectors based on superconducting nanowires.

Reimer’s latest sensor has a wide range of applications beyond dose monitoring for cancer treatments. The technology also has the ability to significantly improve high-speed imaging from space and long-range, high-resolution 3D imaging.

“A broad range of industries and research fields will benefit from a quantum sensor with these capabilities,” said Reimer. “It impacts quantum communication to quantum lidar to biological applications. Anywhere you have photon-starved situations, you would want an efficient sensor.”

He is exploring all industries and opportunities to put this technology to use.

Breakthroughs come in unexpected places

After earning his undergraduate degree in physics at the University of Waterloo, Reimer moved to Germany to play professional hockey. While taking graduate courses at the Technical University of Munich, he met a professor of nanotechnology who sparked his interest in the field.

“I played hockey and science was my hobby,” says Reimer. “Science is still my hobby, and it’s amazing that it is now my job.” Reimer went on to complete his PhD at the University of Ottawa/National Research Council of Canada, and turned his attention to quantum light sources. Reimer is an internationally renowned expert in quantum light sources and sensors. The idea for the quantum sensor came from his initial research in quantum light sources.

“To get the light out from the quantum light source, we had to come up with a way that you don’t have reflections, so we made this tapered shape. We realized that if we can get the light out that way we could also do the reverse — that’s where the idea for the sensor came from.”

Reimer will be at the Waterloo Innovation Summit on October 1 to present his latest breakthrough and its potential impact on the health care sector. And while he works to bring the sensor to market, Reimer’s lab continues to push the boundaries of quantum photonics.

From discovering the path to perfect photon entanglement to developing novel solid-state quantum devices, Reimer’s research is advancing technologies that could disrupt a multitude of industries and research fields.

Scientists ‘Reverse Time’ with a Quantum Computer


“Now the thing about time is that time isn’t really real.

It’s just your point of view, how does it feel to you?

Einstein said we could never understand it all … “

James Taylor ~ “The Secret of Life”

Researchers from the Moscow Institute of Physics and Technology teamed up with colleagues from the U.S. and Switzerland and returned the state of a quantum computer a fraction of a second into the past.

They also calculated the probability that an electron in empty interstellar space will spontaneously travel back into its recent past. The study is published in Scientific Reports.

“This is one in a series of papers on the possibility of violating the second law of thermodynamics. That law is closely related to the notion of the arrow of time that posits the one-way direction of time from the past to the future,” said the study’s lead author Gordey Lesovik, who heads the Laboratory of the Physics of Quantum Information Technology at MIPT.

“We began by describing a so-called local perpetual motion machine of the second kind. Then, in December, we published a paper that discusses the violation of the second law via a device called a Maxwell’s demon,” Lesovik said. “The most recent paper approaches the same problem from a third angle: We have artificially created a state that evolves in a direction opposite to that of the thermodynamic arrow of time.”

What makes the future different from the past

Most laws of physics make no distinction between the future and the past. For example, let an equation describe the collision and rebound of two identical billiard balls. If a close-up of that event is recorded with a camera and played in reverse, it can still be represented by the same equation. Moreover, it is not possible to distinguish from the recording if it has been doctored. Both versions look plausible. It would appear that the billiard balls defy the intuitive sense of time.

However, imagine recording a cue ball breaking the pyramid, the billiard balls scattering in all directions. In that case, it is easy to distinguish the real-life scenario from reverse playback. What makes the latter look so absurd is our intuitive understanding of the second law of thermodynamics—an isolated system either remains static or evolves toward a state of chaos rather than order.

Most other laws of physics do not prevent rolling billiard balls from assembling into a pyramid, infused tea from flowing back into the tea bag, or a volcano from “erupting” in reverse.

But these phenomena are not observed, because they would require an isolated system to assume a more ordered state without any outside intervention, which runs contrary to the second law. The nature of that law has not been explained in full detail, but researchers have made great headway in understanding the basic principles behind it.


Spontaneous time reversal

Quantum physicists from MIPT decided to check if time could spontaneously reverse itself at least for an individual particle and for a tiny fraction of a second. That is, instead of colliding billiard balls, they examined a solitary electron in empty interstellar space.

“Suppose the electron is localized when we begin observing it. This means that we’re pretty sure about its position in space. The laws of quantum mechanics prevent us from knowing it with absolute precision, but we can outline a small region where the electron is localized,” says study co-author Andrey Lebedev from MIPT and ETH Zurich.

The physicist explains that the evolution of the electron state is governed by Schrödinger’s equation. Although it makes no distinction between the future and the past, the region of space containing the electron will spread out very quickly. That is, the system tends to become more chaotic. The uncertainty of the electron’s position is growing. This is analogous to the increasing disorder in a large-scale system—such as a billiard table—due to the second law of thermodynamics.

The four stages of the actual experiment on a quantum computer mirror the stages of the thought experiment involving an electron in space and the imaginary analogy with billiard balls. Each of the three systems initially evolves from order toward chaos, but then a perfectly timed external disturbance reverses this process. Credit: @tsarcyanide/MIPT

“However, Schrödinger’s equation is reversible,” adds Valerii Vinokur, a co-author of the paper, from the Argonne National Laboratory, U.S.

“Mathematically, it means that under a certain transformation called complex conjugation, the equation will describe a ‘smeared’ electron localizing back into a small region of space over the same time period.”

Although this phenomenon is not observed in nature, it could theoretically happen due to a random fluctuation in the cosmic microwave background permeating the universe.

The team set out to calculate the probability of observing an electron that had “smeared out” over a fraction of a second spontaneously localizing back into its recent past. It turned out that even across the entire lifetime of the universe—13.7 billion years—observing 10 billion freshly localized electrons every second, the reverse evolution of the particle’s state would only happen once. And even then, the electron would travel no more than a mere one ten-billionth of a second into the past.
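
The conjugation trick described above is easy to check numerically (a minimal sketch with NumPy and SciPy, not the MIPT group’s calculation or experiment): for a real-valued Hamiltonian, evolving forward, complex-conjugating the state, and evolving forward again restores the initial probability distribution.

```python
# Minimal numerical check of time reversal via complex conjugation.
# For a real-valued Hamiltonian H, applying exp(-iHt) to the conjugated state
# undoes the earlier forward evolution (up to an overall conjugation).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                      # real symmetric Hamiltonian, so H* = H

t = 1.7
U = expm(-1j * H * t)                  # forward time-evolution operator

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                          # start "localized" in one basis state (the ordered stage)

psi_spread = U @ psi0                  # the state spreads out / order degrades
psi_back = U @ psi_spread.conj()       # conjugate (the "kick"), then evolve forward again

print(np.round(np.abs(psi_spread) ** 2, 3))   # probability smeared over many basis states
print(np.round(np.abs(psi_back) ** 2, 3))     # probability back to [1, 0, 0, ...]: order restored
```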

Large-scale phenomena involving billiard balls and volcanoes obviously unfold on much greater timescales and feature an astounding number of electrons and other particles. This explains why we do not observe old people growing younger or an ink blot separating from the paper.

Reversing time on demand

The researchers then attempted to reverse time in a four-stage experiment. Instead of an electron, they observed the state of a quantum computer made of two and later three basic elements called superconducting qubits.

  • Stage 1: Order. Each qubit is initialized in the ground state, denoted as zero. This highly ordered configuration corresponds to an electron localized in a small region, or a rack of billiard balls before the break.
  • Stage 2: Degradation. The order is lost. Just like the electron is smeared out over an increasingly large region of space, or the rack is broken on the pool table, the state of the qubits becomes an ever more complex changing pattern of zeros and ones. This is achieved by briefly launching the evolution program on the quantum computer. Actually, a similar degradation would occur by itself due to interactions with the environment. However, the controlled program of autonomous evolution will enable the last stage of the experiment.
  • Stage 3: Time reversal. A special program modifies the state of the quantum computer in such a way that it would then evolve “backwards,” from chaos toward order. This operation is akin to the random microwave background fluctuation in the case of the electron, but this time, it is deliberately induced. An obviously far-fetched analogy for the billiards example would be someone giving the table a perfectly calculated kick.
  • Stage 4: Regeneration. The evolution program from the second stage is launched again. Provided that the “kick” has been delivered successfully, the program does not result in more chaos but rather rewinds the state of the qubits back into the past, the way a smeared electron would be localized or the billiard balls would retrace their trajectories in reverse playback, eventually forming a triangle.

The researchers found that in 85 percent of the cases, the two-qubit quantum computer returned back into the initial state. When three qubits were involved, more errors happened, resulting in a roughly 50 percent success rate. According to the authors, these errors are due to imperfections in the actual quantum computer. As more sophisticated devices are designed, the error rate is expected to drop.

Interestingly, the time reversal algorithm itself could prove useful for making quantum computers more precise. “Our algorithm could be updated and used to test programs written for computers and eliminate noise and errors,” Lebedev explained.

More information: Scientific Reports (2019). DOI: 10.1038/s41598-019-40765-6

Provided by Moscow Institute of Physics and Technology


The US and China are in a Quantum Arms Race that will Transform Future Warfare



Radar that can spot stealth aircraft and other quantum innovations could give their militaries a strategic edge

In the 1970s, at the height of the Cold War, American military planners began to worry about the threat to US warplanes posed by new, radar-guided missile defenses in the USSR and other nations. In response, engineers at places like US defense giant Lockheed Martin’s famous “Skunk Works” stepped up work on stealth technology that could shield aircraft from the prying eyes of enemy radar.

The innovations that resulted include unusual shapes that deflect radar waves—like the US B-2 bomber’s “flying wing” design (above)—as well as carbon-based materials and novel paints. Stealth technology isn’t yet a Harry Potter–like invisibility cloak: even today’s most advanced warplanes still reflect some radar waves. But these signals are so small and faint they get lost in background noise, allowing the aircraft to pass unnoticed.

China and Russia have since gotten stealth aircraft of their own, but America’s are still better. They have given the US the advantage in launching surprise attacks in campaigns like the war in Iraq that began in 2003.

This advantage is now under threat. In November 2018, China Electronics Technology Group Corporation (CETC), China’s biggest defense electronics company, unveiled a prototype radar that it claims can detect stealth aircraft in flight. The radar uses some of the exotic phenomena of quantum physics to help reveal planes’ locations.

It’s just one of several quantum-inspired technologies that could change the face of warfare. As well as unstealthing aircraft, they could bolster the security of battlefield communications and affect the ability of submarines to navigate the oceans undetected. The pursuit of these technologies is triggering a new arms race between the US and China, which sees the emerging quantum era as a once-in-a-lifetime opportunity to gain the edge over its rival in military tech.

Stealth spotter

How quickly quantum advances will influence military power will depend on the work of researchers like Jonathan Baugh. A professor at the University of Waterloo in Canada, Baugh is working on a device that’s part of a bigger project to develop quantum radar. Its intended users: stations in the Arctic run by the North American Aerospace Defense Command, or NORAD, a joint US-Canadian organization.

Baugh’s machine generates pairs of photons that are “entangled”—a phenomenon that means the particles of light share a single quantum state. A change in one photon immediately influences the state of the other, even if they are separated by vast distances.

Quantum radar operates by taking one photon from every pair generated and firing it out in a microwave beam. The other photon from each pair is held back inside the radar system.

Equipment from a prototype quantum radar system made by China Electronics Technology Group Corporation. Image: Imaginechina via AP Images

Only a few of the photons sent out will be reflected back if they hit a stealth aircraft. A conventional radar wouldn’t be able to distinguish these returning photons from the mass of other incoming ones created by natural phenomena—or by radar-jamming devices. But a quantum radar can check for evidence that incoming photons are entangled with the ones held back. Any that are must have originated at the radar station. This enables it to detect even the faintest of return signals in a mass of background noise.
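
One way to see why holding back one photon of each pair helps is a toy density-matrix calculation (a sketch, not how a real quantum radar receiver processes noisy microwave returns): the overlap of a returning photon’s joint state with the retained entangled pair acts as a witness, and a genuine echo can exceed a threshold that no uncorrelated background photon can reach.

```python
# Toy illustration of distinguishing "our" returning photons from background:
# fidelity with the retained entangled pair acts as an entanglement witness.
import numpy as np

def dm(state):
    """Density matrix of a pure two-qubit state (retained idler, returning signal)."""
    state = state / np.linalg.norm(state)
    return np.outer(state, state.conj())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # retained idler + transmitted signal photon
rho_echo = dm(bell)                          # a genuine reflected partner photon

# Background or jamming photon: completely uncorrelated with the retained idler,
# modeled here as the maximally mixed two-qubit state.
rho_background = np.eye(4) / 4

def bell_fidelity(rho):
    """Overlap <Bell| rho |Bell>; a value above 1/2 certifies entanglement with the retained photon."""
    return np.real(bell @ rho @ bell)

print(f"genuine echo   : fidelity = {bell_fidelity(rho_echo):.2f}")        # 1.00
print(f"background hit : fidelity = {bell_fidelity(rho_background):.2f}")  # 0.25, below the 0.5 threshold
```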

Baugh cautions that there are still big engineering challenges. These include developing highly reliable streams of entangled photons and building extremely sensitive detectors. It’s hard to know if CETC, which already claimed in 2016 that its radar could detect objects up to 100 kilometers (62 miles) away, has solved these challenges; it’s keeping the technical details of its prototype a secret.

Seth Lloyd, an MIT professor who developed the theory underpinning quantum radar, says that in the absence of hard evidence, he’s skeptical of the Chinese company’s claims. But, he adds, the potential of quantum radar isn’t in doubt. When a fully functioning device is finally deployed, it will mark the beginning of the end of the stealth era.

China’s ambitions

CETC’s work is part of a long-term effort by China to turn itself into a world leader in quantum technology. The country is providing generous funding for new quantum research centers at universities and building a national research center for quantum science that’s slated to open in 2020. China has already leaped ahead of the US in registering patents in quantum communications and cryptography.

A study of China’s quantum strategy published in September 2018 by the Center for a New American Security (CNAS), a US think tank, noted that the Chinese People’s Liberation Army (PLA) is recruiting quantum specialists, and that big defense companies like China Shipbuilding Industry Corporation (CSIC) are setting up joint quantum labs at universities. Working out exactly which projects have a military element to them is hard, though. “There’s a degree of opacity and ambiguity here, and some of that may be deliberate,” says Elsa Kania, a coauthor of the CNAS study.

China’s efforts are ramping up just as fears are growing that the US military is losing its competitive edge. A commission tasked by Congress to review the Trump administration’s defense strategy issued a report in November 2018 warning that the US margin of superiority “is profoundly diminished in key areas” and called for more investment in new battlefield technologies.

One of those technologies is likely to be quantum communication networks. Chinese researchers have already built a satellite that can send quantum-encrypted messages between distant locations, as well as a terrestrial network that stretches between Beijing and Shanghai. Both projects were developed by scientific researchers, but the know-how and infrastructure could easily be adapted for military use.

The networks rely on an approach known as quantum key distribution (QKD). Messages are encoded in the form of classical bits, and the cryptographic keys needed to decode them are sent as quantum bits, or qubits. These qubits are typically photons that can travel easily across fiber-optic networks or through the atmosphere. If an enemy tries to intercept and read the qubits, this immediately destroys their delicate quantum state, wiping out the information they carry and leaving a telltale sign of an intrusion.
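
The interception-leaves-a-trace property can be illustrated with a toy BB84-style simulation (a sketch of the textbook protocol; real QKD systems, including the Chinese networks described here, involve far more engineering): an intercept-and-resend eavesdropper unavoidably introduces errors of roughly 25 percent in the positions the sender and receiver later compare.

```python
# Toy BB84-style key exchange (classical simulation). With no eavesdropper,
# positions where Alice and Bob chose the same basis agree perfectly; an
# intercept-resend eavesdropper introduces ~25% errors in those positions.
import random

random.seed(1)
N = 20000

def run(eavesdrop: bool) -> float:
    errors = kept = 0
    for _ in range(N):
        bit = random.randint(0, 1)            # Alice's key bit
        basis_a = random.randint(0, 1)        # Alice's encoding basis
        value, basis = bit, basis_a           # state of the "photon" in flight

        if eavesdrop:                         # Eve measures and resends
            basis_e = random.randint(0, 1)
            if basis_e != basis:              # wrong basis -> random outcome
                value = random.randint(0, 1)
            basis = basis_e                   # photon re-prepared in Eve's basis

        basis_b = random.randint(0, 1)        # Bob's measurement basis
        result = value if basis_b == basis else random.randint(0, 1)

        if basis_b == basis_a:                # only same-basis rounds are kept
            kept += 1
            errors += (result != bit)
    return errors / kept

print(f"error rate without eavesdropper: {run(False):.3f}")  # ~0.000
print(f"error rate with eavesdropper   : {run(True):.3f}")   # ~0.250 -> intrusion detected
```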

QKD technology isn’t totally secure yet. Long ground networks require way stations similar to the repeaters that boost signals along an ordinary data cable. At these stations, the keys are decoded into classical form before being re-encoded in a quantum form and sent to the next station. While the keys are in classical form, an enemy could hack in and copy them undetected.

To overcome this issue, a team of researchers at the US Army Research Laboratory in Adelphi, Maryland, is working on an approach called quantum teleportation. This involves using entanglement to transfer data between a qubit held by a sender and another held by a receiver, using what amounts to a kind of virtual, one-time-only quantum data cable.
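
What “using entanglement to transfer data between qubits” means can be sketched with a bare-bones statevector simulation of textbook one-qubit teleportation (a toy in NumPy; it says nothing about the Army Research Laboratory’s fiber-based implementation): two measurements on the sender’s side destroy the original state, and two conditional corrections recreate it exactly on the receiver’s qubit.

```python
# Toy statevector simulation of one-qubit quantum teleportation (NumPy only).
# Qubit ordering: |q0 q1 q2> = |message, sender's half of the pair, receiver's half>.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]])       # projector |0><0|
P1 = np.array([[0, 0], [0, 1]])       # projector |1><1|

def kron3(a, b, c):
    """Three-qubit operator from single-qubit operators."""
    return np.kron(a, np.kron(b, c))

rng = np.random.default_rng(7)
message = rng.normal(size=2) + 1j * rng.normal(size=2)
message /= np.linalg.norm(message)                  # arbitrary qubit state to teleport

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # entangled pair shared in advance
psi = np.kron(message, bell)                        # full 3-qubit state

# Sender's operations: CNOT (q0 controls q1), then a Hadamard on q0.
cnot01 = kron3(P0, I2, I2) + kron3(P1, X, I2)
psi = kron3(H, I2, I2) @ (cnot01 @ psi)

# Measure q0 and q1: pick an outcome pair with the right probability, collapse q2.
amps = psi.reshape(2, 2, 2)                          # indices (q0, q1, q2)
probs = np.sum(np.abs(amps) ** 2, axis=2).ravel()    # probability of each (q0, q1) outcome
outcome = rng.choice(4, p=probs)
m0, m1 = divmod(outcome, 2)

receiver = amps[m0, m1] / np.linalg.norm(amps[m0, m1])   # collapsed state of q2

# Receiver applies the classically communicated corrections: X if m1, then Z if m0.
if m1:
    receiver = X @ receiver
if m0:
    receiver = Z @ receiver

print("measurement result sent to receiver:", m0, m1)
print("overlap with original message:", round(np.abs(np.vdot(message, receiver)) ** 2, 6))  # -> 1.0
```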

Michael Brodsky, one of the researchers, says he and his colleagues have been working on a number of technical challenges, including finding ways to ensure that the qubits’ delicate quantum state isn’t disrupted during transmission through fiber-optic networks. The technology is still confined to a lab, but the team says it’s now robust enough to be tested outside. “The racks can be put on trucks, and the trucks can be moved to the field,” explains Brodsky.

It may not be long before China is testing its own quantum teleportation system. Researchers are already building the fiber-optic network for one that will stretch from the city of Zhuhai, near Macau, to some islands in Hong Kong.

Quantum compass

Researchers are also exploring using quantum approaches to deliver more accurate and foolproof navigation tools to the military. US aircraft and naval vessels already rely on precise atomic clocks to help keep track of where they are. But they also count on signals from the Global Positioning System (GPS), a network of satellites orbiting Earth. This poses a risk because an enemy could falsify, or “spoof,” GPS signals—or jam them altogether.

Lockheed Martin thinks American sailors could use a quantum compass based on microscopic synthetic diamonds with atomic flaws known as nitrogen-vacancy centers, or NV centers. These quantum defects in the diamond lattice can be harnessed to form an extremely accurate magnetometer. Shining a laser on diamonds with NV centers makes them emit light at an intensity that varies according to the surrounding magnetic field.

Ned Allen, Lockheed’s chief scientist, says the magnetometer is great at detecting magnetic anomalies—distinctive variations in Earth’s magnetic field caused by magnetic deposits or rock formations. There are already detailed maps of these anomalies made by satellite and terrestrial surveys. By comparing anomalies detected using the magnetometer against these maps, navigators can determine where they are. Because the magnetometer also indicates the orientation of magnetic fields, ships and submarines can use them to work out which direction they are heading.
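
The map-matching idea behind such navigation can be sketched without any quantum hardware at all (a toy one-dimensional illustration with synthetic data; real systems match magnetometer readings against full geomagnetic survey maps): slide the short anomaly profile measured along the vehicle’s track across the stored map and take the best-fitting position.

```python
# Toy 1-D magnetic-anomaly map matching: locate a vessel by comparing the
# anomaly profile it measures along its track against a stored survey map.
import numpy as np

rng = np.random.default_rng(42)

# Stored survey map: magnetic anomaly (arbitrary units) along a 1000 km track.
map_anomaly = np.cumsum(rng.normal(size=1000))     # smooth-ish synthetic profile

true_position = 617                                # km where the vessel actually is
track_length = 40                                  # km of profile measured with the magnetometer
measured = map_anomaly[true_position:true_position + track_length]
measured = measured + rng.normal(scale=0.5, size=track_length)   # sensor noise

# Slide the measured profile along the map and score each candidate position
# by mean squared mismatch; the best fit is the position estimate.
scores = [
    np.mean((map_anomaly[p:p + track_length] - measured) ** 2)
    for p in range(len(map_anomaly) - track_length)
]
estimate = int(np.argmin(scores))

print(f"true position     : {true_position} km")
print(f"estimated position: {estimate} km")   # should recover the true position, or land within a few km
```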

China’s military is clearly worried about threats to its own version of GPS, known as BeiDou. Research into quantum navigation and sensing technology is under way at various institutes across the country, according to the CNAS report.

As well as being used for navigation, magnetometers can also detect and track the movement of large metallic objects, like submarines, by fluctuations they cause in local magnetic fields. Because they are very sensitive, the magnetometers are easily disrupted by background noise, so for now they are used for detection only at very short distances. But last year, the Chinese Academy of Sciences let slip that some Chinese researchers had found a way to compensate for this using quantum technology. That might mean the devices could be used in the future to spot submarines at much longer ranges.

A tight race

It’s still early days for militaries’ use of quantum technologies. There’s no guarantee they will work well at scale, or in conflict situations where absolute reliability is essential. But if they do succeed, quantum encryption and quantum radar could make a particularly big impact. Code-breaking and radar helped change the course of World War II. Quantum communications could make stealing secret messages much harder, or impossible. Quantum radar would render stealth planes as visible as ordinary ones. Both things would be game-changing.

It’s also too early to tell whether it will be China or the US that comes out on top in the quantum arms race—or whether it will lead to a Cold War–style stalemate. But the money China is pouring into quantum research is a sign of how determined it is to take the lead.

China has also managed to cultivate close working relationships between government research institutes, universities, and companies like CSIC and CETC. The US, by comparison, has only just passed legislation to create a national plan for coordinating public and private efforts. The delay in adopting such an approach has led to a lot of siloed projects and could slow the development of useful military applications. “We’re trying to get the research community to take more of a systems approach,” says Brodsky, the US army quantum expert.


U.S. Leads World in Quantum Computing Patent Filings with IBM Leading the Charge

Still, the US military does have some distinct advantages over the PLA. The Department of Defense has been investing in quantum research for a very long time, as have US spy agencies. The knowledge generated helps explain why US companies lead in areas like the development of powerful quantum computers, which harness entangled qubits to generate immense amounts of processing power.

The American military can also tap into work being done by its allies and by a vibrant academic research community at home. Baugh’s radar research, for instance, is funded by the Canadian government, and the US is planning a joint research initiative with its closest military partners—Canada, the UK, Australia, and New Zealand—in areas like quantum navigation.

All this has given the US a head start in the quantum arms race. But China’s impressive effort to turbocharge quantum research means the gap between them is closing fast.

MIT: Physicists record ‘lifetime’ of graphene qubits – Foundation for Advancing Quantum Computing


 

Researchers from MIT and elsewhere have recorded, for the first time, the “temporal coherence” of a graphene qubit, meaning how long it can maintain a special state that allows it to represent two logical states simultaneously.

The demonstration, which used a new kind of graphene-based qubit, represents a critical step forward for practical quantum computing, the researchers say.

Superconducting quantum bits (simply, qubits) are artificial atoms that use various methods to produce bits of quantum information, the fundamental component of quantum computers. Similar to traditional binary circuits in computers, qubits can maintain one of two states corresponding to the classic binary bits, a 0 or 1.

But these qubits can also be a superposition of both states simultaneously, which could allow quantum computers to solve complex problems that are practically impossible for traditional computers.

The amount of time that these qubits stay in this superposition state is referred to as their “coherence time.” The longer the coherence time, the greater the ability for the qubit to compute complex problems.

Recently, researchers have been incorporating graphene-based materials into superconducting quantum computing devices, which promise faster, more efficient computing, among other perks.

Until now, however, there’s been no recorded coherence for these advanced qubits, so there’s no knowing if they’re feasible for practical quantum computing. In a paper published today in Nature Nanotechnology, the researchers demonstrate, for the first time, a coherent qubit made from graphene and exotic materials.

These materials enable the qubit to change states through voltage, much like transistors in today’s traditional computer chips — and unlike most other types of superconducting qubits. Moreover, the researchers put a number to that coherence, clocking it at 55 nanoseconds, before the qubit returns to its ground state.
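
For a concrete sense of how a number like that is extracted (a generic curve-fitting sketch with made-up data, not the group’s actual measurement protocol or analysis), coherence is typically probed at increasing delays and the resulting decay is fitted with an exponential whose time constant is the quoted coherence time.

```python
# Toy extraction of a coherence time: simulate decaying superposition-survival
# data versus delay, then fit an exponential to recover the time constant.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

true_T = 55.0                                   # nanoseconds, matching the figure quoted above
delays = np.linspace(0, 300, 40)                # ns
signal = np.exp(-delays / true_T) + rng.normal(scale=0.02, size=delays.size)

def decay(t, T):
    return np.exp(-t / T)

(T_fit,), _ = curve_fit(decay, delays, signal, p0=[30.0])
print(f"fitted coherence time: {T_fit:.1f} ns")  # ~55 ns
```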

The work combined expertise from co-authors William D. Oliver, a physics professor of the practice and Lincoln Laboratory Fellow whose work focuses on quantum computing systems, and Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT who researches innovations in graphene.

“Our motivation is to use the unique properties of graphene to improve the performance of superconducting qubits,” says first author Joel I-Jan Wang, a postdoc in Oliver’s group in the Research Laboratory of Electronics (RLE) at MIT.

“In this work, we show for the first time that a superconducting qubit made from graphene is temporally quantum coherent, a key requisite for building more sophisticated quantum circuits. Ours is the first device to show a measurable coherence time — a primary metric of a qubit — that’s long enough for humans to control.”

There are 14 other co-authors, including Daniel Rodan-Legrain, a graduate student in Jarillo-Herrero’s group who contributed equally to the work with Wang; MIT researchers from RLE, the Department of Physics, the Department of Electrical Engineering and Computer Science, and Lincoln Laboratory; and researchers from the Laboratory of Irradiated Solids at the École Polytechnique and the Advanced Materials Laboratory of the National Institute for Materials Science.

A pristine graphene sandwich

Superconducting qubits rely on a structure known as a “Josephson junction,” where an insulator (usually an oxide) is sandwiched between two superconducting materials (usually aluminum).

In traditional tunable qubit designs, a current loop creates a small magnetic field that causes electrons to hop back and forth between the superconducting materials, causing the qubit to switch states.

But this flowing current consumes a lot of energy and causes other issues. Recently, a few research groups have replaced the insulator with graphene, an atom-thick layer of carbon that’s inexpensive to mass produce and has unique properties that might enable faster, more efficient computation.

To fabricate their qubit, the researchers turned to a class of materials called van der Waals materials: atomically thin materials that can be stacked like Lego bricks on top of one another, with little to no resistance or damage.

These materials can be stacked in specific ways to create various electronic systems. Despite their near-flawless surface quality, only a few research groups have ever applied van der Waals materials to quantum circuits, and none have previously been shown to exhibit temporal coherence.

For their Josephson junction, the researchers sandwiched a sheet of graphene in between the two layers of a van der Waals insulator called hexagonal boron nitride (hBN). Importantly, graphene takes on the superconductivity of the superconducting materials it touches.

The selected van der Waals materials can be made to usher electrons around using voltage, instead of the traditional current-based magnetic field. Therefore, so can the graphene — and so can the entire qubit.
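A minimal sketch of that contrast, assuming the textbook flux dependence of a conventional tunable junction and a purely illustrative, monotonic gate-voltage dependence for the graphene junction (the real device physics is more involved than this placeholder):

import numpy as np

# Conventional tunable qubit: a magnetic flux Phi threading a current loop
# tunes the Josephson energy as E_J(Phi) = E_J_max * |cos(pi * Phi / Phi_0)|.
def flux_tuned_josephson_energy(phi_over_phi0, e_j_max=1.0):
    return e_j_max * np.abs(np.cos(np.pi * phi_over_phi0))

# Graphene junction (illustrative placeholder): a gate voltage changes the
# carrier density in the graphene, which changes the junction strength,
# with no current loop required.
def gate_tuned_josephson_energy(gate_voltage, e_j_max=1.0, v_scale=1.0):
    return e_j_max * np.tanh(abs(gate_voltage) / v_scale)

print(flux_tuned_josephson_energy(0.25), gate_tuned_josephson_energy(0.5))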

 

When voltage gets applied to the qubit, electrons bounce back and forth between two superconducting leads connected by graphene, changing the qubit from ground (0) to excited or superposition state (1). The bottom hBN layer serves as a substrate to host the graphene.

The top hBN layer encapsulates the graphene, protecting it from any contamination. Because the materials are so pristine, the traveling electrons never interact with defects. This represents the ideal “ballistic transport” for qubits, where a majority of electrons move from one superconducting lead to another without scattering with impurities, making a quick, precise change of states.

How voltage helps

The work can help tackle the qubit “scaling problem,” Wang says. Currently, only about 1,000 qubits can fit on a single chip. Having qubits controlled by voltage will be especially important as millions of qubits start being crammed on a single chip.

“Without voltage control, you’ll also need thousands or millions of current loops too, and that takes up a lot of space and leads to energy dissipation,” he says.

Additionally, voltage control means greater efficiency and a more localized, precise targeting of individual qubits on a chip, without “cross talk.” That happens when a little bit of the magnetic field created by the current interferes with a qubit it’s not targeting, causing computation problems.

For now, the researchers’ qubit has a brief lifetime. For reference, conventional superconducting qubits that hold promise for practical application have documented coherence times of a few tens of microseconds, a few hundred times greater than the researchers’ qubit.
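As a quick back-of-the-envelope check on that comparison (the 30-microsecond figure is a representative assumption, not a number from the paper):

conventional_coherence_ns = 30_000   # a few tens of microseconds, in nanoseconds
graphene_coherence_ns = 55           # coherence time reported for the graphene qubit
print(conventional_coherence_ns / graphene_coherence_ns)   # ~545, i.e. a few hundred times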

But the researchers are already addressing several issues that cause this short lifetime, most of which require structural modifications. They’re also using their new coherence-probing method to further investigate how electrons move ballistically around the qubits, with aims of extending the coherence of qubits in general.

Coherent control of a hybrid superconducting circuit made with graphene-based van der Waals heterostructures
Joel I-Jan Wang, Daniel Rodan-Legrain, Landry Bretheau, Daniel L. Campbell, Bharath Kannan, David Kim, Morten Kjaergaard, Philip Krantz, Gabriel O. Samach, Fei Yan, Jonilyn L. Yoder, Kenji Watanabe, Takashi Taniguchi, Terry P. Orlando, Simon Gustavsson, Pablo Jarillo-Herrero & William D. Oliver
Nature Nanotechnology (2018)
DOI: 10.1038/s41565-018-0329-2

Contact information:

William D. Oliver
MIT Physics Professor of the Practice
oliver@ll.mit.edu URL: http://www.rle.mit.edu/

Pablo Jarillo-Herrero
MIT Physics Professor
pjarillo@mit.edu URL: http://jarilloherrero.mit.edu/

Massachusetts Institute of Technology (MIT)

 

Four Emerging Technology Areas That Will Help Define Our World In 2019


Welcome to 2019....

2018 was surely a transformative year for technological innovation. We saw early development of ambient computing, quantum teleportation, cloaks of invisibility, genomics advancements and even robocops.

Granted, we’re not yet flying around in our own cars like the Jetsons, but we’re getting closer. In 2019 we will continue on the transformation path and expand even further into adopting cutting-edge immersive technologies.

What’s ahead for the coming year? I envision four emerging technology areas that will significantly impact our lives in 2019.

1.  The Internet of Things and Smart Cities

The Internet of Things (IoT) refers to the general idea of devices and equipment that are readable, recognizable, locatable, addressable, and/or controllable via the internet. 

This includes everything from home appliances to wearable technology and cars. These days, if a device can be turned on, it most likely can be connected to the internet. Because of this, data can be shared quickly across a multitude of objects and devices, increasing the rate of communication.

Cisco, which calls the “Internet of Things” the “Internet of Everything,” predicts that 50 billion devices (including our smartphones, appliances and office equipment) will be wirelessly connected to the internet via a network of sensors by 2020.

The term “Smart City” refers to a public/private infrastructure built to conduct activities that protect and secure citizens. The concept of Smart Cities integrates communications (5G), transportation, energy, water resources, waste collection, smart-building technologies, and security technologies and services. They are the cities of the future.

IoT is the cog of Smart Cities that integrates these resources, technologies, services and infrastructure.

The research firm Frost & Sullivan estimates the combined global market potential of Smart City segments (transportation, healthcare, building, infrastructure, energy and governance) at $1.5 trillion, with $20 billion spent on sensors alone by 2050, according to Navigant Technology.

The combined growth of IoT and Smart Cities will be a force to reckon with in 2019!

2.  Artificial Intelligence (AI)

Emergent artificial intelligence (AI), machine learning, human-computer interface, and augmented reality technologies are no longer science fiction. Head-spinning technological advances allow us to gain greater data-driven insights than ever before.

The ethical debate about AI is fervent, centering on the threatening implications of future technologies that can think like a human (or better) and make their own decisions. The creation of a “HAL”-type entity, as depicted in Stanley Kubrick’s film 2001: A Space Odyssey, is not far-fetched.

To truly leverage data-driven insights, we need to make sure our thinking about how best to use this data keeps pace with its availability.

The vast majority of digital data is unstructured: a complex mesh of images, texts, videos and other data formats. Estimates suggest 80-90 percent of the world’s data is unstructured and growing at an increasingly rapid rate each day.

To even begin to make sense of this much data, advanced technologies are required. Artificial intelligence is the means by which this data is processed today, and it’s already a part of your everyday life.

In 2019, companies and governments will continue to develop technology that distributes artificial intelligence and machine learning software to millions of graphics and computer processors around the world. The question is how far away we are from a “HAL” with the ability for human-level analysis and techno-emotions.

3.  Quantum Computing

The world of computing has witnessed seismic advancements since the invention of the electronic calculator in the 1960s. The past few years in information processing have been especially transformational.

What were once thought of as science fiction fantasies are now technological realities. Classical computing has become exponentially faster and more capable, and our enabling devices smaller and more adaptable.

We are starting to evolve beyond classical computing into a new data era called quantum computing. It is envisioned that quantum computing will accelerate us into the future by impacting the landscape of artificial intelligence and data analytics.

Quantum computing’s power and speed will help us solve some of the biggest and most complex challenges we face as humans.

Gartner describes quantum computing as “[t]he use of atomic quantum states to effect computation. Data is held in qubits (quantum bits), which have the ability to hold all possible states simultaneously. Data held in qubits is affected by data held in other qubits, even when physically separated. This effect is known as entanglement.”

In simplified terms, quantum computers use quantum bits, or qubits, instead of the traditional binary bits of ones and zeros used for digital communications.
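The entanglement Gartner describes can be made concrete with the simplest two-qubit example, a Bell state; this is a generic textbook sketch, not tied to any particular machine:

import numpy as np

# Bell state (|00> + |11>) / sqrt(2): once either qubit is measured as 0 or 1,
# the other is guaranteed to give the same value, even when physically separated.
bell_state = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for |00>, |01>, |10>, |11>

probabilities = np.abs(bell_state) ** 2
for outcome, p in zip(["00", "01", "10", "11"], probabilities):
    print(outcome, round(p, 2))   # only "00" and "11" occur, each with probability 0.5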

Futurist Ray Kurzweil said that mankind will be able to “expand the scope of our intelligence a billion-fold” and that “the power of computing doubles, on average, every two years.” Recent breakthroughs in physics, nanotechnology and materials science have brought us into a computing reality that we could not have imagined a decade ago.

As we get closer to a fully operational quantum computer, a new world of supercomputing beckons, one that will impact almost every aspect of our lives. In 2019 we are inching closer.

4.  Cybersecurity (and Risk Management)

Many corporations, organizations and agencies continued to be breached throughout 2018 despite their investments in cybersecurity and information assurance. The cyber threats grow more sophisticated and deadly with each passing year. The firm Gemalto estimated that data breaches compromised 4.5 billion records in the first half of 2018. And a University of Maryland study found that hackers now attack computers every 39 seconds.

In 2019 we will be facing a new and more sophisticated array of physical security and cybersecurity challenges (including automated hacker tools) that pose significant risk to people, places and commercial networks.

The nefarious global threat actors are terrorists, criminals, hackers, organized crime, malicious individuals, and in some cases, adversarial nation states.

The physical has merged with the digital in the cybersecurity ecosystem. The more digitally interconnected we become in our work and personal lives, the more vulnerable we will become. Now everyone and anything connected is a target.

Cybersecurity is the digital glue that keeps IoT, Smart Cities, and our world of converged machines, sensors, applications and algorithms operational.

Addressing the 2019 cyber threat also requires a better and more calculated risk awareness and management strategy from both the public and private sectors. A 2019 cybersecurity risk management strategy will need to be comprehensive, adaptive and elevated to the C-suite.

I have just touched on a few of the implications of four emerging technology areas that will have a significant impact on our lives in 2019.

These areas are just the tip of the iceberg as we really are in the midst of a paradigm shift in applied scientific knowledge.  We have entered a new renaissance of accelerated technological development that is exponentially transforming our civilization.

Yet with these benefits come risks. With such catalyzing innovation, we cannot afford to lose control. The real imperative for this new year is for planning and systematic integration.  

Hopefully that will provide us with a guiding technological framework that will keep us prosperous and safe.

Article by Chuck Brooks Special to Forbes Magazine
Chuck Brooks is an Advisor and Contributor to Cognitive World. In his full-time role he is the Principal Market Growth Strategist for General Dynamics Mission Systems.

‘Quantum Internet’ – Moving toward ‘Unhackable’ Communications and how Single Particles of Light could make it Possible: Purdue University – Next Step ‘On-Chip Circuitry’


Purdue researchers have created a new light source that generates at least 35 million photons per second, increasing the speed of quantum communication. Credit: Massachusetts Institute of Technology image/Mikhail Shalaginov

Hacker attacks on everything from social media accounts to government files could be largely prevented by the advent of quantum communication, which would use particles of light called “photons” to secure information rather than a crackable code.

The problem is that quantum communication is currently limited by how much information single photons can help send securely, called a “secret bit rate.” Purdue University researchers have created a new technique that would increase the secret bit rate 100-fold, to over 35 million photons per second.

“Increasing the bit rate allows us to use single photons for sending not just a sentence a second, but rather a relatively large piece of information with extreme security, like a megabyte-sized file,” said Simeon Bogdanov, a Purdue postdoctoral researcher in electrical and computer engineering.
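A rough back-of-the-envelope reading of that claim, assuming (purely for illustration) that every emitted photon delivers one secure bit:

photons_per_second = 35_000_000   # reported single-photon emission rate
bits_per_photon = 1               # idealized assumption for this sketch
megabytes_per_second = photons_per_second * bits_per_photon / 8 / 1e6
print(megabytes_per_second)       # ~4.4 MB/s, i.e. megabyte-sized files every second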

Eventually, a high secret bit rate will enable an ultra-secure “quantum internet,” a network of channels called “waveguides” that will transmit single photons between devices, chips, places or parties capable of processing quantum information.

“No matter how computationally advanced a hacker is, it would be basically impossible by the laws of physics to interfere with these quantum communication channels without being detected, since at the quantum level, light and matter are so sensitive to disturbances,” Bogdanov said.

The work was first published online in July for inclusion in a print Nano Letters issue on August 8, 2018.

Using light to send information is a game of probability: Transmitting one bit of information can take multiple attempts. The more photons a light source can generate per second, the faster the rate of successful information transmission.

The Purdue University Quantum Center, including Simeon Bogdanov (left) and Sajid Choudhury (right), is investigating how to advance quantum communication for practical uses. Credit: Purdue University image/Susan Fleck

“A source might generate a lot of photons per second, but only a few of them may actually be used to transmit information, which strongly limits the speed of quantum communication,” Bogdanov said.
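That limitation can be expressed as a simple scaling: if each photon delivers a usable bit with some probability, the effective secure bit rate is roughly the raw photon rate multiplied by that probability. The probability below is a placeholder, not a measured figure:

def effective_bit_rate(photons_per_second, success_probability):
    # Each transmitted bit takes, on average, 1 / success_probability attempts,
    # so the usable rate is the raw photon rate scaled by that probability.
    return photons_per_second * success_probability

print(effective_bit_rate(35_000_000, 0.01))   # hypothetical 1% success rate -> 350,000 bits/s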

For faster quantum communication, Purdue researchers modified the way in which a light pulse from a laser beam excites electrons in a man-made “defect,” or local disturbance in a crystal lattice, and then how this defect emits one photon at a time.

The researchers sped up these processes by creating a new light source that includes a tiny diamond only 10 nanometers big, sandwiched between a silver cube and silver film. Within the nanodiamond, they identified a single defect, resulting from one atom of carbon being replaced by nitrogen and a vacancy left by a missing adjacent carbon atom.

The nitrogen and the missing atom together formed a so-called “nitrogen-vacancy center” in a diamond with electrons orbiting around it.

A metallic antenna coupled to this defect facilitated the interaction of photons with the orbiting electrons of the nitrogen-vacancy center, through hybrid light-matter particles called “plasmons.” With the center absorbing and emitting one plasmon at a time, and the nanoantenna converting the plasmons into photons, the rate of generating photons for quantum communication became dramatically faster.

“We have demonstrated the brightest single-photon source at room temperature. Usually sources with comparable brightness only operate at very low temperatures, which is impractical for implementing on computer chips that we would use at room temperature,” said Vlad Shalaev, the Bob and Anne Burnett Distinguished Professor of Electrical and Computer Engineering.

Next, the researchers will be adapting this system for on-chip circuitry. This would mean connecting the plasmonic antenna with waveguides so that photons could be routed to different parts of the chip rather than radiating in all directions.


More information: Simeon I. Bogdanov et al. Ultrabright Room-Temperature Sub-Nanosecond Emission from Single Nitrogen-Vacancy Centers Coupled to Nanopatch Antennas, Nano Letters (2018). DOI: 10.1021/acs.nanolett.8b01415
