‘Quantum Internet’ – Moving toward ‘Unhackable’ Communications and how Single Particles of Light could make it Possible: Purdue University – Next Step ‘On-Chip Circuitry’


Purdue researchers have created a new light source that generates at least 35 million photons per second, increasing the speed of quantum communication. Credit: Massachusetts Institute of Technology image/Mikhail Shalaginov

Hacker attacks on everything from social media accounts to government files could be largely prevented by the advent of quantum communication, which would use particles of light called “photons” to secure information rather than a crackable code.

The problem is that quantum communication is currently limited by how quickly photons can deliver information securely, a figure called the “secret bit rate.” Purdue University researchers created a new technique that would increase the secret bit rate 100-fold, to over 35 million photons per second.

“Increasing the bit rate allows us to use single photons for sending not just a sentence a second, but rather a relatively large piece of information with extreme security, like a megabyte-sized file,” said Simeon Bogdanov, a Purdue postdoctoral researcher in electrical and computer engineering.

Eventually, a high secret bit rate will enable an ultra-secure “quantum internet,” a network of channels called “waveguides” that will transmit single photons between devices, chips, places or parties capable of processing quantum information.

“No matter how computationally advanced a hacker is, it would be basically impossible by the laws of physics to interfere with these quantum communication channels without being detected, since at the quantum level, light and matter are so sensitive to disturbances,” Bogdanov said.
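The article does not name a specific protocol, but the canonical illustration of this tamper-evidence is quantum key distribution in the style of BB84, where an eavesdropper who measures photons in randomly chosen bases unavoidably introduces errors that the communicating parties can spot. A minimal sketch (plain Python with simulated randomness rather than real photons; the function and numbers are illustrative, not from the Purdue work):

```python
import random

def bb84_error_rate(n_photons=100_000, eavesdrop=False):
    """Return the error rate Bob sees in his sifted key.

    Without an eavesdropper the sifted key is essentially error-free;
    an intercept-resend attacker measuring in random bases corrupts
    roughly 25% of the sifted bits, which Alice and Bob can detect by
    publicly comparing a sample of their key.
    """
    errors, sifted = 0, 0
    for _ in range(n_photons):
        alice_bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)          # 0 = rectilinear, 1 = diagonal

        bit_in_channel, basis_in_channel = alice_bit, alice_basis
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            # A wrong-basis measurement gives Eve a random result, and she
            # resends a fresh photon prepared in her own basis.
            eve_bit = alice_bit if eve_basis == alice_basis else random.randint(0, 1)
            bit_in_channel, basis_in_channel = eve_bit, eve_basis

        bob_basis = random.randint(0, 1)
        bob_bit = bit_in_channel if bob_basis == basis_in_channel else random.randint(0, 1)

        if bob_basis == alice_basis:                # sifting: keep matching-basis rounds
            sifted += 1
            errors += bob_bit != alice_bit
    return errors / sifted

print(f"no eavesdropper : error rate ~ {bb84_error_rate():.3f}")               # ~0.000
print(f"intercept-resend: error rate ~ {bb84_error_rate(eavesdrop=True):.3f}")  # ~0.250
```

The jump from an expected error rate near zero to roughly 25% is what makes interference physically detectable rather than merely computationally hard to hide.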

The work was first published online in July for inclusion in a print Nano Letters issue on August 8, 2018.

Using light to send information is a game of probability: Transmitting one bit of information can take multiple attempts. The more photons a light source can generate per second, the faster the rate of successful information transmission.

The Purdue University Quantum Center, including Simeon Bogdanov (left) and Sajid Choudhury (right), is investigating how to advance quantum communication for practical uses. Credit: Purdue University image/Susan Fleck

“A source might generate a lot of photons per second, but only a few of them may actually be used to transmit information, which strongly limits the speed of quantum communication,” Bogdanov said.
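A rough back-of-the-envelope reading of the numbers quoted above (assuming, purely for illustration, one usable bit per detected photon and no protocol overhead; the article itself only gives the photon rates):

```python
# Why the photon rate limits the speed of quantum communication.
# Assumptions (not from the article): 1 usable bit per detected photon,
# no sifting or error-correction overhead.

FILE_BITS = 8 * 1_000_000           # a 1-megabyte file, as in Bogdanov's example

new_rate = 35e6                      # photons per second (new source, per the article)
old_rate = new_rate / 100            # the article describes a 100-fold improvement

for label, rate in [("previous source", old_rate), ("new source", new_rate)]:
    seconds = FILE_BITS / rate
    print(f"{label:>15}: {rate:,.0f} photons/s -> ~{seconds:,.1f} s per 1 MB file")

# previous source: 350,000 photons/s -> ~22.9 s per 1 MB file
#      new source: 35,000,000 photons/s -> ~0.2 s per 1 MB file
```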

For faster photon generation, Purdue researchers modified the way in which a light pulse from a laser beam excites electrons in a man-made “defect,” or local disturbance in a crystal lattice, and then how this defect emits one photon at a time.

The researchers sped up these processes by creating a new light source that includes a tiny diamond only 10 nanometers big, sandwiched between a silver cube and silver film. Within the nanodiamond, they identified a single defect, resulting from one atom of carbon being replaced by nitrogen and a vacancy left by a missing adjacent carbon atom.

The nitrogen and the missing atom together formed a so-called “nitrogen-vacancy center” in a diamond with electrons orbiting around it.

A metallic antenna coupled to this defect facilitated the interaction of photons with the orbiting electrons of the nitrogen-vacancy center, through hybrid light-matter particles called “plasmons.” With the center absorbing and emitting one plasmon at a time, and the nanoantenna converting the plasmons into photons, the rate of generating photons for quantum communication became dramatically faster.

“We have demonstrated the brightest single-photon source at room temperature. Usually sources with comparable brightness only operate at very low temperatures, which is impractical for implementing on computer chips that we would use at room temperature,” said Vlad Shalaev, the Bob and Anne Burnett Distinguished Professor of Electrical and Computer Engineering.

Next, the researchers will be adapting this system for on-chip circuitry. This would mean connecting the plasmonic antenna with waveguides so that photons could be routed to different parts of the chip rather than radiating in all directions.


More information: Simeon I. Bogdanov et al. Ultrabright Room-Temperature Sub-Nanosecond Emission from Single Nitrogen-Vacancy Centers Coupled to Nanopatch Antennas, Nano Letters (2018). DOI: 10.1021/acs.nanolett.8b01415


Northwestern University: Study Provides Insight into How Nanoparticles Interact with Biological Systems


Computer simulation of a lipid corona around a 5-nanometer nanoparticle showing ammonium-phosphate ion pairing. Credit: Northwestern University

Personal electronic devices—smartphones, computers, TVs, tablets, screens of all kinds—are a significant and growing source of the world’s electronic waste. Many of these products use nanomaterials, but little is known about how these modern materials and their tiny particles interact with the environment and living things.

Now a research team of Northwestern University chemists and colleagues from the national Center for Sustainable Nanotechnology has discovered that when certain coated nanoparticles interact with living organisms, they acquire new properties that cause the nanoparticles to become sticky. Fragmented lipid coronas form on the particles, causing them to stick together and grow into long kelp-like strands. Nanoparticles with 5-nanometer diameters form long structures that are microns in size in solution. The impact on cells is not known.

“Why not make a particle that is benign from the beginning?” said Franz M. Geiger, professor of chemistry in Northwestern’s Weinberg College of Arts and Sciences. He led the Northwestern portion of the research.

“This study provides insight into the molecular mechanisms by which nanoparticles interact with biological systems,” Geiger said. “This may help us understand and predict why some nanoparticle/ligand coating combinations are detrimental to cellular organisms while others are not. We can use this to engineer nanoparticles that are benign by design.”

Using experiments and computer simulations, the research team studied polycation-wrapped gold nanoparticles and their interactions with a variety of bilayer membrane models, including bacteria. The researchers found that a nearly circular layer of lipids forms spontaneously around the particles. These “fragmented lipid coronas” have never been seen before.

The study points to solving problems with chemistry. Scientists can use the findings to design a better ligand coating for nanoparticles that avoids the ammonium-phosphate interaction, which causes the aggregation. (Ligands are used in nanomaterials for layering.)

The results will be published Oct. 18 in the journal Chem.

Geiger is the study’s corresponding author. Other authors include scientists from the Center for Sustainable Nanotechnology’s other institutional partners. Based at the University of Wisconsin-Madison, the center studies engineered nanomaterials and their interaction with the environment, including biological systems—both the negative and positive aspects.

“The nanoparticles pick up parts of the lipid cellular membrane like a snowball rolling in a snowfield, and they become sticky,” Geiger said. “This unintended effect happens because of the presence of the nanoparticle. It can bring lipids to places in cells where lipids are not meant to be.”

The experiments were conducted in idealized laboratory settings that nevertheless are relevant to environments found during the late summer in a landfill—at 21-22 degrees Celsius and a couple feet below ground, where soil and groundwater mix and the food chain begins.

By pairing spectroscopic and imaging experiments with atomistic and coarse-grained simulations, the researchers identified that ion pairing between the lipid head groups of biological membranes and the polycations’ ammonium groups in the nanoparticle wrapping leads to the formation of fragmented lipid coronas. These coronas give particles with diameters below 10 nanometers new properties, including altered composition and stickiness.

The study’s insights help predict the impact that the increasingly widespread use of engineered nanomaterials has on the nanoparticles’ fate once they enter the food chain, which many of them may eventually do.

“New technologies and mass consumer products are emerging that feature nanomaterials as critical operational components,” Geiger said. “We can upend the existing paradigm in nanomaterial production towards one in which companies design nanomaterials to be sustainable from the beginning, as opposed to risking expensive product recalls—or worse—down the road.”


More information: “Lipid Corona Formation from Nanoparticle Interactions with Bilayers,” Chem (2018). DOI: 10.1016/j.chempr.2018.09.018

 

The Future Of Energy Isn’t Fossil Fuels Or Renewables, It’s Nuclear Fusion (Really?)


 

Colorado State University scientists, using a compact but powerful laser to heat arrays of ordered nanowires, have demonstrated micro-scale nuclear fusion in the lab.

Let’s pretend, for a moment, that the climate doesn’t matter. That we’re completely ignoring the connection between carbon dioxide, the Earth’s atmosphere, the greenhouse effect, global temperatures, ocean acidification, and sea-level rise. From a long-term point of view, we’d still need to plan for our energy future. Fossil fuels, which make up by far the majority of world-wide power today, are an abundant but fundamentally limited resource. Renewable sources like wind, solar, and hydroelectric power have different limitations: they’re inconsistent. There is a long-term solution, though, that overcomes all of these problems: nuclear fusion.

Even the most advanced chemical reactions, like combusting thermite, shown here, generate about a million times less energy per unit mass compared to a nuclear reaction. Credit: Nikthestunned / Wikipedia

It might seem that the fossil fuel problem is obvious: we cannot simply generate more coal, oil, or natural gas when our present supplies run out. We’ve been burning pretty much every drop we can get our hands on for going on three centuries now, and this problem is going to get worse. Even though we have hundreds of years more before we’re all out, the amount isn’t limitless. There are legitimate, non-warming-related environmental concerns, too.

Even if we ignored the CO2-global climate change problem, fossil fuels are limited in the amount Earth contains, and extracting, transporting, refining and burning them causes large amounts of pollution. Credit: Greg Goebel

The burning of fossil fuels generates pollution, since these carbon-based fuel sources contain a lot more than just carbon and hydrogen in their chemical makeup, and burning them (to generate energy) also burns all the impurities, releasing them into the air. In addition, the refining and/or extraction process is dirty, dangerous and can pollute the water table and entire bodies of water, like rivers and lakes.

Wind farms, like many other sources of renewable energy, are dependent on the environment in an inconsistent, uncontrollable way. Credit: Winchell Joshua, U.S. Fish and Wildlife Service

On the other hand, renewable energy sources are inconsistent, even at their best. Try powering your grid during dry, overcast (or overnight), and drought-riddled times, and you’re doomed to failure. The sheer magnitude of the battery storage capabilities required to power even a single city during insufficient energy-generation conditions is daunting. Simultaneously, the pollution effects associated with creating solar panels, manufacturing wind or hydroelectric turbines, and (especially) with creating the materials needed to store large amounts of energy are tremendous as well. Even what’s touted as “green energy” isn’t devoid of drawbacks.

The RA-6 (República Argentina 6) experimental nuclear reactor in operation. The blue glow is known as Cherenkov radiation, emitted by particles travelling faster than light does in water. Credit: Centro Atómico Bariloche, via Pieck Darío

But there is always the nuclear option. That word itself is enough to elicit strong reactions from many people: nuclear. The idea of nuclear bombs, of radioactive fallout, of meltdowns, and of disasters like Chernobyl, Three Mile Island, and Fukushima — not to mention residual fear from the Cold War — make “NIMBY” the default position for a large number of people. And that’s a fear that’s not wholly without foundation, when it comes to nuclear fission. But fission isn’t the only game in town.

Watch the Video: Nuclear Bomb – The First H Bomb Test

 

In 1952, the United States detonated Ivy Mike, the first demonstrated nuclear fusion reaction to occur on Earth. Whereas nuclear fission involves taking heavy, unstable (and already radioactive) elements like Thorium, Uranium or Plutonium, initiating a reaction that causes them to split apart into smaller, also radioactive components that release energy, nothing involved in fusion is radioactive at all. The reactants are light, stable elements like isotopes of hydrogen, helium or lithium; the products are also light and stable, like helium, lithium, beryllium or boron.

 

The proton-proton chain responsible for producing the vast majority of the Sun’s power is an example of nuclear fusion. Credit: Borb / Wikimedia Commons

So far, fission has taken place in either a runaway or controlled environment, rushing past the breakeven point (where the energy output is greater than the input) with ease, while fusion has never reached the breakeven point in a controlled setting. But four main possibilities have emerged:

  1. Inertial Confinement Fusion. We take a pellet of hydrogen — the fuel for this fusion reaction — and compress it using many lasers that surround the pellet. The compression causes the hydrogen nuclei to fuse into heavier elements like helium, and releases a burst of energy.
  2. Magnetic Confinement Fusion. Instead of using mechanical compression, why not let the electromagnetic force do the confining work? Magnetic fields confine a superheated plasma of fusible material, and nuclear fusion reactions occur inside a Tokamak-style reactor.
  3. Magnetized Target Fusion. In MTF, a superheated plasma is created and confined magnetically, but pistons surrounding it compress the fuel inside, creating a burst of nuclear fusion in the interior.
  4. Subcritical Fusion. Instead of trying to trigger fusion with heat or inertia, subcritical fusion uses a subcritical fission reaction — with zero chance of a meltdown — to power a fusion reaction.

The first two have been researched for decades now, and are the closest to the coveted breakeven point. But the latter two are new, with the last one gaining many new investors and start-ups this decade.
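The “breakeven point” mentioned above is usually expressed as the fusion gain factor Q, the ratio of fusion power released to the external heating power put in: Q = 1 is scientific breakeven, and a practical power plant needs Q well above 1. A quick sketch of the arithmetic (the shot numbers are hypothetical, chosen only to illustrate the three regimes):

```python
def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Fusion gain factor Q = fusion power released / external heating power.

    Q < 1  : below breakeven (all controlled fusion experiments so far, per the article)
    Q = 1  : scientific breakeven
    Q >> 1 : the territory a practical power plant would need
    """
    return p_fusion_mw / p_heating_mw

# Hypothetical shots, for illustration only.
for p_fus, p_heat in [(16, 24), (50, 50), (500, 50)]:
    q = fusion_gain(p_fus, p_heat)
    status = "below breakeven" if q < 1 else ("breakeven" if q == 1 else "net gain")
    print(f"fusion {p_fus:>4} MW / heating {p_heat} MW -> Q = {q:.2f} ({status})")
```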

The preamplifiers of the National Ignition Facility are the first step in increasing the energy of laser beams as they make their way toward the target chamber. NIF recently achieved a 500 terawatt shot – 1,000 times more power than the United States uses at any instant in time. Credit: Damien Jemison/LLNL

Even if you reject climate science, the problem of powering the world, and doing so in a sustainable, pollution-free way, is one of the most daunting long-term ones facing humanity. Nuclear fusion as a power source has never been given the necessary funding to develop it to fruition, but it’s the one physically possible solution to our energy needs with no obvious downsides. If we can get the idea that “nuclear” means “potential for disaster” out of our heads, people from all across the political spectrum just might be able to come together and solve our energy and environmental needs in one single blow. If you think the government should be investing in science with national and global payoffs, you can’t do better than the ROI that would come from successful fusion research. The physics works out beautifully; we now just need the investment and the engineering breakthroughs.

Special Contribution to Forbes by: Ethan Siegel 

University of Cambridge: Researchers to target hard-to-treat cancers



A £10 million interdisciplinary collaboration is to target the most challenging of cancers using nanomedicine.

“We are going to pierce through the body’s natural barriers and deliver anti-cancer drugs to the heart of the tumour.” – George Malliaras

While the survival rate for most cancers has doubled over the past 40 years, some cancers such as those of the pancreas, brain, lung and oesophagus still have low survival rates.

Such cancers are now the target of an Interdisciplinary Research Collaboration (IRC) led by the University of Cambridge and involving researchers from Imperial College London, University College London and the Universities of Glasgow and Birmingham.

“Some cancers are difficult to remove by surgery and highly invasive, and they are also hard to treat because drugs often cannot reach them at high enough concentration,” explains George Malliaras, Prince Philip Professor of Technology in Cambridge’s Department of Engineering, who leads the IRC. “Pancreatic tumour cells, for instance, are protected by dense stromal tissue, and tumours of the central nervous system by the blood-brain barrier.”

The aim of the project, which is funded for six years by the Engineering and Physical Sciences Research Council, is to develop an array of new delivery technologies that can deliver almost any drug to any tumour in a large enough concentration to kill the cancerous cells.


Chemists, engineers, material scientists and pharmacologists will focus on developing particles, injectable gels and implantable devices to deliver the drugs. Cancer scientists and clinicians from the Cancer Research UK Cambridge Centre and partner sites will devise and carry out clinical trials. Experts in innovative manufacturing technologies will ensure the devices are able to be manufactured and robust enough to withstand surgical manipulation.

One technology the team will examine is the ability of advanced materials to self-assemble and entrap drugs inside metal-organic frameworks. These structures can carry enormous amounts of drugs, and be tuned both to target the tumour and to release the drug at an optimal rate.

“We are going to pierce through the body’s natural barriers,” says Malliaras, “and deliver anti-cancer drugs to the heart of the tumour.”

Dr Su Metcalfe, a member of George Malliaras’s team who is already using NanoBioMed to treat multiple sclerosis, added: “The power of nanotechnology to synergise with potent anti-cancer drugs will be profound, and the award will speed delivery to patients.”

How nanotechnology is advancing drug delivery


Nanotechnology brings a lot to the medical field, and a specific branch known as nanomedicine has evolved because of the growing interest in this area.

Drug delivery systems derived from materials (or particles) at the nanoscale provide a way for drugs that might otherwise be toxic to the body to reach their intended target through encapsulation or conjugation approaches.

Some issues still need to be ironed out, particularly how the size of some of these carriers squares with regulatory definitions, but the field is expanding drug delivery beyond what was previously possible with conventional approaches.

Inorganic Nanocarriers

Inorganic nanocarriers were the first type of nanotechnology-based drug delivery system to be trialled, yet their use, and research into them, is becoming less frequent. Many types of inorganic nanoparticle have been tried and tested, from gold, to iron oxide, to calcium phosphate, and beyond. Many inorganic nanoparticles are not biocompatible; however, this can be overcome by functionalising their surfaces with organic molecules, such as PEG, to increase their compatibility within the body.

However, this area has been let down by the inability of such carriers to be easily broken down after use, and by the resulting difficulty of excreting them.

Organic Nanocarriers

Organic-based nanocarriers are the fastest-growing area of nano-inspired drug delivery systems, largely because inorganic drug carriers often cannot be broken down within the body and excreted. By comparison, organic carriers, such as those made of certain types of polymers, dendrimer architectures and lipid-based encapsulating vessels (liposomes), can be broken down and excreted, and they offer a much greater degree of biocompatibility.

Each mechanism of delivery is different for these systems. For example, dendrimer-based delivery vessels will often have the drug covalently linked (conjugated) to the dendrimer backbone itself, and when it reaches a target of interest, certain functional groups at the edges will bind to the target and release the drug through molecular cleavage.

However, the most common way of delivering drugs is through encapsulation, as the toxicity (and the possibility of the drug interacting with the body before it reaches the target) is significantly reduced.

By using this approach, the nanocarrier can take up the drug of interest into its core, where it is only released once the nanocarrier has reached the target of interest—thus lowering the risk of the drug being cleaved and released en route to the target site.

Solid Drug Nanoparticles

 Solid drug nanoparticles are another growing nanotechnology-inspired drug delivery system, but their use is not (yet) as widespread as organic delivery vessels. However, they do avoid some of the regulatory complications, as their use does not involve any extra species other than already approved drugs in an efficient nanoparticle form.

Solid drug nanoparticles are the nanoparticle form of a conventional drug, either packed into a template or prepared as a suspension—so no separate delivery system is required, and they are administered by injection. The drug nanoparticles are often created through a bottom-up controlled precipitation of the drug to be administered, or by a top-down grinding approach applied to larger pieces of the drug until they are in the nanoparticle size range.

Aside from providing a more straightforward route to the clinic from a regulatory perspective, they also offer a way to tackle drug adherence issues—i.e. where people don’t take their required medication on time, which causes the effectiveness of the drug to be reduced—by providing a long-lasting, slow release of the drug over a period of 1 to 6 months.
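To make the “long-lasting, slow release over 1 to 6 months” concrete, here is a minimal sketch of a first-order release model, the simplest textbook way to describe sustained release from a depot of drug nanoparticles. The model choice and the rate constants are illustrative assumptions, not values taken from any specific formulation:

```python
import math

def fraction_released(t_days: float, release_half_life_days: float) -> float:
    """First-order release: F(t) = 1 - exp(-k*t), with k set by a release half-life."""
    k = math.log(2) / release_half_life_days
    return 1.0 - math.exp(-k * t_days)

# Two hypothetical depot formulations: one tuned for roughly 1 month, one for roughly 6 months.
for label, half_life in [("~1-month depot", 7.0), ("~6-month depot", 40.0)]:
    checkpoints = [7, 30, 90, 180]
    released = ", ".join(f"day {d}: {fraction_released(d, half_life):5.1%}" for d in checkpoints)
    print(f"{label} (release half-life {half_life} days) -> {released}")
```

The point of the sketch is the design trade-off: a slower release constant keeps drug levels up for months without the patient having to remember a daily dose, which is exactly the adherence benefit described above.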

Contributed and Written by Liam Critchley

Nano Magazine the magazine for Small Science

The $80 Trillion World Economy in One Chart: The World Bank View


The latest estimate from the World Bank puts global GDP at roughly $80 trillion in nominal terms for 2017.

Today’s chart from HowMuch.net uses this data to show all major economies in a visualization called a Voronoi diagram – let’s dive into the stats to learn more.

THE WORLD’S TOP 10 ECONOMIES

Here are the world’s top 10 economies, which together combine for a whopping two-thirds of global GDP.

Rank Country GDP % of Global GDP
#1 United States $19.4 trillion 24.4%
#2 China $12.2 trillion 15.4%
#3 Japan $4.87 trillion 6.1%
#4 Germany $3.68 trillion 4.6%
#5 United Kingdom $2.62 trillion 3.3%
#6 India $2.60 trillion 3.3%
#7 France $2.58 trillion 3.3%
#8 Brazil $2.06 trillion 2.6%
#9 Italy $1.93 trillion 2.4%
#10 Canada $1.65 trillion 2.1%

In nominal terms, the U.S. still has the largest GDP at $19.4 trillion, making up 24.4% of the world economy.

While China’s economy is far behind in nominal terms at $12.2 trillion, you may recall that the Chinese economy has been the world’s largest when adjusted for purchasing power parity (PPP) since 2016. 

The next two largest economies are Japan ($4.9 trillion) and Germany ($4.6 trillion) – and when added to the U.S. and China, the top four economies combined account for over 50% of the world economy.
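Both of those claims can be checked directly against the table above (a quick sketch using the percentages as printed; small rounding differences are expected):

```python
# Shares of global GDP, as listed in the table above (nominal, 2017, World Bank).
shares = {
    "United States": 24.4, "China": 15.4, "Japan": 6.1, "Germany": 4.6,
    "United Kingdom": 3.3, "India": 3.3, "France": 3.3, "Brazil": 2.6,
    "Italy": 2.4, "Canada": 2.1,
}

top4 = sum(list(shares.values())[:4])
top10 = sum(shares.values())
print(f"Top 4 economies:  {top4:.1f}% of global GDP")   # ~50.5% -> "over 50%"
print(f"Top 10 economies: {top10:.1f}% of global GDP")  # ~67.5% -> roughly two-thirds
```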

MOVERS AND SHAKERS

Over recent years, the list of top economies hasn’t changed much – and in a similar visualization we posted 18 months ago, the four aforementioned top economies all fell in the exact same order.

However, look outside of these incumbents, and you’ll see that the major forces shaping the future of the global economy are in full swing, especially when it comes to emerging markets.

Here are some of the most important movements:

India has now passed France in nominal terms with a $2.6 trillion economy, which is about 3.3% of the global total. In the most recent quarter, Indian GDP grew about 8.2%, its highest growth rate in two years.

Brazil, despite its very recent economic woes, surpassed Italy in GDP rankings to take the #8 spot overall. 

Turkey has surpassed The Netherlands to become the world’s 17th largest economy, and Saudi Arabia has jumped past Switzerland to claim the 19th spot.

And what about the Future?

Read about how China will lead the world by 2050. Photo: REUTERS/Stringer

Nanoplatform developed with three (3) molecular imaging modalities for tumor diagnosis – Making it possible to expand detection to more types of cancer


The composition and application of the Janus nanoplatform for multimodal medical imaging. Credit: Marco Filice

Researchers at the Complutense University of Madrid (UCM) have developed a hybrid nanoplatform that locates tumours using three different types of contrast simultaneously to facilitate multimodal molecular medical imaging: magnetic resonance imaging (MRI), computed tomography (CT) and fluorescence optical imaging (OI).

The results of this study, led by the UCM Life Sciences Nanobiotechnology research team directed by Marco Filice and published in ACS Applied Materials & Interfaces, represent a major advance in medical diagnosis since just one session using a single contrast medium yields more precise, specific results with higher resolution, sensitivity and capacity to penetrate tissues.

“No single molecular imaging modality provides a perfect diagnosis. Our nanoplatform is designed to enable multimodal molecular imaging, thus overcoming the intrinsic limitations of each single image modality while maximising their advantages,” noted Marco Filice, a researcher in the Department of Chemistry and Pharmaceutical Sciences at the Complutense University of Madrid and the director of the study.

The platform, which has been tested on mice, targets solid cancers such as sarcomas. “However, due to its flexibility, the proposed nanoplatform can be modified, and with a suitable design of recognition element siting, it will be possible to expand detection to more types of cancer,” Filice said.

Named after the Roman god Janus, usually depicted as having two faces, these nanoparticles also “have two opposing faces, one of iron oxide embedded in a silica matrix that serves as a contrast medium for MRI and another of gold for CT,” explained Alfredo Sánchez, a researcher in the UCM Department of Analytical Chemistry and the first author of the study.

In addition, a molecular probe sited in a specific manner in the gold area permits fluorescence optical imaging, while a peptide selective for receptors hyperexpressed in tumours (the RGD sequence), sited on the silica surface enveloping the iron oxide, identifies the tumour and makes it possible to direct and transport the nanoplatform to its target.

Once the research team had synthesised the nanoparticles and determined their characteristics and toxicity, they then tested them in mouse models reared to present a fibrosarcoma in the right leg. The nanoparticle was injected in the tail. “Excellent imaging results were obtained for each modality tested,” reported Filice.

Although there is still much to do before these experiments can be applied to humans, this research shows that personalised treatment is closer than ever to becoming a reality, thanks to nanotechnology and biotechnology.


More information: Alfredo Sánchez et al, Hybrid Decorated Core@Shell Janus Nanoparticles as a Flexible Platform for Targeted Multimodal Molecular Bioimaging of Cancer, ACS Applied Materials & Interfaces (2018). DOI: 10.1021/acsami.8b10452

 

MIT: Research opens route to flexible electronics made from exotic materials – Provides a cost-effective alternative that could perform better than current silicon-based devices



MIT researchers have devised a way to grow single crystal GaN thin film on a GaN substrate through two-dimensional materials. The GaN thin film is then exfoliated by a flexible substrate, showing the rainbow color that comes from thin film interference. This technology will pave the way to flexible electronics and the reuse of the wafers.

Photo credits: Wei Kong and Kuan Qiao

Cost-effective method produces semiconducting films from materials that outperform silicon.

“In smart cities, where we might want to put small computers everywhere, we would need low power, highly sensitive computing and sensing devices, made from better materials,” Kim says. “This [study] unlocks the pathway to those devices.”

 

The vast majority of computing devices today are made from silicon, the second most abundant element on Earth, after oxygen. Silicon can be found in various forms in rocks, clay, sand, and soil. And while it is not the best semiconducting material that exists on the planet, it is by far the most readily available. As such, silicon is the dominant material used in most electronic devices, including sensors, solar cells, and the integrated circuits within our computers and smartphones.

Now MIT engineers have developed a technique to fabricate ultrathin semiconducting films made from a host of exotic materials other than silicon. To demonstrate their technique, the researchers fabricated flexible films made from gallium arsenide, gallium nitride, and lithium fluoride — materials that exhibit better performance than silicon but until now have been prohibitively expensive to produce in functional devices.

The new technique, researchers say, provides a cost-effective method to fabricate flexible electronics made from any combination of semiconducting elements, that could perform better than current silicon-based devices.

“We’ve opened up a way to make flexible electronics with so many different material systems, other than silicon,” says Jeehwan Kim, the Class of 1947 Career Development Associate Professor in the departments of Mechanical Engineering and Materials Science and Engineering. Kim envisions the technique can be used to manufacture low-cost, high-performance devices such as flexible solar cells, and wearable computers and sensors.

Details of the new technique are reported today in Nature Materials. In addition to Kim, the paper’s MIT co-authors include Wei Kong, Huashan Li, Kuan Qiao, Yunjo Kim, Kyusang Lee, Doyoon Lee, Tom Osadchy, Richard Molnar, Yang Yu, Sang-hoon Bae, Yang Shao-Horn, and Jeffrey Grossman, along with researchers from Sun Yat-Sen University, the University of Virginia, the University of Texas at Dallas, the U.S. Naval Research Laboratory, Ohio State University, and Georgia Tech.

Now you see it, now you don’t

In 2017, Kim and his colleagues devised a method to produce “copies” of expensive semiconducting materials using graphene — an atomically thin sheet of carbon atoms arranged in a hexagonal, chicken-wire pattern. They found that when they stacked graphene on top of a pure, expensive wafer of semiconducting material such as gallium arsenide, then flowed atoms of gallium and arsenide over the stack, the atoms appeared to interact in some way with the underlying atomic layer, as if the intermediate graphene were invisible or transparent. As a result, the atoms assembled into the precise, single-crystalline pattern of the underlying semiconducting wafer, forming an exact copy that could then easily be peeled away from the graphene layer.

The technique, which they call “remote epitaxy,” provided an affordable way to fabricate multiple films of gallium arsenide, using just one expensive underlying wafer.

Soon after they reported their first results, the team wondered whether their technique could be used to copy other semiconducting materials. They tried applying remote epitaxy to silicon, and also germanium — two inexpensive semiconductors — but found that when they flowed these atoms over graphene they failed to interact with their respective underlying layers. It was as if graphene, previously transparent, became suddenly opaque, preventing atoms of silicon and germanium from “seeing” the atoms on the other side.

As it happens, silicon and germanium are two elements that exist within the same group of the periodic table of elements. Specifically, the two elements belong in group four, a class of materials that are ionically neutral, meaning they have no polarity.

“This gave us a hint,” says Kim.

Perhaps, the team reasoned, atoms can only interact with each other through graphene if they have some ionic charge. For instance, in the case of gallium arsenide, gallium has a negative charge at the interface, compared with arsenic’s positive charge. This charge difference, or polarity, may have helped the atoms to interact through graphene as if it were transparent, and to copy the underlying atomic pattern.

“We found that the interaction through graphene is determined by the polarity of the atoms. For the strongest ionically bonded materials, they interact even through three layers of graphene,” Kim says. “It’s similar to the way two magnets can attract, even through a thin sheet of paper.”


Opposites attract

The researchers tested their hypothesis by using remote epitaxy to copy semiconducting materials with various degrees of polarity, from neutral silicon and germanium, to slightly polarized gallium arsenide, and finally, highly polarized lithium fluoride — a better, more expensive semiconductor than silicon.

They found that the greater the degree of polarity, the stronger the atomic interaction, even, in some cases, through multiple sheets of graphene. Each film they were able to produce was flexible and merely tens to hundreds of nanometers thick.

The material through which the atoms interact also matters, the team found. In addition to graphene, they experimented with an intermediate layer of hexagonal boron nitride (hBN), a material that resembles graphene’s atomic pattern and has a similar Teflon-like quality, enabling overlying materials to easily peel off once they are copied.

However, hBN is made of oppositely charged boron and nitrogen atoms, which generate a polarity within the material itself. In their experiments, the researchers found that any atoms flowing over hBN, even if they were highly polarized themselves, were unable to interact with their underlying wafers completely, suggesting that the polarity of both the atoms of interest and the intermediate material determines whether the atoms will interact and form a copy of the original semiconducting wafer.

“Now we really understand there are rules of atomic interaction through graphene,” Kim says.

With this new understanding, he says, researchers can now simply look at the periodic table and pick two elements of opposite charge. Once they acquire or fabricate a main wafer made from the same elements, they can then apply the team’s remote epitaxy techniques to fabricate multiple, exact copies of the original wafer.

Also Read About: Chinese Researchers Develop Non-Toxic, Flexible Material for Circuits

“People have mostly used silicon wafers because they’re cheap,” Kim says. “Now our method opens up a way to use higher-performing, nonsilicon materials. You can just purchase one expensive wafer and copy it over and over again, and keep reusing the wafer. And now the material library for this technique is totally expanded.”

Kim envisions that remote epitaxy can now be used to fabricate ultrathin, flexible films from a wide variety of previously exotic, semiconducting materials — as long as the materials are made from atoms with a degree of polarity. Such ultrathin films could potentially be stacked, one on top of the other, to produce tiny, flexible, multifunctional devices, such as wearable sensors, flexible solar cells, and even, in the distant future, “cellphones that attach to your skin.”

“In smart cities, where we might want to put small computers everywhere, we would need low power, highly sensitive computing and sensing devices, made from better materials,” Kim says. “This [study] unlocks the pathway to those devices.”

This research was supported in part by the Defense Advanced Research Projects Agency, the Department of Energy, the Air Force Research Laboratory, LG Electronics, Amore Pacific, LAM Research, and Analog Devices.

 

Jennifer Chu | MIT News Office

BIG Discoveries from Tiny Particles – from Photonics to Pharmaceuticals, materials made with Polymer Nanoparticles hold promise for products of the future – U of Delaware


In this illustration, arrows indicate the vibrational activity of particles studied by UD researchers, while the graph shows the frequencies of this vibration.
Credit: Illustration courtesy of Hojin Kim
Summary:
Understanding the mechanical properties of nanoparticles is essential to realizing their promise in exciting new products. This new research takes a significant step toward the knowledge that can lead to better performance in products that use polymer nanoparticles.

From photonics to pharmaceuticals, materials made with polymer nanoparticles hold promise for products of the future. However, there are still gaps in understanding the properties of these tiny plastic-like particles.

Now, Hojin Kim, a graduate student in chemical and biomolecular engineering at the University of Delaware, together with a team of collaborating scientists at the Max Planck Institute for Polymer Research in Germany, Princeton University and the University of Trento, has uncovered new insights about polymer nanoparticles. The team’s findings, including properties such as surface mobility, glass transition temperature and elastic modulus, were published in Nature Communications.

Under the direction of MPI Prof. George Fytas, the team used Brillouin light spectroscopy, a technique that probes the molecular properties of microscopic nanoparticles by examining how they vibrate.

“We analyzed the vibration between each nanoparticle to understand how their mechanical properties change at different temperatures,” Kim said. “We asked, ‘What does a vibration at different temperatures indicate? What does it physically mean?’ ”

The characteristics of polymer nanoparticles differ from those of larger particles of the same material. “Their nanostructure and small size provide different mechanical properties,” Kim said. “It’s really important to understand the thermal behavior of nanoparticles in order to improve the performance of a material.”

Take polystyrene, a material commonly used in nanotechnology. Larger particles of this material are used in plastic bottles, cups and packaging materials.

“Polymer nanoparticles can be more flexible or weaker at the glass transition temperature at which they soften from a stiff texture to a soft one, and it decreases as particle size decreases,” Kim said. That’s partly because polymer mobility at the surface of small particles can be activated easily. It’s important to know when and why this transition occurs, since some products, such as filter membranes, need to stay strong when exposed to a variety of conditions.

For example, a disposable plastic cup made with the polymer polystyrene might hold up in boiling water — but that cup doesn’t contain nanoparticles. The research team found that polystyrene nanoparticles start to experience this thermal transition at 343 Kelvin (158 degrees F), known as the softening temperature, below the nanoparticles’ glass transition temperature of 372 K (210 F), which is just short of the temperature of boiling water. When heated to this point, the nanoparticles don’t vibrate — they stand completely still.

This hadn’t been seen before, and the team found evidence to suggest that this temperature may activate a highly mobile surface layer in the nanoparticle, Kim said. As particles heated up between their softening temperature and glass transition temperature, they interacted with each other more and more. Other research groups have previously suspected that the glass transition temperature drops as particle size decreases because of differences in particle mobility, but they could not observe it directly.

“Using different methods and instruments, we analyzed our data at different temperatures and actually verified there is something on the polymer nanoparticle surface that is more mobile compared to its core,” he said.

By studying interactions between the nanoparticles, the team also uncovered their elastic modulus, or stiffness.
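As an order-of-magnitude illustration of how a vibrational spectrum connects to stiffness (this is generic elasticity scaling, not the specific mode analysis in the Nature Communications paper): the frequency of a small sphere’s fundamental acoustic vibration scales roughly as the speed of sound in the material divided by the particle diameter, and a longitudinal modulus then follows from M ≈ ρ·v². A hedged sketch with illustrative numbers:

```python
# Order-of-magnitude link between a measured vibration frequency and stiffness.
# Generic elasticity scaling, NOT the exact analysis used in the paper; the
# density, diameter and frequency below are illustrative values only.

density = 1050.0          # kg/m^3, roughly bulk polystyrene (illustrative)
diameter = 100e-9         # m, a ~100 nm particle (illustrative)
freq_ghz = 20.0           # GHz, a hypothetical Brillouin-band vibration frequency

# Fundamental acoustic mode of a small elastic sphere: f ~ v_sound / d
# (mode-dependent prefactors of order one are ignored here).
v_sound = freq_ghz * 1e9 * diameter        # m/s
modulus_gpa = density * v_sound**2 / 1e9   # longitudinal modulus M ~ rho * v^2, in GPa

print(f"implied sound speed ~ {v_sound:,.0f} m/s")
print(f"implied modulus     ~ {modulus_gpa:.1f} GPa")
```

The practical point is the direction of the inference: shifts of a few gigahertz in the measured vibration frequencies translate into measurable changes in sound speed and modulus, which is how temperature-dependent spectra reveal softening at the particle surface.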

Next up, Kim plans to use this information to build a nanoparticle film that can govern the propagation of sound waves.

Eric Furst, professor and chair of the Department of Chemical and Biomolecular Engineering at UD, is also a corresponding author on the paper.

“Hojin took the lead on this project and achieved results beyond what I could have predicted,” said Furst. “He exemplifies excellence in doctoral engineering research at Delaware, and I can’t wait to see what he does next.”

Story Source:

Materials provided by the University of Delaware. Note: Content may be edited for style and length.


Journal Reference:

  1. Hojin Kim, Yu Cang, Eunsoo Kang, Bartlomiej Graczykowski, Maria Secchi, Maurizio Montagna, Rodney D. Priestley, Eric M. Furst, George Fytas. Direct observation of polymer surface mobility via nanoparticle vibrations. Nature Communications, 2018; 9 (1). DOI: 10.1038/s41467-018-04854-w

A battery for the next century – Could it happen here? Massachusetts Moves Forward to Secure Clean Energy Future and … JOBS



Clean energy advocates are increasingly focusing their hopes on battery storage to supply power to the grid from the sun and the wind, particularly during times of peak demand when the weather might be, inconveniently, cloudy and still.

In fact, the clean energy bill passed this week on Beacon Hill called for increasing the energy storage target from 200 megawatts to 1,000 megawatts by the end of 2025, and ordered study of a mobile emergency relief battery system. “Batteries are key to extending the life of clean energy and we want to see that battery sector really grow,” state Senator Michael Barrett told the State House News Service on Monday night. “So this is a major job-creation piece.”

He’s got that right. Lithium-ion batteries have improved markedly in recent years and are being used in New England, California, and Europe to store power from renewable energy sources. In Casco Bay, Maine, a battery room packed with more than 1,000 lithium-ion batteries helps stabilize the grid, according to NextEra, keeping electricity flowing at 60 hertz, or cycles per second, the longtime standard for US households. And ISO New England reports that there are a dozen projects in the pipeline that involve connecting a battery to either a new or existing solar or wind facility.

Because renewable energy sources are crucial for reducing the greenhouse gases responsible for climate change, demand is only going to increase as stricter regulations kick in and as new products are developed — car companies project that 10 million to 20 million electric vehicles will be produced each year by 2025.

There’s a catch: Lithium-ion battery technology is approaching some very real limits imposed by the physical world, according to researchers. While battery performance has improved markedly and costs have fallen to around $150 per kilowatt hour, that’s still more than the $100 per kWh goal set by the US Department of Energy.
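To put that cost gap in context, here is a hypothetical sizing exercise. The 1,000-megawatt target mentioned earlier is a power figure, so a storage duration has to be assumed; four hours is a common planning assumption, not something stated in the bill:

```python
# Hypothetical cost comparison at the two per-kWh price points cited above.
# Assumption (not from the article): the 1,000 MW target is served by
# 4-hour batteries, i.e. 4,000 MWh of energy capacity.

target_power_mw = 1_000
assumed_duration_h = 4
energy_kwh = target_power_mw * 1_000 * assumed_duration_h   # 4,000,000 kWh

for price_per_kwh in (150, 100):   # roughly today's cost vs. the DOE goal
    total = energy_kwh * price_per_kwh
    print(f"${price_per_kwh}/kWh -> ${total/1e9:.1f} billion for {energy_kwh/1e6:,.0f} GWh")
```

Under those assumptions the difference between $150 and $100 per kilowatt-hour is on the order of hundreds of millions of dollars for a single state-level storage build-out, which is why the per-kWh goal matters so much.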

Costs are also soaring for rare metals used in battery electrodes. High demand has led to shocking abuses in Africa, where some cobalt mines exploit child labor, and to environmental violations in China, where mining dust has polluted villages, according to recent reporting in the science journal Nature. In any case, Mother Earth isn’t making any more cobalt or nickel: Demand will outstrip production within 20 years, researchers predict. Although crucial, current battery technology is neither clean nor renewable.

 

But soaring demand could also drive a market for new technology. As Eric Wilkinson, general counsel and director of energy policy for the Environmental League of Massachusetts, said: “It’s good for policy makers to be thinking about this, because it helps to energize the private sector.” Aging technology, dwindling natural resources, and harsh working conditions all make the lithium-ion battery industry ripe for disruption. Bill Gates’s $1 billion bet on energy, Breakthrough Energy Ventures, has invested in Form Energy, which is developing aqueous sulfur-based flow batteries that could last longer and cost less.

Battery storage may not grab as many headlines as advances in cancer research or genetics, but clean tech projects deserve a prime place on the Commonwealth’s R&D agenda. The right innovation ecosystem is already in place: science and engineering talent, academic institutions, and financial prowess that could unlock business opportunities and expand the state’s tax base. Strong public-private partnerships built MassBio. Maybe it’s time for MassBattery.