Long-lasting solar cells from an “unexpected gray area” … UW-Madison Engineers Make a Surprising Discovery


UW-Madison engineers found a way to dramatically extend the lifespan of solar energy-harvesting devices, which use energy from sunlight to generate hydrogen from water. Credit: iStock

University of Wisconsin-Madison materials engineers have made a surprising discovery that could dramatically improve the lifetime of solar energy harvesting devices.

The findings allowed them to achieve the longest-ever lifetime for a key component of some of these solar energy harvesting devices, the photoelectrochemical electrode, which uses sunlight to split water into its constituent parts, hydrogen and oxygen.

In a paper published July 24, 2018, in the research journal Nano Letters, a team led by UW-Madison materials science and engineering Ph.D. student Yanhao Yu and his advisor, Professor Xudong Wang, described a strategy that extended the lifetime of a photoelectrochemical electrode to a whopping 500 hours—more than five times the typical 80-hour lifespan.

Usually, these types of electrodes are made of silicon, which splits water well, but is highly unstable and quickly degrades when it comes into contact with corrosive conditions. To protect these electrodes, engineers often thinly coat their surfaces.

It’s a tactic that only delays their eventual breakdown—sometimes after a few days and sometimes within hours.

“Performance varies widely and nobody really knows why. It’s a big question,” says Wang, a professor of materials science and engineering at UW-Madison.

Intriguingly, the researchers didn’t make any changes to the coating material. Rather, they boosted the electrode’s lifetime by applying an even thinner coating of titanium dioxide than usual.

In other words, less really was more!

Key to this exceptional performance was the team’s discovery about the atomic structure of the titanium dioxide thin films the researchers use as protective coatings.

Previously, researchers believed that the atoms in titanium dioxide thin films adopted one of two conformations—either scrambled and disordered in a state referred to as “amorphous,” or locked into a regularly repeating and predictable arrangement called the crystalline form.

Crucially, researchers were certain that all the atoms in a given thin film behaved the same way. Crystalline or amorphous. Black or white. No in-between.

What Wang and colleagues found, however, is a gray area: They saw that small pockets of an in-between state persisted in the final coatings—the material in these areas was neither amorphous nor crystalline. Such intermediates had never been observed before.

“This is a cutting edge of materials synthesis science,” says Wang. “We’re thinking that crystallization is not as straightforward as people believe.”

Observing those intermediates was no easy feat. Enter Wang’s colleague Paul Voyles, a microscopy expert who leveraged UW-Madison’s unique facilities to perform sophisticated scanning transmission electron microscopy measurements, enabling him to detect the tiny structures.

From there, the researchers determined those intermediates lowered the lifetime of titanium dioxide thin films by leading to spikes of electronic current that ate tiny holes in the protective coatings.

Eliminating those intermediates—and thus extending the film’s lifetime—is as simple as using a thinner coating.

Thinner films make it more difficult for intermediates to form within the film, so by reducing the thickness by three quarters (from 10 nanometers to 2.5 nanometers), the researchers created coatings that lasted more than five times longer than traditional coatings.
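As a quick sanity check on those figures (an illustrative back-of-the-envelope calculation, not from the paper):

```python
# Back-of-the-envelope check of the improvements reported above.

old_thickness_nm = 10.0
new_thickness_nm = 2.5
thickness_reduction = 1 - new_thickness_nm / old_thickness_nm

typical_lifetime_h = 80
new_lifetime_h = 500
lifetime_factor = new_lifetime_h / typical_lifetime_h

print(f"Thickness reduced by {thickness_reduction:.0%}")  # 75%
print(f"Lifetime improved by {lifetime_factor:.2f}x")     # 6.25x
```

The 500-hour result is in fact a bit better than "five times" the 80-hour baseline.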

And now that they’ve discovered these peculiar structures, the researchers want to learn more about how they form and influence amorphous film properties. That’s knowledge that could reveal other strategies for eliminating them—which not only could improve performance, says Wang, but also open new opportunities in other energy-related systems, such as catalysts, solar cells and batteries.

“These intermediates could be something very important that has been overlooked,” says Wang. “They could be a critical aspect that controls properties of the film.”


More information: Yanhao Yu et al. Metastable Intermediates in Amorphous Titanium Oxide: A Hidden Role Leading to Ultra-Stable Photoanode Protection, Nano Letters (2018). DOI: 10.1021/acs.nanolett.8b02559

 


Novel Nano-Materials for Quantum Electronics and more …


The use of redox-active organic molecules and magnetic metal ions as molecular building blocks for materials represents a new strategy towards novel types of 2D materials exhibiting both high electronic conductivity and magnetic order. Credit: Kasper Steen Pedersen and We Love People.

An international team led by Assistant Professor Kasper Steen Pedersen, DTU Chemistry, has synthesized a novel nano material with electrical and magnetic properties making it suitable for future quantum computers and other applications in electronics.

Chromium-chloride-pyrazine (chemical formula CrCl2(pyrazine)2) is a layered material that serves as a precursor for a so-called 2-D material. In principle, a 2-D material has a thickness of just a single molecule, and this often leads to properties very different from those of the same material in a normal 3-D version; not least, the electrical properties will differ. While in a 3-D material electrons are able to move in any direction, in a 2-D material they are restricted to moving horizontally—as long as the wavelength of the electron is longer than the thickness of the 2-D layer.
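That confinement criterion can be illustrated with a rough calculation (a sketch assuming a free electron at room temperature; the 1 nm layer thickness is an assumed, order-of-magnitude figure, not a value from the study):

```python
import math

# Electrons behave two-dimensionally when their de Broglie wavelength
# exceeds the layer thickness. Constants are standard textbook values.

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg
k_B = 1.381e-23  # Boltzmann constant, J/K
T = 300.0        # room temperature, K

# Thermal momentum of a free electron: p = sqrt(3 * m * k_B * T)
p = math.sqrt(3 * m_e * k_B * T)
wavelength_nm = h / p * 1e9

layer_thickness_nm = 1.0  # assumed single-molecule layer thickness

print(f"de Broglie wavelength ~ {wavelength_nm:.1f} nm")
print("2-D behavior expected:", wavelength_nm > layer_thickness_nm)
```

The thermal wavelength comes out at several nanometers, comfortably longer than a single-molecule layer, which is why such layers confine electrons to two dimensions.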

Organic/inorganic hybrid

Graphene is the most well-known 2-D material. Graphene consists of carbon atoms arranged in a lattice structure, which yields its remarkable strength. Since the first synthesis of graphene in 2004, hundreds of other 2-D materials have been synthesized, some of which may be candidates for quantum electronics applications. However, the novel material is based on a very different concept. While the other candidates are all inorganic—just like graphene—chromium-chloride-pyrazine is an organic/inorganic hybrid material.

“The material marks a new type of chemistry, in which we are able to replace various building blocks in the material and thereby modify its physical and chemical properties. This cannot be done in graphene. For example, one can’t choose to replace half the carbon atoms in graphene with another kind of atom. Our approach allows designing properties much more accurately than known in other 2-D materials,” Kasper Steen Pedersen explains.


Besides its electrical properties, the magnetic properties of chromium-chloride-pyrazine can also be accurately designed. This is especially relevant to “spintronics.”

Watch a video from the Technical University of Denmark (DTU) on ‘Nanomaterials for Printed Electronics’

 

“While normal electronics utilizes only the charge of the electrons, spintronics also uses the electron spin—which is a quantum mechanical property. This is highly interesting for quantum computing applications. Therefore, development of nano-scale materials which are both conducting and magnetic is most relevant,” Kasper Steen Pedersen notes.

A new world of 2-D materials

Besides quantum computing, chromium-chloride-pyrazine may be of interest in future superconductors, catalysts, batteries, fuel cells, and electronics in general.

Companies are not keen to begin producing the material right away, the researcher stresses: “Not yet, at least! This is still fundamental research. Since we are suggesting a material synthesized from an entirely novel approach, a number of questions remain unanswered. For instance, we are not yet able to determine the degree of stability of the material in various applications. However, even if chromium-chloride-pyrazine should for some reason prove unfit for the various possible applications, the new principles behind its synthesis will still be relevant. This is the door to a new world of more advanced 2-D materials opening up.”


More information: Kasper S. Pedersen et al, Formation of the layered conductive magnet CrCl2(pyrazine)2 through redox-active coordination chemistry, Nature Chemistry (2018). DOI: 10.1038/s41557-018-0107-7

Read more at: https://phys.org/news/2018-09-nano-material-quantum-electronics.html#jCp

“All About Renewable Energy” Read Genesis Nanotech Nano-News Online


 



Articles Like:

“A Fuel Cell / Electric Bus – What You Need to Know” + More …

Genesis Nanotech Online Nano-News

 

Forbes on Energy: Two Ways Energy Storage Will Be A True Market Disruptor In The U.S. Power Sector


Post written by

Eric Gimon

Eric Gimon is a Senior Fellow for Energy Innovation, and works on the firm’s America’s Power Plan project.

The term “market disruptor” is seemingly thrown around for every new technology with promise, but the label will prove apt when it comes to energy storage and U.S. power markets.

New U.S. energy storage projects make solar power competitive against existing coal and new natural gas generation, and could soon displace these power market incumbents. Meanwhile, projects in Australia and Germany show how energy storage can completely reshape power market economics and generate revenue in unexpected ways.

In part one of this series, we discussed the three ways energy storage can tap economic opportunities in U.S. organized power markets. Now in part two of the series, let’s explore how storage will disrupt power markets as more and more capacity comes online.

New projects in Colorado and Nevada embody “market disruption”

True market disruption happens when existing or incumbent technologies can improve their performance or costs only incrementally (and industries focus on achieving those incremental improvements), while an entirely new technology enters the market with capabilities incumbents can’t dream of and exponentially falling costs incumbents can’t approach.

As energy storage continues getting cheaper, it will increasingly out-compete other resources and change the mix of resources that run the grid.  Recent contracts for new solar-plus-storage projects signed by Xcel Energy in Colorado and NV Energy in Nevada will allow solar production to extend past sunset and into the evening peak demand period, making it competitive against existing fossil fuel resources and new natural gas.

In fact, energy storage can increasingly replace inefficient (and often dirty) peaker plants and gas plants maintained for reliability.  This trend isn’t limited to utility-scale power plants – behind the meter (i.e., small-scale or residential) energy storage surged in Q2 2018, installing more capacity than front-of-meter storage for the first time.

Chart: U.S. energy storage deployment by quarter, 2013–2018. Source: Wood Mackenzie Power & Renewables.

Energy storage’s economic edge will accelerate in the future. Bloomberg New Energy Finance forecasts utility-scale battery system costs will fall from $700 per kilowatt-hour (kWh) in 2016 to less than $300/kWh in 2030, drawing $103 billion in investment, and doubling in market size six times by 2030.
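Those forecast figures imply roughly the following rates (an illustrative calculation from the numbers quoted above, not from the BNEF report itself):

```python
# Implied rates behind the cost and market-size forecast cited above.

cost_2016 = 700  # $/kWh
cost_2030 = 300  # $/kWh
years = 2030 - 2016

# Constant annual rate of decline that takes $700/kWh to $300/kWh
annual_decline = 1 - (cost_2030 / cost_2016) ** (1 / years)

# "Doubling in market size six times" means a 2^6 = 64x larger market
market_growth = 2 ** 6

print(f"Implied cost decline: ~{annual_decline:.1%} per year")  # ~5.9%
print(f"Market size multiple by 2030: {market_growth}x")        # 64x
```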

Tesla’s Australian “Big Battery” shows how storage will upend the existing order

But energy storage won’t disrupt power markets simply because of its continued cost declines versus resources it could replace, but also because of its different deployment and dispatch characteristics.  It won’t merely replace peaker plants or substation upgrades, it will modify how other resources operate and are considered. This will require a change in regulations at all scales for the power grid, as well as in power market rules.

Consider the Hornsdale Power Reserve in South Australia, otherwise known as the “Tesla Big Battery.” This 100 megawatt (MW)/129 megawatt-hour (MWh) project is the largest lithium-ion battery in the world. Through South Australian government grants and payments, it contributes to grid stability and ancillary services (also known as “FCAS”) while allowing the owners of the associated Hornsdale Wind Farm to arbitrage energy prices. A recent report from the Australian Energy Market Operator shows that in Q1 2018, the average arbitrage (price difference between charging and discharging) for this project was AU$90.56/MWh.

This exemplifies “value stacking,” where the Hornsdale Power Reserve takes advantage of all three ways storage can earn revenue in organized markets with a hybrid compensation model under its single owner/operator (French company Neoen). Hornsdale is already impacting FCAS prices in Australia, with prices tumbling 57% in Q1 2018 from Q4 2017.
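A rough sketch of what that arbitrage figure means in revenue terms for a project of Hornsdale’s size (assuming one full charge/discharge cycle per day and ignoring round-trip efficiency losses and depth-of-discharge limits, so this is an upper bound, not a reported result):

```python
# Upper-bound arbitrage revenue sketch from the figures quoted above.

capacity_mwh = 129               # Hornsdale Power Reserve energy capacity
avg_arbitrage_aud_per_mwh = 90.56  # average Q1 2018 charge/discharge spread

revenue_per_cycle = capacity_mwh * avg_arbitrage_aud_per_mwh
quarterly_revenue = revenue_per_cycle * 90  # ~90 days per quarter

print(f"Per-cycle arbitrage revenue: ~AU${revenue_per_cycle:,.0f}")
print(f"Quarterly ceiling at 1 cycle/day: ~AU${quarterly_revenue:,.0f}")
```

Even this crude ceiling shows why arbitrage is only one of the revenue streams being stacked; FCAS payments are a major part of the project’s economics.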

Chart: AEMO frequency control ancillary services markets, 2016–2018. Source: Australian Energy Market Operator.

Value stacking for reliability contracts plus market-based revenues (or “Storage as a Transmission Asset”) is also being actively debated in California’s CAISO market.

Because energy storage provides countless benefits at both the local and regional level, in ever-more overlapping combinations, it will create contentious debates and innumerable headaches for power market regulators in coming years.   In 2014, observers were treated to a family feud, as Luminant (generation utility) and TXU (retail power provider) argued against battery storage being installed by Oncor (poles-and-wires utility) for competitive reasons.  More recently, Luminant has argued against AEP building energy storage to relieve transmission bottlenecks to remote communities in southwest Texas because they are “tantamount to peak-shaving and will result in the distortion of competitive market signals.” In California, policy makers are struggling with how to adjust rate structures so behind-the-meter storage projects can meet the state’s emissions reduction goals tied to the subsidies they receive.

Meanwhile, batteries are being combined with more than transmission, wind, and solar projects. In Germany, a recently closed coal-fired power station is being used simultaneously as a grid-tied storage facility and a “live replacement parts store” for third-generation electric vehicle battery packs by Mercedes-Benz Energy. German automotive supplier Bosch and utility EnBW have installed a storage battery at EnBW’s coal-fired Heilbronn plant to supply balancing power to the market when demand outstrips supply.

Today, inflexible coal plants often receive these types of “uplift” payments when they are committed by power markets to meet demand or for reliability reasons, but they can only offer resources in much bigger chunks than economic dispatch would warrant. This puts billions of dollars at stake in the eastern U.S., where power market operator PJM is considering dramatic changes in rules to pay higher prices to these inflexible plants. What if, in the future, these plants were required to install or sponsor a certain amount of energy storage capacity in order to set marginal power market prices?

Even today, hybrid combinations of storage and other resources are changing the game in subtle but important ways. Mark Ahlstrom of the Energy Systems Integration Group recently outlined how FERC’s Order 841 allows all kinds of resources to change the way they interact with wholesale power markets (their participation model) in unforeseen and unpredictable ways. For example, the end-point of a point-to-point high-voltage DC transmission line could use a storage participation model to bid or offer into power markets. Some demand response resources are already combining with storage today “to harness the better qualities of each resource, and allow customers to tap a broader range of cost-reduction and revenue-generating capabilities.”

A recent projection from The Brattle Group underscores this point, forecasting that Order 841 could make energy storage projects profitable starting at 7 GW/20 GWh of deployment, with up to 50 GW of energy storage projects “participating in grid-level energy, ancillary service, and capacity markets.”

Power market disruption is the only guarantee

Eventually the hybrid storage model may become a universal template for all resources, creating additional revenue through improved flexibility.  For example, a hybrid storage-natural gas plant could provide power reserves during a cold start – even if a gas plant was not running, reserve power can come from energy storage while the gas turbine fires up.

If fixed start times for some resources, constraints that are accepted facts of life today, could be eliminated by hybridizing with storage, then standard market design might start requiring or incentivizing such upgrades to reduce the mathematical complexity and improve the precision of the algorithms that dispatch power plants and set prices.

As utility-scale batteries continue their relentless cost declines, it’s hard to imagine exactly what the future might hold, but energy storage is guaranteed to disrupt power markets – meaning this sector warrants close attention from savvy investors.

The reality of quantum computing could be … just three years away


Quantum computing has moved out of the realm of theoretical physics and into the real world, but its potential and promise are still years away.

Onstage at TechCrunch Disrupt SF, a powerhouse in the world of quantum research and a young upstart in the field presented visions for the future of the industry that illustrated both how far the industry has come and how far the technology has to go.

For both Dario Gil, the chief operating officer of IBM Research and the company’s vice president of artificial intelligence and quantum computing, and Chad Rigetti, a former IBM researcher who founded Rigetti Computing and serves as its chief executive, the moment that a quantum computer will be able to perform operations better than a classical computer is only three years away.

“[It’s] generating a solution that is better, faster or cheaper than you can do otherwise,” said Rigetti. “Quantum computing has moved out of a field of research into now an engineering discipline and an engineering enterprise.”

Considering the more than 30 years that IBM has been researching the technology and the millions (or billions) that have been poured into developing it, even seeing an end of the road is a victory for researchers and technologists.

Achieving this goal, for all of the brainpower and research hours that have gone into it, is hardly academic.

The Chinese government is building a $10 billion National Laboratory for Quantum Information in Anhui province, west of Shanghai, slated to open in 2020. Meanwhile, U.S. public research funding for quantum computing runs at around $200 million per year.

Source: Patin Informatics via Bloomberg News.

One of the reasons why governments, especially, are so interested in the technology is its potential to completely remake the cybersecurity landscape. Some technologists argue that quantum computers will have the potential to crack any type of encryption technology, opening up all of the networks in the world to potential hacking.


According to experts, quantum computers will be able to create breakthroughs in many of the most complicated data processing problems, leading to the development of new medicines, building molecular structures and doing analysis going far beyond the capabilities of today’s binary computers.

Of course, quantum computing is so much more than security. It will enable new ways of doing things we can’t even imagine because we have never had this much pure compute power. Think about artificial and machine learning or drug development; any type of operation that is compute-intensive could benefit from the exponential increase in compute power that quantum computing will bring.

Security may be the Holy Grail for governments, but both Rigetti and Gil say that the industrial chemical business is where the potentially radical transformation of a market will appear first.

What is quantum computing anyway?

To understand quantum computing it helps to understand the principles of the physics behind it.

As Gil explained onstage (and on our site), quantum computing depends on the principles of superposition, entanglement and interference.


Superposition is the notion that physicists can observe multiple potential states of a particle. “If you flip a coin it is one of two states,” said Gil, meaning that there’s a single outcome that can be observed. But if someone were to spin a coin, they’d see a number of potential outcomes.

Once you’ve got one particle that’s being observed, you can add another and pair them thanks to a phenomenon called quantum entanglement. “If you have two coins where each one can be in superposition, then measurements can be taken” of the difference between both.

Finally, there’s interference, where the two particles can be manipulated by an outside force to change them and create different outcomes.

“In classical systems you have these bits of zeros and ones and the logical operations of the ands and the ors and the nots,” said Gil. “The classical computer is able to process the logical operations of bits expressed in zeros and ones.”

“In an algorithm you put the computer in a superpositional state,” Gil continued. “You can take the amplitudes and states and interfere them, and the algorithm is the thing that interferes… I can have many, many states representing different pieces of information, and then I can interfere with it to get these data.”
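Two of the three principles Gil describes, superposition and interference, can be sketched with a toy single-qubit state-vector simulation (a plain-Python illustration, not IBM’s or Rigetti’s actual tooling; entanglement requires a second qubit and is omitted here):

```python
import math

# Toy single-qubit simulation. A state is a pair of amplitudes [a, b]
# for the outcomes |0> and |1>.

def hadamard(state):
    """Apply the Hadamard gate, the standard way to create a superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Measurement probabilities for outcomes |0> and |1>."""
    return [abs(amp) ** 2 for amp in state]

zero = [1.0, 0.0]            # a definite state, like a coin lying heads-up

superposed = hadamard(zero)  # the "spinning coin": both outcomes possible
print(probabilities(superposed))  # ~[0.5, 0.5]

# Interference: a second Hadamard makes the amplitudes recombine, and the
# qubit returns deterministically to |0> instead of staying random.
recombined = hadamard(superposed)
print(probabilities(recombined))  # ~[1.0, 0.0]
```

The second step is the essence of Gil’s point: an algorithm arranges amplitudes so that wrong answers cancel and the desired answer is reinforced.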

These operations are incredibly hard to sustain. In the early days of quantum computing research, superconducting devices had only about one nanosecond before a qubit decayed into a traditional bit of data. Those coherence times have since increased to between 50 and 100 microseconds, which has enabled IBM and Rigetti to open up their platforms so researchers and others can conduct experiments (more on that later).

The physical quantum computer

As one can imagine, dealing with quantum particles is a delicate business, so the computing operations have to be carefully controlled. At the base of the machine is what basically amounts to a huge freezer that maintains a temperature of 15 millikelvin inside the device—near absolute zero and 180 times colder than the temperature of interstellar space.
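That comparison checks out against the roughly 2.7 K cosmic microwave background temperature of deep space (a quick illustrative calculation):

```python
# Verifying the "180 times colder than interstellar space" comparison.

fridge_K = 0.015    # 15 millikelvin, the quantum computer's operating point
deep_space_K = 2.7  # cosmic microwave background temperature

ratio = deep_space_K / fridge_K
print(f"{ratio:.0f}x colder than interstellar space")  # 180x
```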

“These qubits are very delicate,” said Gil. “Anything from the outside world can couple to it and destroy its state and one way to protect it is to cool it.”

Wiring for the quantum computer is made of superconducting coaxial cables. The inputs to the computer are microwave pulses that manipulate the particles, creating a signal that is then interpreted by the computer’s operators.

Those operators used to require a degree in quantum physics. But both IBM and Rigetti have been working on developing tools that can enable a relative newbie to use the tech.

Quantum computing in the “cloud”

Even as companies like IBM and Rigetti bring the cost of quantum computing down from tens of millions of dollars to roughly $1 million to $2 million, these tools likely will never become commodity hardware that a consumer buys to use as a personal computer.

Rather, as with most other computing these days, quantum computing power will be provided as a service to users.

Indeed, Rigetti announced onstage a new hybrid computing platform that can provide computing services to help the industry both reach quantum advantage — that tipping point at which quantum is commercially viable — and to enable industries to explore the technologies to acclimatize to the potential ways in which typical operations could be disrupted by it.


“A user logs on to their own device and uses our software development kit to write a quantum application,” said Rigetti. “That program is sent to a compiler and kicks off an optimization kit that runs on a quantum and a classical computer… This is the architecture that’s needed to achieve quantum advantage.”

Both IBM and Rigetti — and a slew of other competitors — are preparing users for accessing quantum computing opportunities on the cloud.

IBM has more than a million chips performing millions of quantum operations requested by users in over 100 countries around the world.

“In a cloud-first era I’m not sure the economic forces will be there that will drive us to develop the miniaturized environment in the laptop,” Rigetti said. But the ramifications of the technology’s commercialization will be felt by everyone, everywhere.

“Quantum computing is going to change the world and it’s all going to come in our lifetime, whether that’s two years or five years,” he said. “Quantum computing is going to redefine every industry and touch every market. Every major company will be involved in some capacity in that space.”

Rice University engineers develop system to remove contaminants from water



Engineer Qilin Li at Rice University’s lab is building a treatment system that can be tuned to selectively pull toxins from wastewater from factories, sewage systems and oil and gas wells, as well as drinking water. The researchers said their technology will cut costs and save energy compared to conventional systems.

“Traditional methods to remove everything, such as reverse osmosis, are expensive and energy intensive,” said Li, the lead scientist and co-author of a study about the new technology in the American Chemical Society journal Environmental Science & Technology. “If we figure out a way to just fish out these minor components, we can save a lot of energy.”

The heart of Rice’s system is a set of novel composite electrodes that enable capacitive deionization. The charged, porous electrodes selectively pull target ions from fluids passing through the maze-like system. When the pores get filled with toxins, the electrodes can be cleaned, restored to their original capacity and reused.

“This is part of a broad scope of research to figure out ways to selectively remove ionic contaminants,” said Li, a professor of civil and environmental engineering and of materials science and nanoengineering. “There are a lot of ions in water. Not everything is toxic. For example, sodium chloride (salt) is perfectly benign. We don’t have to remove it unless the concentration gets too high.”

In tests, an engineered coating of resin, polymer and activated carbon removed and trapped harmful sulfate ions, and other coatings can be used in the same platform to target other contaminants. Illustration by Kuichang Zuo

The proof-of-principle system developed by Li’s team removed sulfate ions. The system’s electrodes were coated with activated carbon, which was in turn coated by a thin film of tiny resin particles held together by quaternized polyvinyl alcohol. When sulfate-contaminated water flowed through a channel between the charged electrodes, sulfate ions were attracted by the electrodes, passed through the resin coating and stuck to the carbon.

Tests in the Rice lab showed the positively charged coating on the cathode preferentially captured sulfate ions over salt at a ratio of more than 20 to 1. The electrodes retained their properties over 50 cycles. “But in fact, in the lab, we’ve run the system for several hundred cycles and I don’t see any breaking or peeling of the material,” said Kuichang Zuo, lead author of the paper and a postdoctoral researcher in Li’s lab. “It’s very robust.”

In Rice’s new water-treatment platform, electrode coatings can be swapped out to allow the device to selectively remove a range of contaminants from wastewater, drinking water and industrial fluids. Illustration by Kuichang Zuo

“The true merit of this work is not that we were able to selectively remove sulfate, because there are many other contaminants that are perhaps more important,” Li said. “The merit is that we developed a technology platform that we can use to target other contaminants as well by varying the composition of the electrode coating.”

The research was supported by the Rice-based National Science Foundation-backed Center for Nanotechnology-Enabled Water Treatment, the Welch Foundation and the Shanghai Municipal International Cooperation Foundation.

Nidec Motor Corp. appoints CEO

Nidec Motor Corporation (NMC) named Henk van Duijnhoven as its CEO and global business leader of ACIM (Appliances, Commercial and Industrial Motors). Van Duijnhoven was most recently a partner and managing director of The Boston Consulting Group where he was responsible for business turnaround, mergers and acquisitions, and strategy planning for clients in the industrial and medtech markets. He holds a Bachelor of Science degree from the College of Automotive Engineering and a Master of Business Administration from the Massachusetts Institute of Technology.

Woodard & Curran names new business unit leader

Woodard & Curran named Peter Nangeroni as its new industrial and commercial strategic business unit leader. He brings experience managing large, multidisciplinary projects for industrial clients with emphasis on generating positive environmental outcomes, return on investment and improved risk management. He has been with Woodard & Curran for 13 years in various roles, most recently as director of technical practices. He takes over for the long-time leader of the business unit, Mike Curato, who is retiring after 11 years in the role and 20 with the firm.

Nangeroni is a Professional Engineer with a degree in civil engineering from Tufts University and more than 35 years of experience working with clients on engineering and construction management projects. In his new role, he will oversee staffing, business development and project execution at a strategic level for the industrial and commercial strategic business unit, which focuses on water treatment, manufacturing and process utilities for clients in a wide range of industrial sectors.

MIT researchers 3-D print colloidal crystals – For the Scale-Up of optical sensors, color displays, and light-guided electronics + YouTube Video



3-D-printed colloidal crystals viewed under a light microscope. Image: Felice Frankel

Technique could be used to scale-up self-assembled materials for use as optical sensors, color displays, and light-guided electronics.

MIT engineers have united the principles of self-assembly and 3-D printing using a new technique, which they highlight today in the journal Advanced Materials.

Using their direct-write colloidal assembly process, the researchers can build centimeter-high crystals, each made from billions of individual colloids, defined as particles that are between 1 nanometer and 1 micrometer across.

“If you blew up each particle to the size of a soccer ball, it would be like stacking a whole lot of soccer balls to make something as tall as a skyscraper,” says study co-author Alvin Tan, a graduate student in MIT’s Department of Materials Science and Engineering. “That’s what we’re doing at the nanoscale.”

The researchers found a way to print colloids such as polymer nanoparticles in highly ordered arrangements, similar to the atomic structures in crystals. They printed various structures, such as tiny towers and helices, that interact with light in specific ways depending on the size of the individual particles within each structure.

Nanoparticles dispensed from a needle onto a rotating stage, creating a helical crystal containing billions of nanoparticles. (Credit: Alvin Tan)

The team sees the 3-D printing technique as a new way to build larger-scale self-assembled materials that leverage the novel properties of nanocrystals, for uses such as optical sensors, color displays, and light-guided electronics.

“If you could 3-D print a circuit that manipulates photons instead of electrons, that could pave the way for future applications in light-based computing, where devices can be faster and more energy efficient,” Tan says.

Tan’s co-authors are graduate student Justin Beroz, assistant professor of mechanical engineering Mathias Kolle, and associate professor of mechanical engineering A. John Hart.

Out of the fog

Colloids are any large molecules or small particles, typically measuring between 1 nanometer and 1 micrometer in diameter, that are suspended in a liquid or gas. Common examples of colloids are fog, which is made up of soot and other ultrafine particles dispersed in air, and whipped cream, which is a suspension of air bubbles in heavy cream. The particles in these everyday colloids are completely random in their size and the ways in which they are dispersed through the solution.

If uniformly sized colloidal particles are driven together via evaporation of their liquid solvent, causing them to assemble into ordered crystals, it is possible to create structures that, as a whole, exhibit unique optical, chemical, and mechanical properties. These crystals can exhibit properties similar to interesting structures in nature, such as the iridescent cells in butterfly wings, and the microscopic, skeletal fibers in sea sponges.

So far, scientists have developed techniques to evaporate and assemble colloidal particles into thin films to form displays that filter light and create colors based on the size and arrangement of the individual particles. But until now, such colloidal assemblies have been limited to thin films and other planar structures.

“For the first time, we’ve shown that it’s possible to build macroscale self-assembled colloidal materials, and we expect this technique can build any 3-D shape, and be applied to an incredible variety of materials,” says Hart, the senior author of the paper.

Building a particle bridge

The researchers created tiny three-dimensional towers of colloidal particles using a custom-built 3-D-printing apparatus consisting of a glass syringe and needle, mounted above two heated aluminum plates. The needle passes through a hole in the top plate and dispenses a colloid solution onto a substrate attached to the bottom plate.

The team evenly heats both aluminum plates so that as the needle dispenses the colloid solution, the liquid slowly evaporates, leaving only the particles. The bottom plate can be rotated and moved up and down to manipulate the shape of the overall structure, similar to how you might move a bowl under a soft ice cream dispenser to create twists or swirls.

Beroz says that as the colloid solution is pushed through the needle, the liquid acts as a bridge, or mold, for the particles in the solution. The particles “rain down” through the liquid, forming a structure in the shape of the liquid stream. After the liquid evaporates, surface tension between the particles holds them in place, in an ordered configuration.

As a first demonstration of their colloid printing technique, the team worked with solutions of polystyrene particles in water, and created centimeter-high towers and helices. Each of these structures contains 3 billion particles. In subsequent trials, they tested solutions containing different sizes of polystyrene particles and were able to print towers that reflected specific colors, depending on the individual particles’ size.
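The figure of 3 billion particles per centimeter-high tower can be sanity-checked with some packing arithmetic. The particle diameter and packing fraction below are assumptions chosen for illustration (the article does not state them), not figures from the paper:

```python
import math

# Hedged sanity check: assuming (not stated in the article) ~1 µm
# polystyrene spheres packed at the FCC close-packing fraction of 0.74,
# what tower width does a count of 3 billion particles imply?
n_particles = 3e9
d = 1e-6                                  # assumed particle diameter, m
phi = 0.74                                # FCC close-packing fraction

v_particle = math.pi / 6 * d**3           # volume of one sphere, m^3
v_tower = n_particles * v_particle / phi  # total tower volume, m^3

height = 1e-2                             # 1 cm tall tower, m
area = v_tower / height                   # implied cross-sectional area, m^2
width = math.sqrt(area)                   # side of an equivalent square, m

print(f"tower volume ≈ {v_tower * 1e9:.2f} mm^3")
print(f"equivalent square cross-section ≈ {width * 1e6:.0f} µm on a side")
```

Under these assumptions the implied tower is a few hundred micrometers across, which is consistent with a needle-dispensed structure.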

“By changing the size of these particles, you drastically change the color of the structure,” Beroz says. “It’s due to the way the particles are assembled, in this periodic, ordered way, and the interference of light as it interacts with particles at this scale. We’re essentially 3-D-printing crystals.”
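The size-to-color relationship Beroz describes follows from Bragg diffraction off the crystal's lattice planes. A minimal sketch, assuming FCC-packed polystyrene spheres in air with illustrative (not reported) diameters and a simple effective-index mixing rule:

```python
import math

# Hedged sketch: estimate the reflected (stop-band) wavelength of a dried
# FCC colloidal crystal via Bragg's law, lambda ≈ 2 * d111 * n_eff.
# The diameters and mixing rule are assumptions, not values from the article.
def bragg_wavelength_nm(diameter_nm, n_particle=1.59, n_void=1.0, phi=0.74):
    # (111) plane spacing for FCC-packed spheres of this diameter
    d111 = diameter_nm * math.sqrt(2.0 / 3.0)
    # volume-weighted average of the squared refractive indices
    n_eff = math.sqrt(phi * n_particle**2 + (1 - phi) * n_void**2)
    return 2 * d111 * n_eff

for d in (180, 220, 260):  # hypothetical polystyrene diameters, nm
    print(f"{d} nm spheres -> reflect ≈ {bragg_wavelength_nm(d):.0f} nm")
```

Sweeping the assumed diameter from 180 to 260 nm walks the stop band from blue toward red, which matches the qualitative behavior the researchers report.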

The team also experimented with more exotic colloidal particles, namely silica and gold nanoparticles, which can exhibit unique optical and electronic properties. They printed millimeter-tall towers made from 200-nanometer diameter silica nanoparticles, and 80-nanometer gold nanoparticles, each of which reflected light in different ways.

“There are a lot of things you can do with different kinds of particles ranging from conductive metal particles to semiconducting quantum dots, which we are looking into,” Tan says. “Combining them into different crystal structures and forming them into different geometries for novel device architectures, I think that would be very effective in fields including sensing, energy storage, and photonics.”

This work was supported, in part, by the National Science Foundation, the Singapore Defense Science Organization Postgraduate Fellowship, and the National Defense Science and Engineering Graduate Fellowship Program.

 

MIT: Fish-eye lens may entangle pairs of atoms – a promising vehicle for the building blocks of quantum computers



James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges.

Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

“We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

“This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

A circular path

Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.

In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that it curves rather than bends light, guiding light in perfect circles within the lens.
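Maxwell's construction corresponds to a refractive index that falls off smoothly from the center as n(r) = n0 / (1 + (r/R)^2). A minimal sketch with illustrative values (choosing n0 = 2 is a common convention, since the index then reaches 1 at the rim r = R, matching vacuum):

```python
# Hedged sketch of Maxwell's fish-eye index profile,
# n(r) = n0 / (1 + (r/R)^2): densest (highest index) at the center,
# gradually thinning toward the edges. n0 and R are illustrative only.
def fisheye_index(r, n0=2.0, R=1.0):
    return n0 / (1.0 + (r / R) ** 2)

for r in (0.0, 0.5, 1.0, 2.0):
    print(f"r = {r:>3} R  ->  n = {fisheye_index(r):.3f}")
```

It is this smooth gradient, rather than an abrupt interface like the water surface in the straw example, that continuously curves rays into closed circles.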

In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

“None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

Leonhardt, in reporting his results, made a brief mention as to whether the fish-eye lens’ single-point focus might be useful in precisely entangling pairs of atoms at opposite ends of the lens.

“Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

Playing photon ping-pong

To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.

They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

“The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
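The ping-pong picture can be sketched with a toy single-excitation model: one photon's worth of energy coherently shared between two coupled two-level atoms. The coupling strength and the reduction to two levels are illustrative simplifications, not the paper's full treatment:

```python
import math

# Hedged toy model: in the single-excitation picture the atomic amplitudes
# oscillate as c1 = cos(g*t), c2 = sin(g*t), and the degree of entanglement
# (the concurrence) is 2*|c1*c2| = |sin(2*g*t)|. The coupling g is an
# effective parameter standing in for the photon exchange through the lens.
def amplitudes(gt):
    return math.cos(gt), math.sin(gt)

def concurrence(gt):
    c1, c2 = amplitudes(gt)
    return 2 * abs(c1 * c2)

for frac, label in [(0.00, "photon on atom 1"),
                    (0.25, "shared: maximally entangled"),
                    (0.50, "photon on atom 2")]:
    gt = frac * math.pi
    c1, c2 = amplitudes(gt)
    print(f"g*t = {frac:.2f}π  |c1|²={c1**2:.2f}  |c2|²={c2**2:.2f}  "
          f"concurrence={concurrence(gt):.2f}  ({label})")
```

Halfway through the exchange neither atom "owns" the photon, and the concurrence peaks at 1: exactly the shared state Perczel describes.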

Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

“If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

“You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

Fishy secrets

In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

“We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”
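The diffraction limit at issue here can be put in rough numbers with the textbook Abbe criterion; the wavelength and aperture values below are illustrative, not taken from the study:

```python
# Hedged back-of-envelope: the Abbe criterion says a conventional lens
# cannot focus light to a spot much smaller than lambda / (2 * NA),
# where NA is the numerical aperture. Values are illustrative only.
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2.0 * numerical_aperture)

for na in (0.5, 1.0, 1.4):
    spot = abbe_limit_nm(600, na)
    print(f"NA = {na}: focal spot ≳ {spot:.0f} nm for 600 nm light")
```

A "perfect lens" would beat this bound; the MIT calculation found the recollection spot stays comparable to the wavelength, consistent with the bound holding.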

Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

“The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

Nanoscale crystals enable a new design of digital devices – greater computational power packed into a smaller space


Credit: CC0 Public Domain

Curtin researchers have developed a tiny electrical circuit that may enable an entirely new design of digital devices.

The circuit is made from copper crystals that are grown and electrically wired at the nanoscale, and it may lead to digital devices that pack greater computational power into a smaller space.

In a paper published today in the leading nanotechnology journal ACS Nano, researchers used a single nanoparticle to create an ensemble of different diodes – a basic electronic component of most modern electronics, which functions by directing the flow of electric current.

Lead researcher Ph.D. candidate Yan Vogel, from Curtin’s School of Molecular and Life Sciences and the Curtin Institute for Functional Molecules and Interfaces, said the research team used a single copper nanoparticle to compress into a single physical entity what would normally require many individual circuit elements.

Mr Vogel said the research showed that each nanoparticle had an in-built range of electrical signatures and had led to something akin to ‘one particle, many diodes’, thereby opening up the concept of single-particle circuitry.

Mr Vogel said the breakthrough would enable new concepts and methods in the design of miniaturised circuitry.

“Instead of wiring-up a large number of different sorts of diodes, as is done now, we have shown that the same outcome is obtained by many wires landing accurately over a single physical entity, which in our case is a copper nanocrystal,” Mr Vogel said.
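What "directing the flow of electric current" means quantitatively is usually expressed as a rectification ratio: the forward current divided by the magnitude of the reverse current at the same voltage. A minimal sketch using the textbook Shockley diode equation, with generic parameters rather than values measured in this work:

```python
import math

# Hedged illustration of a diode's rectification ratio via the Shockley
# equation, I = Is * (exp(V / (n * Vt)) - 1). The saturation current and
# ideality factor below are generic textbook-style values, not numbers
# from the ACS Nano paper.
def diode_current(v, i_sat=1e-12, n=1.5, v_t=0.02585):
    return i_sat * (math.exp(v / (n * v_t)) - 1.0)

def rectification_ratio(v, **kw):
    # forward current at +V divided by reverse current magnitude at -V
    return diode_current(v, **kw) / abs(diode_current(-v, **kw))

for v in (0.2, 0.4, 0.6):
    print(f"±{v} V  ->  rectification ratio ≈ {rectification_ratio(v):.3g}")
```

In this picture, the Curtin result amounts to different facets of one nanocrystal behaving like diodes with different effective parameters, hence switchable rectification ratios from a single particle.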

Team leader Dr. Simone Ciampi, also from Curtin’s School of Molecular and Life Sciences and the Curtin Institute for Functional Molecules and Interfaces, said the new research built on work he published with his Curtin colleague Dr. Nadim Darwish in 2017, in which they created a diode out of a single molecule approximately 1 nanometer in size, and would help continue the downsizing trend in electronic devices.

“Last year, we made a breakthrough in terms of the size of the diode and now we are building on that work by developing more tuneable diodes, which can potentially be used to make more powerful and faster-thinking electronic devices,” Dr. Ciampi said.

“Current technology is reaching its limit and molecular or nanoparticle diodes and transistors are the only way that we can continue the improvement of computer performances. We are trying to contribute to the development of the inevitable next generation of electronics.”

This research was co-authored by Dr. Darwish and Ms. Jinyang Zhang, also from Curtin’s School of Molecular and Life Sciences and the Curtin Institute for Functional Molecules and Interfaces.


More information: Yan B. Vogel et al. Switching of Current Rectification Ratios within a Single Nanocrystal by Facet-Resolved Electrical Wiring, ACS Nano (2018). DOI: 10.1021/acsnano.8b02934

 

A Look at Graphene-Polymer Composite Medical Implants


This article is based on a talk given by Professor Alexander Seifalian from NanoRegMed Ltd, UK, at the NANOMED conference hosted by the NANOSMAT Society in Manchester on the 26-28th June 2018. In it, he describes how his company is developing a series of medical implants made from a biocompatible graphene-polymer composite.

Special Contribution by: Liam Critchley

 

Regenerative medicine and tissue engineering have been around for a while now, but these fields continue to advance and are now utilizing many different types of nanomaterials. Alexander has created a wide range of prostheses, including a trachea, grafts for heart bypasses, tear ducts, ears, and noses using various materials; including graphene. There has been a need for many years to create grafts which have smaller diameters, are less prone to blockages and can be used in a human patient without it being rejected by the body.


Most of the biomaterials used in various prostheses have been around for many decades and still encounter problems. So, Alexander and his company have come up with a new range of materials involving graphene for these prosthesis applications.

There are not many areas of medical research where graphene is used, because graphene by itself can be toxic to humans if internalized. But this can be avoided by compositing graphene with other materials. Aside from its strength, graphene’s lightweight nature, antimicrobial properties, flexibility and corrosion resistance make it an ideal material for medical implants when it is formulated into biocompatible materials.

The materials developed by Alexander are composites of polycaprolactone (PCL) and graphene, and they can be tuned to be either biodegradable or non-biodegradable depending on the intended application. To make the material, they graft the graphene and then conjugate it to the polymers so that it sits within the polymer matrix, thus preventing it from being harmful to a patient. A critical aspect of why the materials work is that they integrate with the surrounding tissue and cells.

The fabricated materials are very strong: it takes 80 kilograms of force to break the composite. This strength can be improved further, but at the expense of the material’s viscoelasticity, which is required for many implant applications. It is also possible to create polycarbonate-graphene composites using this method, but a higher concentration of graphene is required, which again affects the viscoelastic properties of the composite. These composite materials can also be 3D printed into variously shaped scaffolds loaded with stem cells.


Alexander has created many grafts with these materials, and they have been tested on mouse models. These grafts have been shown to grow cells, and the proliferated cells directly integrate with the tissue of interest to help with the growth of new tissue. This type of graft can also be loaded with nitric oxide (sometimes alongside other kinds of particles or biological matter) and has excellent potential for wound healing applications.


Alexander has also created artificial arteries using these polymer-graphene composites. It was also possible to conjugate antibodies (made from peptides) to the inside of the artery, which then becomes endothelialized under shear flow. The tunable nature of the composites has enabled Alexander to fabricate these pseudo-arteries with the same viscoelastic properties as natural arteries.

Because medical devices can take a while to reach commercialization, none of the products created from these composites is commercially available just yet. However, they show a lot of promise, and many have progressed successfully to clinical trials. One of the key aspects that makes this composite an exciting material is its tunability: the ratios can be altered so that it is flexible enough to be used as an artery, or made more rigid for external prostheses, such as the nose. This, coupled with the fact that the materials are biocompatible, makes it an interesting area to keep an eye on in the near future.

Sources:

• NANOMED 2018: http://www.nanomed.uk.com/

• NanoRegMed: http://www.nanoregmed.com/